Revisit The 17th Quantitative Finance Conference, originally presented 17th-19th November 2021
WORKSHOPS:
Machine Learning Models for the Interest Rates
by Alexander Sokol, Executive Chairman and Head of Quant Research, CompatibL
Agenda
- Principles of interest rate model construction
- Defining the model in continuous vs. discrete time
- Term rate vs. forward rate models
- Stochastic drivers
- State variables
- Arbitrage-free calibration in Q-measure
- Historical calibration in P-measure
- Calibrated vs. non-calibrated model parameters
- Introduction to variational autoencoders (VAE)
- The roles of Encoder and Decoder
- Introducing uncertainty in reconstruction
- Loss function and optimization loop (sketched in code after this agenda)
- Reconstruction with VAE
- Generation with VAE
- Handwritten digits VAE example
- Yield curve VAE example
- Machine learning models for the interest rates
- Changing the architecture from reconstruction to simulation
- Replacing SDE with neural network based Decoder
- Replacing calibration with neural network based Encoder
- Loss function and optimization loop
- Arbitrage-free calibration in Q-measure
- Historical calibration in P-measure
- One factor interest rate model example
- Multifactor interest rate model example
ESG & Climate Risk in Quantitative Finance
by Navin Rauniar, Advisory Partner focusing on LIBOR, ESG, Climate Risk & TCFD, HSBC
Introduction to ESG
- What is ‘E’, ‘S’ and ‘G’?
- Explaining Climate Risk, Sustainability, GHG and Net Zero
Global Regulatory Requirements for ESG Frameworks
- Latest update of regulatory requirements including Climate, Sustainability, Carbon & Net Zero
- Integrating into ESG Regulatory Frameworks
Overview of ESG and Climate products
- The Key Characteristics of ESG and Climate Products in the Current Market
- Matching the client's ESG return targets with the solutions required for hedging, structuring, etc.
ESG Products Design Framework: Aligning the Desired ESG Products with Market Strategies
- Key considerations for ESG product design
- Challenges and opportunities especially with ESG metrics
- Taxonomies for Investment Products
Managing ESG and Climate data – sourcing the right data sets
- Identifying data sources, historic and forward-looking
- Addressing the typical pain points and associated vendor solutions
Group Discussion, Case Studies & Market Opportunities
Volatility, Options Pricing and Modelling Stream & Panel
When the going gets rough — the tough get going
by Jesper Andreasen (Kwant Daddy), Global Head Of Quantitative Research, Saxo Bank
Modeling Volatility for Options on Crypto Assets
by Artur Sepp, Head of Systematic Solutions and Portfolio Construction, Sygnum Bank
- Specifics of crypto option markets and exchanges
- Challenges in applying traditional dynamic volatility models for arbitrage-free dynamics
- Log-normal volatility model with quadratic drift: robustness and analytics
- Applications to modeling implied volatilities of Bitcoin and Ethereum
Applying Markovian Projection for American Basket Options
by Lech Grzelak, Quantitative Analyst, Rabobank and Assistant Professor, TUDelft and
Juliusz Jabłecki, Divisional Head, Narodowy Bank Polski
Harvesting the FX Volatility Risk Premium with Python
by Saeed Amen, Founder, Cuemacro
Abstract:
In this talk, we shall discuss strategies for extracting the volatility risk premium in FX. We shall look at how various factors such as delta hedging impact our P&L, as well as assess how the P&L can change depending upon which part of the vol surface we are trading. We'll also talk about the open-source finmarketpy library used for generating the results, and we shall walk through the Python code used.
Recent Advances in VIX Modeling
by Julien Guyon, Senior Quant, Bloomberg L.P.
The class will cover:
- Optimal bounds for VIX futures given S&P 500 smiles
- Robust hedging of derivatives on S&P 500 and/or VIX: VIX-constrained martingale optimal transport
- Joint S&P 500/VIX arbitrages
- Exact joint calibration of S&P 500 and VIX smiles: VIX-constrained martingale Schrödinger problems/bridges
- Inversion of convex ordering: a remarkable empirical feature of the VIX market
- Inversion of convex ordering: Local volatility does not maximize the price of VIX futures
- Learning the VIX from the S&P 500 path: A Machine Learning perspective on Path-Dependent Volatility
Challenging Economic Capital
by Manola Santilli, Market Risk Manager, Intesa Sanpaolo and
Marco Scaringi, Quantitative Analyst, Risk Management, Intesa Sanpaolo
Contents:
- Economic Capital based on a risk measure at an extreme percentile and over a long time horizon;
- Testing different approaches: naïve (Gaussian approximation) vs statistical (bootstrap) vs parametric (Cornish-Fisher expansion);
- Searching mean reversion patterns in market risk factors: univariate and multivariate analysis
Authors: Marco Bianchetti, Manola Santilli, Marco Scaringi, Davide Stelitano, Roberto Ugoccioni (Intesa Sanpaolo, Risk Management)
Machine Learning for Valuation and Risk Panel
- Alexander Sokol, Executive Chairman and Head of Quant Research, CompatibL
- Alexandre Antonov, Chief Analyst, Danske Bank
- Ignacio Ruiz, Head of Counterparty Credit Risk Measurement and Analytics, Scotiabank
- Ryan Ferguson, Founder & CEO, Riskfuel
TOPICS
- Can we build trusted ML for quant finance?
- If you do not understand how the model works, should you use it?
- What data can we get from the model to make it trusted?
- How to watch for the signs of model instability with ML, when traditional convergence control techniques do not apply
- Is there enough data to use ML for valuation and risk?
- Which ML techniques require more data, and which require less?
- When market data is scarce, can we generate more?
- Introducing fundamental data into quant models using ML
- Traditional valuation, market risk, and counterparty risk models rarely, if ever, use fundamental data, even when it meets the criteria of being public and readily available.
- Does ML change that – e.g. would adding a Twitter feed to our market risk model make it better?
- And if we do, what would the regulators say?
- Does ML blur the boundary between the model and the numerical method?
- With SDE-based models, this boundary is clear – the model SDE determines the answer, and the numerical method is a way to obtain it
- With ML, the two can no longer be fully separated
- What implications does this have for model risk and model validation?
Machine Learning Stream
Machine Learning in Finance
by Paul Bilokon, Founder, CEO, Thalesians & Senior Quantitative Consultant, BNP Paribas and
Arvid Bertermann, Quantitative Analytics, Barclays
Machine learning is making inroads in finance. In this talk, we present some recent results obtained by the author and his collaborators and consider the possibilities for the future.
Genetic Algorithms and Evolutionary Computation
by Achin Agarwal, Director, Algo One AI and Head of Quant Research, First Global
Genetic algorithms are inspired by nature and evolution. They help in solving hard computational problems in finite time. Based on a straightforward theoretical foundation of natural selection, they provide an easy-to-understand framework for solving different kinds of search and optimization problems. They are especially useful when the search space is too large or complex to handle using traditional search algorithms. The talk will provide a practical demonstration of how this framework can be applied to solve a common problem of portfolio construction, highlighting the key steps involved and examining the nuances of each step. The talk will also provide a bird’s-eye view of other common problems in quantitative finance that can be handled using genetic algorithms.
Principles of Machine Learning Model Construction for the Interest Rates
by Alexander Sokol, Executive Chairman and Head of Quant Research, CompatibL
- Traditional model construction involves selecting a model SDE and then calibrating its parameters
- With machine learning (ML), there is no explicit SDE and the boundary between the model (a set of assumptions about the stochastic process) and the numerical method for solving it (neural network architecture) is not always clear
- In this presentation, we will discuss the principles of generative ML model construction in Q- and P-measure for the interest rates, and will build ML counterparts to several popular interest rate models
- We will show that familiar concepts of stochastic drivers and state variables have clear and intuitive interpretations in the proposed ML model framework
- For the interest rates, where the risk factors form a curve, the ability of ML to optimize in high-dimensional spaces (the same ability behind its tremendous advances in image processing) proves equally valuable, and it eliminates the need to explicitly select model factors or a parsimonious parameterization
- In order to avoid overfitting or underfitting, the model loss function must exclude aleatory uncertainty (the inherent randomness in financial markets the model aims to describe) and include only epistemic uncertainty (the calibration error the model aims to reduce or eliminate). Expecting ML to separate the two sources of uncertainty in the training data on its own will inevitably lead to overfitting or underfitting.
Function approximation in Risk Calculations: When to use Deep Neural Networks and when to use Chebyshev Tensors
by Ignacio Ruiz, Head of Counterparty Credit Risk Measurement and Analytics, Scotiabank
- Speed and convergence of DNNs and CTs
- Speed of evaluation
- Convergence
- Convergence rate in real-life contexts
- The question of dimension
- Taking into account the application
- Partial derivatives and ex-ante error estimation
- Key takeaways
Differential Machine Learning – Dimension reduction done right
by Antoine Savine, Chief Quantitative Analyst, Danske Bank and
Brian Norsk Huge, Senior Specialist Quant, Saxo Bank
Looking for Trouble: Validating ML Pricers
by Dr Maxime Bergeron, Director of R&D, Riskfuel
Abstract:
Riskfuel pioneered the use of machine learning to produce excellent analytic approximations of traditional derivatives pricing models, achieving performance improvements of more than a million-fold without compromising accuracy. In this talk, I will describe some of the techniques Riskfuel uses to validate its models across the entire domain of approximation.
Deep Learning / Monte Carlo Stream
Alternatives to Deep Neural Networks for Function Approximations in Finance
by Alexandre Antonov, Chief Analyst, Danske Bank
Deep Pricing: Theory and Practice
by Youssef Elouerkhaoui, Managing Director, Global Head of Credit and Commodities Quantitative Analysis, Citi
NLP applications and use cases in Capital Markets
by Gary Wong, PhD, Advisory and Business Development, Artemis AG
Applying AAD to American Monte Carlo Option Pricing
by Dmitri Goloubentsev, Head of Automatic Adjoint Differentiation, Matlogica
We present an approach to efficiently implementing adjoint differentiation for the Longstaff-Schwartz lower-bound pricing method, with a focus on memory efficiency and parallelisation (both vectorization and multi-threading). The technique allows adjoints to be propagated through the LS regression, and numerical results demonstrate when this differentiation is required and when it can be omitted. Using pathwise differentiation where possible, the total memory requirement of the AAD version is only about twice that of the original algorithm. An extension of this method can be used to implement adjoint differentiation for xVA.
Forecasting Intraday Stock Returns with Deep Learning Using the Limit Order Book &
Reinforcement Learning for Solving Trading and Portfolio Construction Problems
by Petter Kolm, Director of the Mathematics in Finance Master’s Program and Clinical Professor, Courant Institute of Mathematical Sciences, New York University
Large-scale Least-squares Monte Carlo method
by Kathrin Glau, Lecturer in Financial Mathematics, Queen Mary University of London
Least-squares Monte Carlo methods are frequently used in finance. Their implementation faces a computational bottleneck as the number of basis functions increases. We overcome this burden by exploiting recent developments in weighted sampling and leveraging the randomized extended Kaczmarz algorithm to solve large-scale least-squares problems. Moreover, we benefit from using polynomial bases in polynomial models. The key advantage of this combined methodology is that it achieves high accuracy already for relatively small sample sizes, even for high-dimensional problems. The method is therefore of particular significance for finance. It applies to classical pricing tasks involving several dependent risk factors, such as pricing basket options. Moreover, the method is well suited for risk management purposes, where a severely limited simulation budget poses a particular challenge. Our error and cost analysis, along with numerical experiments, shows the effectiveness of the methodology in both low and high dimensions, and under the assumption of a limited number of available simulations.
Ibor, ESG and Climate Risk Stream
ESG & Climate Risk
by Navin Rauniar, Advisory Partner focusing on LIBOR, ESG, Climate Risk & TCFD, HSBC
- What is ESG and why does it matter to you?
- Key regulations and frameworks financial institutions need to be aware of
- Impacts to the Risk function
Non-Linear Discounting: Modelling Notional-Dependent Discounting (with a Motivation from Climate Models)
by Christian Fries, Head of Model Development, DZ Bank
We develop a model for non-linear discounting, where discount factors depend on the (accumulated) notional.
After giving a short review of some aspects of risk-neutral valuation (e.g. replication and default protection), we derive a risk-free discount factor from defaultable funding providers. In a second step, we introduce a notional-dependent default probability. The modelling cannot be achieved through a bounded default intensity, but intensity-based models appear as a limit case.
Our numerical analysis combines the approach with classical stochastic interest rate models, e.g. discrete term structure models (LIBOR market models).
The model may have relevance in the application of discounting in climate-risk integrated assessment models. If time permits, we give a short example.
Inspired by Libor Reform: Expected Median of a Shifted Brownian Motion
by Vladimir Piterbarg, MD, Head of Quantitative Analytics and Quantitative Development, NatWest Markets
ICE Swap Rate: fallback, approximation, exotic and convexity
by Marc Henrard, Managing Partner, muRisQ Advisory and Visiting Professor, University College London
- Fallback proposal by working groups
- How good/bad is the implicit approximation?
- Non-linear transformation of payoffs and new strikes
- Implicit convexity adjustment embedded in proposals
Sustainable Investment - Exploring the Linkage between Alpha, ESG, and SDGs
by Miquel Noguer Alonso, Co-Founder and Chief Science Officer, Artificial Intelligence Finance Institute – AIFI
Optimal ESG Portfolios
by Anatoly B. Schmidt, Finance and Risk Engineering, NYU Tandon School of Engineering
- Mean-variance portfolio theory is expanded to accommodate investors' preferences for the portfolio ESG value (PESGV). Namely, PESGV is added to the objective function being minimized, so that portfolio weights are simultaneously optimized in terms of returns, risk (volatility), and PESGV.
- It is found that higher PESGVs may yield more concentrated portfolios and lower Sharpe ratios.
- A new ESG portfolio performance measure, the ESG tilted Sharpe ratio, is introduced.
- Two suggestions are offered to address the growing criticism of ESG-based investing.
Machine Learning and Alt Data Stream
Generative Models and Market Data Generation: A Review
by Andrew Green, Managing Director and XVA Lead Quant, Scotiabank
- An overview of the main generative models
- Restricted Boltzmann Machines
- Variational Autoencoders
- Generative Adversarial Networks
- Application to Market Data Generation
Chebyshev Tensors and Machine Learning in the computation of dynamic sensitivities
by Mariano Zeron, Head of Research and Development, MoCaX Intelligence
- The computational cost of pricing in risk calculations
- Mathematical properties of Chebyshev Tensors
- Convergence properties and their implications for pricing function approximation
- How to use Chebyshev Tensors in risk calculations
- The problem of dimension
- Different techniques to address the curse of dimensionality
- Chebyshev Tensors and the computation of dynamic sensitivities
- The composition technique and Chebyshev Tensors in the computation of dynamic sensitivities
- Numerical results for dynamic sensitivities and dynamic initial margin
Efficient Model Risk Management with Synthetic Data
by Jos Gheerardyn, Co-Founder and CEO, Yields.io and
Chamberlain Mbah, Senior Data Scientist, Yields.io
- Why we need generative models
- Overview of various approaches
- GAN architectures for detecting model issues
- Results
Conditional Expectations - Model Free, Data Driven, Fast (with Applications to Pricing / Hedging)
by Jörg Kienitz, Finciraptor, AcadiaSoft, University of Wuppertal and Cape Town
We present a new method for calculating conditional expectations in a model-free and data-driven way that is at the same time semi-analytic and, thus, fast. It is relevant to many fields of quantitative finance: e.g. we consider the calibration of stochastic local volatility models, the pricing of exotic Bermudan options in one and multiple dimensions, and possible applications to xVA. Pricing of vanilla options with rough stochastic volatility models, and of rainbow/basket options with high-dimensional Heston models, serve as illustrating examples.
The method applies statistical learning techniques placed into the quantitative finance setting. The key ingredient, the distribution, is stabilized with a proxy hedge. In our illustrations this leads to time-discrete minimal variance delta hedges. The distribution estimation is numeric but does not use kernel estimation and, thus, faces no subtle bandwidth selection; the further calculations for obtaining the delta and the conditional expectation value are purely analytic. Since the applied methodology is at the same time a generative method, simulation with respect to the distributions is also possible.
Finally, we discuss the challenges for applications in high-dimensional settings and techniques for mitigating them.
Related but different approaches recently applied are Differential Machine Learning, Q-Learners for financial models or dynamically controlled kernel estimation.
Data-Driven Market Simulators & Some Applications of Signature Kernel Methods in Mathematical Finance
by Blanka Horvath, Lecturer, King’s College London and Researcher, The Alan Turing Institute
Abstract:
Techniques that address sequential data have been a central theme in machine learning research in recent years. More recently, such considerations have entered the field of finance-related ML applications in several areas where we face inherently path-dependent problems: from (deep) pricing and hedging (of path-dependent options) to generative modelling of synthetic market data, which we refer to as market generation.
We revisit Deep Hedging from the perspective of the role of the data streams used for training and highlight how this perspective motivates the use of highly accurate generative models for synthetic data generation. From this, we draw conclusions regarding the implications for risk management and model governance of these applications, in contrast to risk-management in classical quantitative finance approaches.
Indeed, financial ML applications and their risk-management heavily rely on a solid means of measuring and efficiently computing (similarity-)metrics between datasets consisting of sample paths of stochastic processes. Stochastic processes are at their core random variables with values on path space. However, while the distance between two (finite dimensional) distributions was historically well understood, the extension of this notion to the level of stochastic processes remained a challenge until recently.
We discuss the effect of different choices of such metrics while revisiting some topics that are central to ML-augmented quantitative finance applications (such as the synthetic generation and the evaluation of similarity of data streams) from a regulatory (and model governance) perspective. Finally, we discuss the effect of considering refined metrics which respect and preserve the information structure (the filtration) of the market and the implications and relevance of such metrics on financial results.
Neural-Networks for Cross-Currency Options under the Correlated SABR Model
by Katia Babbar, Academic Visitor, University of Oxford & Consultant, QuantBright
Synopsis:
Assuming EURUSD and USDJPY each follow a SABR process, under some mild assumptions on correlations, a consistent EURJPY cross-smile can be inferred. Such models require a 4-factor Monte Carlo simulation and are too slow for calibration. Here a neural network is trained on a set of data generated by conditional Monte Carlo in order to speed up calibration of a range of cross-smiles. This makes it practical to explore the dynamics implied by the model and to contrast them against cross-smiles observed in the market.
xVA and Modelling Stream
Pricing Basket Credit Derivatives when the Copula is Nearly Co-monotone
by Andrey Chirikhin, Head of Structured Credit QA, Barclays Investment Bank
- A (Gaussian) copula approach is the market standard method to price and quote basket tranches and other basket credit derivatives
- At the times of market distress, implied (base) correlation may approach 100%
- Traditional semi-analytical pricing methods become unstable in this case
- A perfectly analytical solution exists for the purely co-monotone copula for certain basket credit payoffs
- We utilize such solution to propose an interpolation-based method of pricing basket credit derivatives in the nearly co-monotone case
Bayesian Backtesting
by Matthias Arnsdorf, Global head of Counterparty Credit Risk Quantitative Research, J.P. Morgan
- We explore an alternative counterparty risk backtesting approach using Bayesian statistics.
- It is well known that classical null-hypothesis testing, which underlies typical backtesting methodologies, suffers from conceptual and practical issues.
- Counterparty backtesting in particular suffers from low power and lack of interpretability.
- In this talk we will outline a practical alternative that can provide more intuitive and more meaningful results.
Model enhancements to increase CVA proxy hedging efficiencies – before and after FRTB
by Shengyao Zhu, Senior Quantitative Analyst, XVA Trading Desk, Nordea
The presentation details the xVA desk's learnings from the COVID-19 pandemic, from a modelling perspective.
Studying the limitations of a popular model used to generate proxy credit spreads for counterparties with no liquid CDS (often referred to as the cross-section method) over February-April 2020 resulted in some tweaks that gave clearer, more intuitive deltas and therefore better, more efficient hedging.
In the presentation I will discuss the analysis behind these model tweaks, as well as some numerical testing results using data covering the aforementioned period. I will also discuss how proxy hedging could work under the future FRTB CVA regulation, from a desk perspective.
Darwinian Model Risk and Reverse Stress Testing
by Stéphane Crépey, Professor of Mathematics at the Université de Paris, Laboratoire de Probabilités, Statistique et Modélisation (LPSM)
We consider the model risk born of adverse selection, within the available models, of those leading to high purchase prices (as necessary for competitiveness in the market), even if this means alpha leakage, but with corresponding losses that are more than offset in the short and medium term by gains on the hedging side of the position. At least, this happens until a financial crisis reveals the erroneous nature of the model used, forcing the bank to liquidate its position and its hedge at the cost of heavy losses. This "Darwinian" model risk is directional (related to a long-term moment of order one), and likely to stay unnoticed by traditional risk systems, which are focused on shorter-term moments of order two and beyond. One possible approach to detecting it consists of long-term, large-scale simulations, revealing the consequences of using various models in extreme scenarios. The erroneous models are then discarded, while the admissible models can be combined within a Bayesian robust approach.
Based on joint works with Claudio Albanese (Global Valuation Ltd) and Stefano Iabichino (JP Morgan).
Regulatory Capital for Counterparty Risk or CVA Risk
by Michael Pykhtin, Manager, Quantitative Risk, U.S. Federal Reserve Board
Optionality as a Binary Operation
by Peter Carr, Professor and Dept. Chair of FRE Tandon, New York University
Please note some presenter slides and video lecture recordings may be restricted due to company compliance.