The 17th Quantitative Finance Conference, 17th–19th November 2021
Revisit The 17th Quantitative Finance Conference, originally presented 17th–19th November 2021
WORKSHOPS:
Machine Learning Models for the Interest Rates
by Alexander Sokol, Executive Chairman and Head of Quant Research, CompatibL
Agenda
- Principles of interest rate model construction
  - Defining the model in continuous vs. discrete time
  - Term rate vs. forward rate models
  - Stochastic drivers
  - State variables
  - Arbitrage-free calibration in Q-measure
  - Historical calibration in P-measure
  - Calibrated vs. non-calibrated model parameters
- Introduction to variational autoencoders (VAE)
  - The roles of Encoder and Decoder
  - Introducing uncertainty in reconstruction
  - Loss function and optimization loop
  - Reconstruction with VAE
  - Generation with VAE
  - Handwritten digits VAE example
  - Yield curve VAE example
- Machine learning models for the interest rates
  - Changing the architecture from reconstruction to simulation
  - Replacing the SDE with a neural-network-based Decoder
  - Replacing calibration with a neural-network-based Encoder
  - Loss function and optimization loop
  - Arbitrage-free calibration in Q-measure
  - Historical calibration in P-measure
  - One-factor interest rate model example
  - Multi-factor interest rate model example
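As a concrete companion to the loss-function items above, here is a minimal NumPy sketch of the VAE objective (an illustration only, not the presenter's code): the loss combines a reconstruction term with the closed-form KL divergence between the encoder's Gaussian and the standard-normal prior.

```python
import numpy as np

def vae_loss(x, x_recon, mu, log_var):
    """ELBO-style VAE loss: reconstruction error plus KL divergence
    between the encoder's Gaussian N(mu, sigma^2) and the prior N(0, 1)."""
    recon = np.mean((x - x_recon) ** 2)  # reconstruction (MSE) term
    # Closed-form KL(N(mu, sigma^2) || N(0, 1)), averaged over the batch
    kl = 0.5 * np.mean(np.sum(mu ** 2 + np.exp(log_var) - 1.0 - log_var, axis=1))
    return recon + kl

# A perfect reconstruction with a prior-matching latent gives zero loss
x = np.random.default_rng(0).normal(size=(4, 8))
mu = np.zeros((4, 2))
log_var = np.zeros((4, 2))
print(vae_loss(x, x, mu, log_var))  # -> 0.0
```

In a trained model the optimization loop would minimize this quantity over the encoder and decoder weights; here only the loss itself is shown.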
ESG & Climate Risk in Quantitative Finance
by Navin Rauniar, Advisory Partner focusing on LIBOR, ESG, Climate Risk & TCFD, HSBC
Introduction to ESG
- What is ‘E’, ‘S’ and ‘G’?
- Explaining Climate Risk, Sustainability, GHG and Net Zero
Global Regulatory Requirements for ESG Frameworks
- Latest update of regulatory requirements, including Climate, Sustainability, Carbon & Net Zero
- Integrating into ESG Regulatory Frameworks
Overview of ESG and Climate products
- The Key Characteristics of ESG and Climate Products in the Current Market
- Matching the Client’s ESG Returns and the solutions required for hedging, structuring, etc.
ESG Product Design Framework: Aligning the Desired ESG Products with Market Strategies
- Key considerations for ESG product design
- Challenges and opportunities, especially with ESG metrics
- Taxonomies for Investment Products
Managing ESG and Climate data – sourcing the right data sets
- Identifying the data sources, both historic and forward-looking
- Addressing the typical pain points and associated vendor solutions
Group Discussion, Case Studies & Market Opportunities
Volatility, Options Pricing and Modelling Stream & Panel
When the going gets rough — the tough get going
by Jesper Andreasen (Kwant Daddy), Global Head Of Quantitative Research, Saxo Bank
Modeling Volatility for Options on Crypto Assets
by Artur Sepp, Head of Systematic Solutions and Portfolio Construction, Sygnum Bank
- Specifics of crypto option markets and exchanges
- Challenges in applying traditional dynamic volatility models for arbitrage-free dynamics
- Lognormal volatility model with quadratic drift: robustness and analytics
- Applications to modeling implied volatilities of Bitcoin and Ethereum
Applying Markovian Projection for American Basket Options
by Lech Grzelak, Quantitative Analyst, Rabobank and Assistant Professor, TU Delft and
Juliusz Jabłecki, Divisional Head, Narodowy Bank Polski
Harvesting the FX Volatility Risk Premium with Python
by Saeed Amen, Founder, Cuemacro

Abstract:
In this talk, we shall discuss strategies for extracting the volatility risk premium in FX. We shall look at how factors such as delta hedging impact our P&L, as well as assessing how the P&L can change depending upon which part of the vol surface we are trading. We shall also be talking about the open-source finmarketpy library used to generate the results, and we shall walk through the Python code used.
Recent Advances in VIX Modeling
by Julien Guyon, Senior Quant, Bloomberg L.P.

The class will cover:
- Optimal bounds for VIX futures given S&P 500 smiles
- Robust hedging of derivatives on S&P 500 and/or VIX: VIX-constrained martingale optimal transport
- Joint S&P 500/VIX arbitrages
- Exact joint calibration of S&P 500 and VIX smiles: VIX-constrained martingale Schrödinger problems/bridges
- Inversion of convex ordering: a remarkable empirical feature of the VIX market
- Inversion of convex ordering: Local volatility does not maximize the price of VIX futures
- Learning the VIX from the S&P 500 path: A Machine Learning perspective on Path-Dependent Volatility
Challenging Economic Capital
by Manola Santilli, Market Risk Manager, Intesa Sanpaolo and
Marco Scaringi, Quantitative Analyst, Risk Management, Intesa Sanpaolo

Contents:
- Economic Capital based on a risk measure at an extreme percentile and over a long time horizon;
- Testing different approaches: naïve (Gaussian approximation) vs statistical (bootstrap) vs parametric (Cornish-Fisher expansion);
- Searching for mean-reversion patterns in market risk factors: univariate and multivariate analysis
Authors: Marco Bianchetti, Manola Santilli, Marco Scaringi, Davide Stelitano, Roberto Ugoccioni (Intesa Sanpaolo, Risk Management)
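The parametric approach listed above has a compact closed form; a minimal sketch of the Cornish-Fisher quantile adjustment (illustrative only, not the authors' implementation):

```python
from statistics import NormalDist

def cornish_fisher_quantile(p, skew, ex_kurt):
    """Cornish-Fisher expansion: adjust the Gaussian quantile z_p for the
    skewness and excess kurtosis of the P&L distribution."""
    z = NormalDist().inv_cdf(p)
    return (z
            + (z**2 - 1) * skew / 6
            + (z**3 - 3 * z) * ex_kurt / 24
            - (2 * z**3 - 5 * z) * skew**2 / 36)

# With zero skew and excess kurtosis this reduces to the Gaussian quantile,
# e.g. an extreme percentile of the kind used for economic capital:
print(cornish_fisher_quantile(0.999, 0.0, 0.0))   # ~3.090
print(cornish_fisher_quantile(0.999, -0.5, 1.0))  # non-Gaussian tail adjustment
```

The bootstrap and naïve Gaussian alternatives mentioned in the bullets would replace this analytic adjustment with resampling and with the unadjusted z, respectively.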
Machine Learning for Valuation and Risk Panel
- Alexander Sokol, Executive Chairman and Head of Quant Research, CompatibL
- Alexandre Antonov, Chief Analyst, Danske Bank
- Ignacio Ruiz, Head of Counterparty Credit Risk Measurement and Analytics, Scotiabank
- Ryan Ferguson, Founder & CEO, Riskfuel
TOPICS
- Can we build trusted ML for quant finance?
- If you do not understand how the model works, should you use it?
- What data can we get from the model to make it trusted?
- How to watch for signs of model instability with ML, when traditional convergence control techniques do not apply
- Is there enough data to use ML for valuation and risk?
  - Which ML techniques require more data, and which require less?
  - When market data is scarce, can we generate more?
- Introducing fundamental data into quant models using ML
  - Traditional valuation, market risk, or counterparty risk models rarely if ever use fundamental data, even if it meets the criteria of being public and readily available.
  - Does ML change that? E.g. would adding a Twitter feed to our market risk model make it better?
  - And if we do, what would the regulators say?
- Does ML blur the boundary between the model and the numerical method?
  - With SDE-based models, this boundary is clear: the model SDE determines the answer, and the numerical method is a way to obtain it
  - With ML, the two can no longer be fully separated
  - What implications does this have for model risk and model validation?
Machine Learning Stream
Machine Learning in Finance
by Paul Bilokon, Founder, CEO, Thalesians & Senior Quantitative Consultant, BNP Paribas and
Arvid Bertermann, Quantitative Analytics, Barclays
Machine learning is making inroads in finance. In this talk, we present some recent results obtained by the authors and their collaborators and consider the possibilities for the future.
Genetic Algorithms and Evolutionary Computation
by Achin Agarwal, Director, Algo One AI and Head of Quant Research, First Global
Genetic algorithms are inspired by nature and evolution. They help in solving hard computational problems in finite time. Based on a straightforward theoretical foundation of natural selection, they provide an easy-to-understand framework for solving different kinds of search and optimization problems. They are especially useful when the search space is too large or complex to handle using traditional search algorithms. The talk will provide a practical demonstration of how this framework can be applied to solve a common problem of portfolio construction, highlighting the key steps involved and examining the nuances of each step. The talk will also provide a bird's-eye view of other common problems in quantitative finance that can be handled using genetic algorithms.
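The portfolio-construction application can be sketched in a few lines of NumPy; the expected returns, covariance matrix, and GA settings below are made up for illustration and are not from the talk:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy inputs: expected returns and covariance for 5 assets (hypothetical)
mu = np.array([0.08, 0.10, 0.12, 0.07, 0.09])
cov = np.diag([0.04, 0.09, 0.16, 0.02, 0.05])

def fitness(w):
    """Sharpe-like score of a long-only, fully invested portfolio."""
    return w @ mu / np.sqrt(w @ cov @ w)

def normalize(pop):
    """Keep weights non-negative and summing to 1."""
    pop = np.abs(pop)
    return pop / pop.sum(axis=1, keepdims=True)

# 1. Initialize a random population of candidate weight vectors
pop = normalize(rng.random((50, 5)))

for generation in range(100):
    scores = np.array([fitness(w) for w in pop])
    # 2. Selection: keep the fitter half as parents
    parents = pop[np.argsort(scores)[-25:]]
    # 3. Crossover: blend two randomly chosen parents per child
    i, j = rng.integers(0, 25, 25), rng.integers(0, 25, 25)
    children = 0.5 * (parents[i] + parents[j])
    # 4. Mutation: small random perturbation, then re-normalize
    children += rng.normal(0, 0.02, children.shape)
    pop = normalize(np.vstack([parents, children]))

best = pop[np.argmax([fitness(w) for w in pop])]
print(best.round(3), fitness(best))
```

The same selection/crossover/mutation skeleton carries over to the other optimization problems mentioned; only the fitness function and the encoding of a candidate change.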
Principles of Machine Learning Model Construction for the Interest Rates
by Alexander Sokol, Executive Chairman and Head of Quant Research, CompatibL
- Traditional model construction involves selecting a model SDE and then calibrating its parameters
- With machine learning (ML), there is no explicit SDE, and the boundary between the model (a set of assumptions about the stochastic process) and the numerical method for solving it (the neural network architecture) is not always clear
- In this presentation, we will discuss the principles of generative ML model construction in Q- and P-measure for the interest rates, and will build ML counterparts to several popular interest rate models
- We will show that the familiar concepts of stochastic drivers and state variables have clear and intuitive interpretations in the proposed ML model framework
- For the interest rates, where risk factors form a curve, the ability of ML to optimize in high-dimensional spaces, which led to its tremendous advances in image processing, proves equally valuable and eliminates the need to select model factors or a parsimonious parameterization explicitly
- In order to avoid overfitting or underfitting, the model loss function must exclude aleatory uncertainty (the inherent randomness in the financial markets the model aims to describe) and include only epistemic uncertainty (the calibration error the model aims to reduce or eliminate). Expecting ML to separate the two sources of uncertainty in training data on its own will inevitably lead to overfitting or underfitting.
Function approximation in Risk Calculations: When to use Deep Neural Networks and when to use Chebyshev Tensors
by Ignacio Ruiz, Head of Counterparty Credit Risk Measurement and Analytics, Scotiabank
- Speed and convergence of DNNs and CTs
- Speed of evaluation
- Convergence
- Convergence rate in real-life contexts
- The question of dimension
- Taking the application into account
- Partial derivatives and ex-ante error estimation
- Key takeaways
Differential Machine Learning – Dimension reduction done right
by Antoine Savine: Chief Quantitative Analyst, Danske Bank and
Brian Norsk Huge, Senior Specialist Quant, Saxo Bank
Looking for Trouble: Validating ML Pricers
by Dr Maxime Bergeron, Director of R&D, Riskfuel
Abstract:
Riskfuel pioneered the use of machine learning to produce excellent analytic approximations of traditional derivatives pricing models, achieving performance improvements of more than a million-fold without compromising accuracy. In this talk, I will describe some of the techniques Riskfuel uses to validate its models across the entire domain of approximation.
Deep Learning / Monte Carlo Stream
Alternatives to Deep Neural Networks for Function Approximations in Finance
by Alexandre Antonov, Chief Analyst, Danske Bank
Deep Pricing: Theory and Practice
by Youssef Elouerkhaoui, Managing Director, Global Head of Credit and Commodities Quantitative Analysis, Citi
NLP applications and use cases in Capital Markets
by Gary Wong, PhD, Advisory and Business Development, Artemis AG
Applying AAD to American Monte Carlo Option Pricing
by Dmitri Goloubentsev, Head of Automatic Adjoint Differentiation, Matlogica
We present an approach to efficiently implementing adjoint differentiation for the Longstaff-Schwartz lower-bound pricing method, with a focus on memory efficiency and parallelisation (both vectorization and multithreading). The technique propagates adjoints through the LS regression; numerical results demonstrate when this differentiation is required and when it can be omitted. Using pathwise differentiation where possible, the total memory requirement of the AAD version is only about twice that of the original algorithm. An extension of this method can be used to implement adjoint differentiation for xVA.
Forecasting Intraday Stock Returns with Deep Learning Using the Limit Order Book &
Reinforcement Learning for Solving Trading and Portfolio Construction Problems
by Petter Kolm, Director of the Mathematics in Finance Master’s Program and Clinical Professor, Courant Institute of Mathematical Sciences, New York University
Large-scale Least-squares Monte Carlo method
by Kathrin Glau, Lecturer in Financial Mathematics, Queen Mary University of London
Least-squares Monte Carlo methods are frequently used in finance. Their implementation faces a computational bottleneck as the number of basis functions increases. We overcome this burden by exploiting recent developments in weighted sampling and leveraging the randomized extended Kaczmarz algorithm to solve large-scale least-squares problems. Moreover, we benefit from using polynomial bases in polynomial models. The key advantage of this combined methodology is that it achieves high accuracy already for relatively small sample sizes, even for high-dimensional problems. The method is therefore of particular significance for finance. It applies to classical pricing tasks involving several dependent risk factors, such as pricing basket options. Moreover, the method is well-suited for risk-management purposes, which face the particular challenge of severe simulation-budget constraints. Our error and cost analysis, along with numerical experiments, shows the effectiveness of the methodology in both low and high dimensions, and under the assumption of a limited number of available simulations.
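The randomized extended Kaczmarz solver at the heart of the method can be sketched as follows (a toy implementation on a small synthetic consistent system, not the speaker's code; the sampling probabilities follow the standard row/column-norm weighting):

```python
import numpy as np

def randomized_extended_kaczmarz(A, b, iters=20_000, seed=0):
    """Randomized extended Kaczmarz for min ||Ax - b||_2.
    z tracks the component of b outside the range of A; x is updated by
    row projections against b - z."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = (A ** 2).sum(axis=1)
    col_norms = (A ** 2).sum(axis=0)
    row_p = row_norms / row_norms.sum()   # rows sampled with prob ~ ||A_i||^2
    col_p = col_norms / col_norms.sum()   # columns sampled with prob ~ ||A_:,j||^2
    x, z = np.zeros(n), b.copy()
    for _ in range(iters):
        j = rng.choice(n, p=col_p)        # column step: project z toward null(A^T)
        z -= (A[:, j] @ z) / col_norms[j] * A[:, j]
        i = rng.choice(m, p=row_p)        # row step: project x onto a hyperplane
        x += (b[i] - z[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Overdetermined least-squares problem with a known solution
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 5))
x_true = rng.normal(size=5)
b = A @ x_true                            # consistent system, for illustration
x = randomized_extended_kaczmarz(A, b)
print(np.allclose(x, x_true, atol=1e-6))  # -> True
```

In the least-squares Monte Carlo setting, the rows of A would be basis-function evaluations along simulated paths, so each iteration touches only one path and one basis function.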
Ibor, ESG and Climate Risk Stream
ESG & Climate Risk
by Navin Rauniar, Advisory Partner focusing on LIBOR, ESG, Climate Risk & TCFD, HSBC
- What is ESG and why does it matter to you?
- Key regulations and frameworks financial institutions need to be aware of
- Impacts to the Risk function
Non-Linear Discounting: Modelling Notional-Dependent Discounting (with a Motivation from Climate Models)
by Christian Fries, Head of Model Development, DZ Bank
We develop a model for non-linear discounting, where discount factors depend on the (accumulated) notional.
After giving a short review of some aspects of risk-neutral valuation (e.g. replication and default protection), we derive a risk-free discount factor from defaultable funding providers. In a second step, we introduce a notional-dependent default probability. The modelling cannot be achieved through a bounded default intensity, but intensity-based models appear as a limit case.
Our numerical analysis combines the approach with classical stochastic interest rate models, e.g. discrete term structure models (LIBOR market models).
The model may have relevance to the application of discounting in climate risk integrated assessment models. If time permits, we give a short example.
Inspired by Libor Reform: Expected Median of a Shifted Brownian Motion
by Vladimir Piterbarg, MD, Head of Quantitative Analytics and Quantitative Development, NatWest Markets
ICE Swap Rate: fallback, approximation, exotic and convexity
by Marc Henrard, Managing Partner, muRisQ Advisory and Visiting Professor, University College London
- Fallback proposal by working groups
- How good/bad is the implicit approximation?
- Non-linear transformation of payoffs and new strikes
- Implicit convexity adjustment embedded in the proposals
Sustainable Investment – Exploring the Linkage between Alpha, ESG, and SDGs
by Miquel Noguer Alonso, Co-Founder and Chief Science Officer, Artificial Intelligence Finance Institute – AIFI
Optimal ESG Portfolios
by Anatoly B. Schmidt, Finance and Risk Engineering, NYU Tandon School of Engineering
- Mean-variance portfolio theory is expanded to accommodate investors' preferences for the portfolio ESG value (PESGV). Namely, PESGV is added to the objective function being minimized, so that portfolio weights are simultaneously optimized in terms of returns, risk (volatility), and PESGV.
- It is found that higher PESGVs may yield more concentrated portfolios and lower Sharpe ratios.
- A new ESG portfolio performance measure, the ESG-tilted Sharpe ratio, is introduced.
- Two suggestions are offered to address growing criticism of ESG-based investing.
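When the only constraint is the budget, the expanded objective described in the first bullet admits a closed form via a Lagrange multiplier; a hypothetical sketch (the returns, covariance, ESG scores, and tilt parameter c below are made up, and the exact objective in the talk may differ):

```python
import numpy as np

def esg_mv_weights(mu, cov, esg, gamma=5.0, c=0.0):
    """Mean-variance weights with an ESG tilt: maximize
    w'mu + c * w'esg - (gamma/2) * w' cov w  subject to sum(w) = 1.
    Solved in closed form with a Lagrange multiplier on the budget."""
    inv = np.linalg.inv(cov)
    ones = np.ones(len(mu))
    tilted = mu + c * esg
    lam = (gamma - ones @ inv @ tilted) / (ones @ inv @ ones)
    return inv @ (tilted + lam * ones) / gamma

mu = np.array([0.06, 0.08, 0.10])
cov = np.diag([0.02, 0.05, 0.09])
esg = np.array([0.9, 0.2, 0.5])            # hypothetical per-asset ESG scores

w0 = esg_mv_weights(mu, cov, esg, c=0.0)   # plain mean-variance
w1 = esg_mv_weights(mu, cov, esg, c=0.05)  # ESG preference tilts the weights
print(w0.round(3), w0 @ esg)
print(w1.round(3), w1 @ esg)               # portfolio ESG value rises with c
```

Increasing c pulls weight toward high-ESG assets at the cost of the unconstrained Sharpe ratio, which is the trade-off the ESG-tilted Sharpe ratio is designed to measure.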
Machine Learning and Alt Data Stream
Generative Models and Market Data Generation: A Review
by Andrew Green, Managing Director and XVA Lead Quant, Scotiabank
- An overview of the main generative models
  - Restricted Boltzmann Machines
  - Variational Autoencoders
  - Generative Adversarial Networks
- Application to Market Data Generation
Chebyshev Tensors and Machine Learning in the computation of dynamic sensitivities
by Mariano Zeron, Head of Research and Development, MoCaX Intelligence
- The computational cost of pricing in risk calculations
- Mathematical properties of Chebyshev Tensors
- Convergence properties and their implications for pricing function approximation
- How to use Chebyshev Tensors in risk calculations
- The problem of dimension
- Different techniques to address the curse of dimensionality
- Chebyshev Tensors and the computation of dynamic sensitivities
- The composition technique and Chebyshev Tensors in the computation of dynamic sensitivities
- Numerical results for dynamic sensitivities and dynamic initial margin
Efficient Model Risk Management with Synthetic Data
by Jos Gheerardyn, Co-Founder and CEO, Yields.io and
Chamberlain Mbah, Senior Data Scientist, Yields.io
- Why we need generative models
- Overview of various approaches
- GAN architectures for detecting model issues
- Results
Conditional Expectations – Model-Free, Data-Driven, Fast (with Applications to Pricing / Hedging)
by Jörg Kienitz, Finciraptor, AcadiaSoft, University of Wuppertal and Cape Town
We present a new method for calculating conditional expectations in a model-free and data-driven way that is at the same time semi-analytic and, thus, fast. It is relevant to many fields of quantitative finance: we consider the calibration of stochastic local volatility models, the pricing of exotic Bermudan options in one and multiple dimensions, and possible applications to xVA. The pricing of vanilla options with rough stochastic volatility models and of rainbow/basket options with high-dimensional Heston models serve as illustrating examples.
The method applies statistical learning techniques placed into the quantitative finance setting. The key ingredient, the distribution, is stabilized with a proxy hedge; in our illustrations this leads to time-discrete minimal-variance delta hedges. The distribution estimation is numeric but does not use kernel estimation and, thus, faces no subtle bandwidth selection; the further calculations for obtaining the delta and the conditional expectation value are purely analytic. Since the applied methodology is at the same time a generative method, simulation with respect to the distributions is also possible.
Finally, we discuss the challenges for applications in high-dimensional settings and techniques for their mitigation.
Related but different approaches applied recently are Differential Machine Learning, Q-learners for financial models, and dynamically controlled kernel estimation.
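For contrast, the most basic statistical-learning estimate of a conditional expectation, a least-squares polynomial projection, can be sketched as follows (a generic illustration on synthetic data, not the semi-analytic method of the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate (X, Y) with known conditional expectation E[Y | X] = X^2
x = rng.uniform(-1, 1, 50_000)
y = x**2 + rng.normal(0, 0.1, x.size)

# Least-squares projection of Y onto polynomials in X estimates E[Y | X = x]
coeffs = np.polynomial.polynomial.polyfit(x, y, deg=4)
est = np.polynomial.polynomial.polyval(np.array([-0.5, 0.0, 0.5]), coeffs)
print(est.round(3))  # close to [0.25, 0.0, 0.25]
```

Regression-based estimators of this kind are purely numeric; the talk's contribution is to make the subsequent delta and conditional-expectation calculations analytic while avoiding kernel bandwidth selection.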
Data-Driven Market Simulators & Some Applications of Signature Kernel Methods in Mathematical Finance
by Blanka Horvath, Lecturer, King’s College London and Researcher, The Alan Turing Institute
Abstract:
Techniques that address sequential data have been a central theme in machine learning research in the past years. More recently, such considerations have entered the field of finance-related ML applications in several areas where we face inherently path-dependent problems: from (deep) pricing and hedging (of path-dependent options) to generative modelling of synthetic market data, which we refer to as market generation.
We revisit Deep Hedging from the perspective of the role of the data streams used for training and highlight how this perspective motivates the use of highly accurate generative models for synthetic data generation. From this, we draw conclusions regarding the implications for risk management and model governance of these applications, in contrast to risk management in classical quantitative finance approaches.
Indeed, financial ML applications and their risk management rely heavily on a solid means of measuring and efficiently computing (similarity) metrics between datasets consisting of sample paths of stochastic processes. Stochastic processes are at their core random variables with values on path space. However, while the distance between two (finite-dimensional) distributions has historically been well understood, the extension of this notion to the level of stochastic processes remained a challenge until recently.
We discuss the effect of different choices of such metrics while revisiting some topics that are central to ML-augmented quantitative finance applications (such as the synthetic generation and the evaluation of similarity of data streams) from a regulatory (and model governance) perspective. Finally, we discuss the effect of considering refined metrics which respect and preserve the information structure (the filtration) of the market, and the implications and relevance of such metrics for financial results.
Neural Networks for Cross-Currency Options under the Correlated SABR Model
by Katia Babbar, Academic Visitor, University of Oxford & Consultant, QuantBright
Synopsis:
Assuming EURUSD and USDJPY each follow a SABR process, under some mild assumptions on correlations, a consistent EURJPY cross-smile can be inferred. Such models require a 4-factor Monte Carlo simulation and are too slow for calibration. Here a neural network is trained on a set of data generated by conditional Monte Carlo in order to speed up the calibration of a range of cross-smiles. This makes it practical to explore the dynamics implied by the model and to contrast them against the cross-smiles observed in the market.
xVA and Modelling Stream
Pricing Basket Credit Derivatives when the Copula is Nearly Comonotone
by Andrey Chirikhin, Head of Structured Credit QA, Barclays Investment Bank
- A (Gaussian) copula approach is the market-standard method to price and quote basket tranches and other basket credit derivatives
- At times of market distress, the implied (base) correlation may approach 100%
- Traditional semi-analytical pricing methods become unstable in this case
- A fully analytical solution exists for the purely comonotone copula for certain basket credit payoffs
- We utilize this solution to propose an interpolation-based method of pricing basket credit derivatives in the nearly comonotone case
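The comonotone limit in the bullets above is easy to visualize with a toy one-factor Gaussian copula simulation (illustrative parameters, not the presenter's model):

```python
import numpy as np
from statistics import NormalDist

def portfolio_losses(rho, n_names=10, p=0.05, n_sims=20_000, seed=0):
    """One-factor Gaussian copula: name i defaults when
    sqrt(rho)*Z + sqrt(1-rho)*eps_i falls below the Gaussian threshold for p."""
    rng = np.random.default_rng(seed)
    threshold = NormalDist().inv_cdf(p)
    z = rng.normal(size=(n_sims, 1))              # common (systemic) factor
    eps = rng.normal(size=(n_sims, n_names))      # idiosyncratic factors
    x = np.sqrt(rho) * z + np.sqrt(1 - rho) * eps
    return (x < threshold).sum(axis=1)            # defaults per scenario

# As rho -> 1 the copula becomes comonotone and losses are all-or-nothing:
print(sorted(set(portfolio_losses(rho=1.0).tolist())))  # -> [0, 10]
print(sorted(set(portfolio_losses(rho=0.0).tolist())))  # independent names: intermediate losses
```

In the comonotone limit the loss distribution collapses to two points, which is why a fully analytical solution exists there and why simulation- or quadrature-based methods become fragile nearby.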
Bayesian Backtesting
by Matthias Arnsdorf, Global Head of Counterparty Credit Risk Quantitative Research, J.P. Morgan

- We explore an alternative counterparty risk backtesting approach using Bayesian statistics.
- It is well known that classical null-hypothesis testing, which underlies typical backtesting methodologies, suffers from conceptual and practical issues.
- Counterparty backtesting in particular suffers from low power and lack of interpretability.
- In this talk we will outline a practical alternative that can provide more intuitive and more meaningful results.
Model enhancements to increase CVA proxy hedging efficiencies, before and after FRTB
by Shengyao Zhu, Senior Quantitative Analyst, XVA Trading Desk, Nordea
The presentation details the xVA desk's learnings over the COVID-19 pandemic from a modelling perspective.
Studying the limitations of a popular model used to generate proxy credit spreads for counterparties with no liquid CDS (often referred to as the cross-section method) over February–April 2020 resulted in some tweaks which gave clearer and more intuitive deltas, and therefore better and more efficient hedging.
In the presentation I will discuss the analysis behind these model tweaks as well as some numerical testing results using data covering the aforementioned period. I will also discuss how proxy hedging could work under the future FRTB-CVA regulation from a desk perspective.
Darwinian Model Risk and Reverse Stress Testing
by Stéphane Crépey, Professor of Mathematics at the Université de Paris, Laboratoire de Probabilités, Statistique et Modélisation (LPSM)
We consider the model risk born of adverse selection, within the available models, of those leading to high purchase prices (as necessary for competitiveness on the market), even if this means alpha leakage, but with corresponding losses that are more than offset in the short and medium term by gains on the hedging side of the position. At least, this happens until a financial crisis reveals the erroneous nature of the model used, forcing the bank to liquidate its position and its hedge at the cost of heavy losses. This “Darwinian” model risk is directional (related to a long-term moment of order one) and likely to go unnoticed by traditional risk systems, which focus on shorter-term moments of order two and beyond. One possible approach to detecting it consists of long-term, large-scale simulations revealing the consequences of using various models in extreme scenarios. The erroneous models are then discarded, while the admissible models can be combined within a Bayesian robust approach.
Based on joint works with Claudio Albanese (Global Valuation Ltd) and Stefano Iabichino (JP Morgan).
Regulatory Capital for Counterparty Risk or CVA Risk
by Michael Pykhtin, Manager, Quantitative Risk, U.S. Federal Reserve Board
Optionality as a Binary Operation
by Peter Carr, Professor and Dept. Chair of FRE Tandon, New York University
Please note some presenter slides and video lecture recordings may be restricted due to company compliance.