Neural Parametric Methods: Models Off the Bias

According to Thijs van den Berg, novel modeling methods in finance built on Neural Parametric Models can overcome the speed issues of training static neural networks and, importantly, avoid errors caused by biases inherent in traditionally derived models.

Thijs van den Berg, a consultant and author on machine learning in quantitative finance, will present a talk on Neural Parametric Models, a novel modeling method in finance, for the CQF Institute on 22nd September. He will present a generic machine learning method for learning and extracting parametric models and calibration algorithms directly from data.

A neural network is essentially a function approximator; it could just as well have been a polynomial with many terms. When you fit a model like that to data, you try to estimate the model parameters. But fitting a neural network is a very time-consuming thing to do.

What Thijs does is split the model into two types of parameters: a set of fixed parameters defines the shape family (functions that oscillate, for example), while a few additional parameters that you can calibrate very quickly specify, say, the frequency or amplitude.
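The split can be illustrated with a toy oscillating family (my own sketch, not code from the talk): the sine shape plays the role of the slow, pre-trained part, while amplitude and frequency are the fast calibration parameters, recovered here with a small frequency grid search plus a closed-form least-squares amplitude.

```python
import numpy as np

# Toy version of the split: the *shape family* (sinusoids) is fixed;
# only amplitude a and frequency w are calibrated to fresh data.
def calibrate(x, y, freq_grid):
    best = (np.inf, None, None)
    for w in freq_grid:
        basis = np.sin(w * x)
        # closed-form least-squares amplitude for this candidate frequency
        a = basis @ y / (basis @ basis)
        err = np.sum((y - a * basis) ** 2)
        if err < best[0]:
            best = (err, a, w)
    return best[1], best[2]  # amplitude, frequency

x = np.linspace(0, 2 * np.pi, 200)
y = 2.0 * np.sin(3.0 * x)                      # "today's" observed curve
a, w = calibrate(x, y, np.linspace(0.5, 5.0, 451))
```

Calibration here is just a cheap search over two parameters; no network weights are retrained, which is the point of the split.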

The Implications

“If you have a lot of data that shows all kinds of frequencies, then you train the model through exposure to all the types of data that you can see. In finance, the applications I’m going to talk about are fitting implied volatility curves and interest rate curves,” says Thijs.

First, Thijs tries to capture the family of possible shapes, observing that in classical quantitative finance people propose models by inspection: they think about the problem, decide the data looks like a certain type of function, and then parameterize it. “I try to do that with neural networks, where they look at lots of data, see the plausible shapes from the past, and then come up with a kind of parametric model where a couple of parameters define the different shapes you see. You try to fit those to data, but without retraining a neural network. So it’s more like what you do with regression or fitting.”

There are two types of parameters in these neural networks. The first are the ones you train by exposing the network to lots of data, which, as with classical neural networks, takes a long time. Once that is done, a calibration algorithm can look at tomorrow’s prices, tomorrow (implied volatility curves, for example) and quickly pick the right shape from the ones the neural network can express.

“If you go to the implied volatility example: in classical quantitative finance people come up with a family of curves that have curve-shape parameters. Take SABR: there are three parameters, and you can tweak those, the shape deforms, and you can make it fit today’s market prices.”
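For reference, the SABR shape parameters Thijs mentions (alpha, rho, nu, with beta conventionally held fixed) deform the smile through the well-known Hagan et al. (2002) lognormal approximation. A minimal sketch of that formula, not code from the talk:

```python
import math

def sabr_vol(F, K, T, alpha, beta, rho, nu):
    """Hagan et al. (2002) lognormal SABR implied-vol approximation."""
    logFK = math.log(F / K)
    mid = (F * K) ** ((1.0 - beta) / 2.0)
    z = (nu / alpha) * mid * logFK
    if abs(z) < 1e-12:                        # at-the-money limit: z/x(z) -> 1
        zx = 1.0
    else:
        x = math.log((math.sqrt(1 - 2 * rho * z + z * z) + z - rho) / (1 - rho))
        zx = z / x
    A = alpha / (mid * (1 + (1 - beta) ** 2 / 24 * logFK ** 2
                          + (1 - beta) ** 4 / 1920 * logFK ** 4))
    B = 1 + ((1 - beta) ** 2 / 24 * alpha ** 2 / mid ** 2
             + rho * beta * nu * alpha / (4 * mid)
             + (2 - 3 * rho ** 2) / 24 * nu ** 2) * T
    return A * zx * B

# Tweaking (alpha, rho, nu) with beta fixed deforms the smile:
smile = [sabr_vol(100.0, K, 1.0, 0.2, 1.0, -0.3, 0.5) for K in (80, 100, 120)]
```

With a negative rho the smile tilts so that lower strikes carry higher implied volatility, which is exactly the kind of shape deformation the three parameters control.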

An Empirical Approach

Thijs’ concern is that the ideas behind models might be pulled out of thin air: “based on what I’ve learned from experience, or because there are some nice mathematical properties. However people come up with these models, that might not produce the best-fitting model. What I try to do is replace the part where people create models from experience with models created by looking at the data directly with machine learning.”

“I have this neural network, now it’s trained, and there are a couple of parameters; if you fill in the right values, you get very nicely fitting curves. So it tries to learn two types of neural network. One is this parametric model with a family of curves, where there are a couple of parameters in each set. Secondly, another neural network will aggregate, for example, today’s implied volatility price data into the parameters of that model. That’s a calibration network, and that part is super fast.”
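Hypothetically, the two networks could be wired together as below. This is purely an architectural sketch of the data flow with random, untrained weights and invented layer sizes, not an implementation of Thijs’ models: a decoder maps (strike, parameters) to a vol, and a calibration encoder maps today’s quoted curve to those parameters in one forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random (untrained) weights for a small fully connected net."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def apply(net, x):
    for i, (W, b) in enumerate(net):
        x = x @ W + b
        if i < len(net) - 1:
            x = np.tanh(x)          # hidden layers only
    return x

n_strikes, n_params = 21, 3                 # invented sizes, for illustration
decoder = mlp([1 + n_params, 32, 32, 1])    # (strike, theta) -> vol
encoder = mlp([n_strikes, 32, n_params])    # quoted curve -> theta (calibration net)

quotes = rng.uniform(0.1, 0.3, n_strikes)   # stand-in for today's smile quotes
theta = apply(encoder, quotes)              # one fast forward pass: calibration
strikes = np.linspace(80, 120, n_strikes).reshape(-1, 1)
inp = np.hstack([strikes, np.tile(theta, (n_strikes, 1))])
curve = apply(decoder, inp).ravel()         # the fitted parametric curve
```

The expensive part (training both networks on historical curves) happens once; calibrating to a fresh smile is just the encoder’s forward pass.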

The benefit, according to Thijs, is that you have models of shapes extracted from data, instead of shapes chosen for their ‘mathematically nice’ properties, which should fit the real world better. The problem of neural networks taking a long time to train is solved by turning a static neural network into one with a couple of parameters that you can set.

“When modeling implied volatilities we could reach perfectly fitting shapes with models with a few parameters. We get really good fits of the market data, but at the same time we also solve underlying problems that some models have, like the requirement to be arbitrage-free. The SABR model can have certain parameter configurations that make the solution kind of wrong: it is a very useful model, but it has some risk in it that the parameters are sometimes calibrated to wrong values, and then you can have option prices with arbitrage between them.”

Safeguards

“In my view, you need to make models that put everything in place in the right way. So when you do these data-driven models, you also need safeguards, like: I don’t want to have any arbitrage in there. Or if you, for example, use these methods to model probability distributions (you have a set of numbers and you ask what kind of distribution they come from), you can use the same framework. But then obviously you don’t want probability distributions with negative probabilities and nonsense like that.”
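One common way to build such a safeguard in, a generic trick and not necessarily the one used in the talk, is to let the network emit unconstrained numbers and pass them through a transform that is valid by construction, for example a softmax for a discrete probability distribution:

```python
import numpy as np

def to_distribution(logits):
    """Map unconstrained network outputs to a valid probability vector:
    non-negative and summing to one by construction (softmax)."""
    z = np.exp(logits - np.max(logits))   # shift for numerical stability
    return z / z.sum()

raw = np.array([1.5, -2.0, 0.3, 4.0])    # arbitrary network outputs
p = to_distribution(raw)
# p is a proper distribution no matter what values the network produced
```

Negative probabilities simply cannot occur, so no calibration result can violate the constraint; the analogous idea for curves is to parameterize them so arbitrage-free conditions hold by construction.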

“You have these building blocks in this framework where you can put in sensible limitations, like: I want things to be arbitrage-free. It’s more about doing things the right way and then maximizing performance. But having these safeguards in place is an essential element of this modeling framework.”

Another risk is overfitting. “You have this data and you might think: OK, this was yesterday’s data and the week before, and these curves are going to have to look exactly like they did in the past and nothing else. That’s probably not a really good generalization of what these curves will look like in the future, and there are also methods in place to guard against that, to make sure that you do generalize. So in my talk, I’m going to cover how to build these models, how to calibrate them, how to make sure they don’t overfit, and how to make sure they don’t give wrong solutions that create problems in the real world.”

Thijs van den Berg will present an online talk entitled Neural Parametric Models: Novel Modeling Methods in Finance for CQF Institute members on 22nd September 2020 at 6pm BST. Membership is free and tickets are complimentary. Register Now
