## Kalman Filtering for the Heston model with Matlab code, Part 1

By GormGeier on March 17th, 2015

I aim to make this a two-part series on the application of Kalman filtering to the Heston model. In this first post I will go over the basics of the Kalman filter, and in the second part I will go into the specifics of applying it to parameter estimation for the model itself. That means that if you are interested in the practical application to the Heston model specifically, you should click here [Update].

Kalman filtering is an extremely effective tool for estimating the parameters of a stochastic process from historical data.

The basic premise of the theory is that there is some kind of uncertainty in our estimates or observations of a process. Assuming we have some unobservable or "latent" process called the state process, which can only be observed through a realization containing noise, the algorithm aims to recursively update our estimate of the state as new information arrives. The standard theory takes a Gaussian, linear state-space model (all notation in bold denotes vectors/matrices):

$$\mathbf{x}_t = \mathbf{F}\mathbf{x}_{t-1} + \mathbf{w}_t, \qquad \mathbf{y}_t = \mathbf{H}\mathbf{x}_t + \mathbf{v}_t$$

where **x** is the latent process we are estimating, **y** is our observation and **w** and **v** are two independent Gaussian processes, with covariance matrices **Q** and **R** respectively. **F** is a transition matrix determining how the state changes over time while **H** is the measurement matrix defining how the state is observed.

Given some prior assumption of the distribution of $\mathbf{x}_{t-1}$, we know that

$$\hat{\mathbf{x}}_{t|t-1} = \mathbf{F}\hat{\mathbf{x}}_{t-1|t-1}, \qquad \mathbf{P}_{t|t-1} = \mathbf{F}\mathbf{P}_{t-1|t-1}\mathbf{F}^\top + \mathbf{Q}$$

which is just a simple expected value and (co-)variance calculation.

Based on Bayes' rule and the conditional expectation of $\mathbf{y}_t$, we can now recursively determine the filtered state, having first an "a priori" estimate of our measurement:

$$\hat{\mathbf{y}}_{t|t-1} = \mathbf{H}\hat{\mathbf{x}}_{t|t-1}$$

Now comes the important part. The algorithm makes use of the so-called Kalman gain. Generally speaking this is an adjustment aimed at minimizing the covariance of our state estimate; more explicitly, it can be viewed as a least-squares weighting between the a priori estimate and our a posteriori, or actual, state estimate. Formally, it is defined by:

$$\mathbf{K}_t = \mathbf{P}_{t|t-1}\mathbf{H}^\top\left(\mathbf{H}\mathbf{P}_{t|t-1}\mathbf{H}^\top + \mathbf{R}\right)^{-1}$$

The state and covariance can now easily be updated using:

$$\hat{\mathbf{x}}_{t|t} = \hat{\mathbf{x}}_{t|t-1} + \mathbf{K}_t\left(\mathbf{y}_t - \hat{\mathbf{y}}_{t|t-1}\right), \qquad \mathbf{P}_{t|t} = \left(\mathbf{I} - \mathbf{K}_t\mathbf{H}\right)\mathbf{P}_{t|t-1}$$

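Although the code in this series will be in Matlab, the predict/update recursions above are easy to sketch in a few lines of numpy. The matrices below are hypothetical toy values (a 1-d random walk observed with noise), not anything Heston-specific:

```python
import numpy as np

def kalman_step(x, P, y, F, H, Q, R):
    """One predict/update cycle of the linear Kalman filter."""
    # Time update: a priori state estimate and covariance
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Kalman gain, with S the innovation covariance
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    # Measurement update: a posteriori estimate and covariance
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy example: scalar random walk, noisy direct observation
F = np.array([[1.0]]); H = np.array([[1.0]])
Q = np.array([[0.01]]); R = np.array([[0.1]])
x, P = np.array([0.0]), np.array([[1.0]])
for y in [0.9, 1.1, 1.0]:
    x, P = kalman_step(x, P, np.array([y]), F, H, Q, R)
```

Note how the covariance `P` shrinks with each observation: the filter becomes more confident in its state estimate as data arrives.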
A simple extension of this is the Extended Kalman Filter, which takes a non-linear system and linearises it through the use of Jacobians. However, the Extended Filter performs relatively poorly when faced with empirical data, as I also demonstrated in my dissertation. Instead, a better but still simple approach is the Unscented Kalman Filter.

Assume now that we have some sort of L-dimensional non-linear state-space model:

$$\mathbf{x}_t = f\left(\mathbf{x}_{t-1}, \mathbf{w}_t\right), \qquad \mathbf{y}_t = h\left(\mathbf{x}_t, \mathbf{v}_t\right)$$

First we define an augmented state vector:

$$\mathbf{x}_t^a = \begin{bmatrix} \mathbf{x}_t^\top & \mathbf{w}_t^\top & \mathbf{v}_t^\top \end{bmatrix}^\top$$

which, due to the nature of **w** and **v** as Gaussian white noise, means:

$$\hat{\mathbf{x}}_t^a = \begin{bmatrix} \hat{\mathbf{x}}_t^\top & \mathbf{0} & \mathbf{0} \end{bmatrix}^\top$$

And equally the augmented covariance matrix is given by:

$$\mathbf{P}_t^a = \begin{bmatrix} \mathbf{P}_t & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{Q} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{R} \end{bmatrix}$$

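As a small illustration (again numpy rather than Matlab, with hypothetical toy dimensions and values), the augmented mean and block-diagonal covariance can be assembled like this:

```python
import numpy as np

# Hypothetical toy setup: 2-d state, 1-d process noise, 1-d measurement noise
x_hat = np.array([1.0, 0.5])  # current state estimate
P = np.diag([0.5, 0.2])       # state covariance
Q = np.array([[0.01]])        # process noise covariance
R = np.array([[0.1]])         # measurement noise covariance

# Augmented mean: the state estimate stacked with the zero-mean noise terms
x_a = np.concatenate([x_hat, np.zeros(1), np.zeros(1)])

# Augmented covariance: block-diagonal arrangement of P, Q and R
dims = np.cumsum([0, P.shape[0], Q.shape[0], R.shape[0]])
P_a = np.zeros((dims[-1], dims[-1]))
for block, (lo, hi) in zip((P, Q, R), zip(dims[:-1], dims[1:])):
    P_a[lo:hi, lo:hi] = block
```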
We now apply the unscented transform in order to find 2L+1 deterministic sample points, called sigma points:

$$\boldsymbol{\chi}_0 = \hat{\mathbf{x}}^a, \qquad \boldsymbol{\chi}_i = \hat{\mathbf{x}}^a + \left(\sqrt{(L+\lambda)\,\mathbf{P}^a}\right)_i, \quad i = 1, \dots, L, \qquad \boldsymbol{\chi}_i = \hat{\mathbf{x}}^a - \left(\sqrt{(L+\lambda)\,\mathbf{P}^a}\right)_{i-L}, \quad i = L+1, \dots, 2L$$

where $\lambda$ is a scaling parameter:

$$\lambda = \alpha^2\left(L + \kappa\right) - L$$

$\alpha$ determines the dispersion of the sigma points and is usually chosen small, e.g. $10^{-4} \le \alpha \le 1$, to ensure a tight spread, while $\kappa$ is a secondary scaling parameter, where $\kappa = 0$ is normal.

These sigma points can now be propagated through the non-linear system function, and estimates for the mean and covariance can be found from:

$$\boldsymbol{\chi}_{i,t|t-1}^x = f\left(\boldsymbol{\chi}_{i,t-1}^x, \boldsymbol{\chi}_{i,t-1}^w\right), \qquad \hat{\mathbf{x}}_{t|t-1} = \sum_{i=0}^{2L} W_i^{(m)} \boldsymbol{\chi}_{i,t|t-1}^x, \qquad \mathbf{P}_{t|t-1} = \sum_{i=0}^{2L} W_i^{(c)} \left(\boldsymbol{\chi}_{i,t|t-1}^x - \hat{\mathbf{x}}_{t|t-1}\right)\left(\boldsymbol{\chi}_{i,t|t-1}^x - \hat{\mathbf{x}}_{t|t-1}\right)^\top$$

where

$$W_0^{(m)} = \frac{\lambda}{L+\lambda}, \qquad W_0^{(c)} = \frac{\lambda}{L+\lambda} + \left(1 - \alpha^2 + \beta\right), \qquad W_i^{(m)} = W_i^{(c)} = \frac{1}{2(L+\lambda)}, \quad i = 1, \dots, 2L$$

with $\beta$ incorporating prior knowledge of the distribution ($\beta = 2$ is optimal for a Gaussian).

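The sigma-point construction and the weighted prediction can be sketched in numpy as follows. The quadratic transition `f` below is a made-up toy function, and for brevity the state is used directly with a deterministic transition rather than in the augmented form above:

```python
import numpy as np

def sigma_points(x, P, alpha=1e-3, kappa=0.0, beta=2.0):
    """Scaled unscented-transform sigma points and weights for mean x, covariance P."""
    L = len(x)
    lam = alpha**2 * (L + kappa) - L
    S = np.linalg.cholesky((L + lam) * P)   # matrix square root of (L+lambda)P
    pts = np.vstack([x, x + S.T, x - S.T])  # 2L+1 points, one per row
    Wm = np.full(2 * L + 1, 1.0 / (2 * (L + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (L + lam)
    Wc[0] = lam / (L + lam) + (1 - alpha**2 + beta)
    return pts, Wm, Wc

# Hypothetical toy transition f(x) = x + 0.1 * x^2, applied element-wise
f = lambda x: x + 0.1 * x**2
x = np.array([1.0, 2.0])
P = np.diag([0.2, 0.3])
pts, Wm, Wc = sigma_points(x, P)
prop = f(pts)                               # propagate each sigma point
x_pred = Wm @ prop                          # weighted predicted mean
diff = prop - x_pred
P_pred = (Wc[:, None] * diff).T @ diff      # weighted predicted covariance
```

For this quadratic toy transition the predicted mean picks up the second-order correction $f(\hat{\mathbf{x}}) + 0.1\,\mathrm{diag}(\mathbf{P})$, which a first-order linearisation (as in the Extended Filter) would miss entirely.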
We can now propagate the observation through its non-linear measurement function and define:

$$\boldsymbol{\mathcal{Y}}_{i,t|t-1} = h\left(\boldsymbol{\chi}_{i,t|t-1}^x, \boldsymbol{\chi}_{i,t-1}^v\right), \qquad \hat{\mathbf{y}}_{t|t-1} = \sum_{i=0}^{2L} W_i^{(m)} \boldsymbol{\mathcal{Y}}_{i,t|t-1}$$

where $\boldsymbol{\mathcal{Y}}$ denotes the observation components.

The Kalman gain is then found through:

$$\mathbf{K}_t = \mathbf{P}_{\mathbf{x}_t \mathbf{y}_t} \mathbf{P}_{\mathbf{y}_t \mathbf{y}_t}^{-1}$$

where $\mathbf{P}_{\mathbf{x}_t \mathbf{y}_t}$ and $\mathbf{P}_{\mathbf{y}_t \mathbf{y}_t}$ are found by standard covariance calculations using the weights $W_i^{(c)}$ defined above.

Then lastly the measurement update becomes:

$$\hat{\mathbf{x}}_{t|t} = \hat{\mathbf{x}}_{t|t-1} + \mathbf{K}_t\left(\mathbf{y}_t - \hat{\mathbf{y}}_{t|t-1}\right), \qquad \mathbf{P}_{t|t} = \mathbf{P}_{t|t-1} - \mathbf{K}_t \mathbf{P}_{\mathbf{y}_t \mathbf{y}_t} \mathbf{K}_t^\top$$

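The measurement side can be put together in a toy numpy example. Everything here is hypothetical: a scalar state, made-up propagated sigma points with simple equal weights (rather than the scaled UKF weights above, purely to keep the arithmetic readable), and a made-up measurement function $h(x) = x^2$:

```python
import numpy as np

# Hypothetical quantities from the prediction step (scalar state for brevity)
chi = np.array([1.00, 1.30, 0.70, 1.15, 0.85])  # propagated state sigma points
Wm = np.array([0.2, 0.2, 0.2, 0.2, 0.2])        # toy mean weights
Wc = Wm.copy()                                  # toy covariance weights
x_pred = Wm @ chi                               # predicted state mean
P_pred = Wc @ (chi - x_pred) ** 2               # predicted state variance

# Propagate through a hypothetical measurement function h(x) = x^2
h = lambda x: x ** 2
Y = h(chi)
y_pred = Wm @ Y                                 # predicted measurement

# Weighted cross- and innovation covariances, then the Kalman gain
P_xy = Wc @ ((chi - x_pred) * (Y - y_pred))
P_yy = Wc @ (Y - y_pred) ** 2 + 0.1             # + measurement noise variance R
K = P_xy / P_yy

# Measurement update for an observed y
y_obs = 1.4
x_new = x_pred + K * (y_obs - y_pred)
P_new = P_pred - K * P_yy * K
```

The update pulls the state toward the observation in proportion to the gain, and the posterior variance `P_new` is strictly smaller than the prior `P_pred`.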
In summary, the Unscented Kalman Filter is a quite simple extension of the standard filter, where one makes use of a deterministic sampling of points around the estimates in order to better capture the dynamics of the underlying model.

If you think that you need some additional information on this, I would highly recommend Haykin (2001) and Javaheri (2006), which both present the material in quite a straightforward fashion.

Relevant sources:

Haykin, S. S. (2001). Kalman filtering and neural networks. *Wiley*.

Javaheri, A. (2006). Inside Volatility Arbitrage: The Secrets of Skewness. *Wiley*.

Javaheri, A., Lautier, D., & Galli, A. (2003). Filtering in Finance. *Wilmott Magazine*, 2003(3), 67-83.

Petris, G., Petrone, S., & Campagnoli, P. (2009). Dynamic linear models with R. *Springer*.