
High-Frequency Statistical Arbitrage

Computational statistical arbitrage systems are now de rigueur, especially in high-frequency, liquid markets such as FX.
Statistical arbitrage can be viewed as an extension of riskless arbitrage,
and is quantified more precisely as an attempt to exploit small and consistent regularities in
asset price dynamics through the use of a suitable statistical modelling framework.

Statistical arbitrage has been defined formally (e.g. by Jarrow) as a zero initial cost, self-financing strategy with cumulative discounted value \(v(t)\) such that:

  • \( v(0) = 0 \),
  • \( \lim_{t\to\infty} E^P[v(t)] > 0 \),
  • \( \lim_{t\to\infty} P(v(t) < 0) = 0 \),
  • \( \lim_{t\to\infty} \frac{\mathrm{Var}^P[v(t)]}{t}=0 \mbox{ if } P(v(t)<0) > 0 \; \forall\, t < \infty \)

These conditions can be described as follows: (1) the position has a zero initial cost (it is a self-financing trading strategy), (2) the expected discounted profit is positive in the limit, (3) the probability of a loss converges to zero, and (4) a time-averaged variance measure converges to zero if the probability of a loss does not become zero in finite time. The fourth condition separates a standard arbitrage from a statistical arbitrage opportunity.
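These limiting conditions can also be examined empirically by simulating (or bootstrapping) the cumulative discounted value of a candidate strategy over many paths. The Python sketch below is purely illustrative: the toy path-generating strategy, the sample sizes and the function name are assumptions, not part of the formal definition.

```python
import numpy as np

def stat_arb_diagnostics(v):
    """Empirical proxies for the statistical arbitrage conditions.

    v : array of shape (n_paths, n_steps + 1); v[i, t] is the cumulative
        discounted value of the strategy on path i after t periods,
        with v[:, 0] = 0 (zero initial cost).
    """
    t = np.arange(1, v.shape[1])
    mean_v = v[:, 1:].mean(axis=0)           # estimate of E^P[v(t)]
    prob_loss = (v[:, 1:] < 0).mean(axis=0)  # estimate of P(v(t) < 0)
    var_over_t = v[:, 1:].var(axis=0) / t    # estimate of Var^P[v(t)] / t
    return mean_v, prob_loss, var_over_t

# Toy strategy: constant positive drift with shrinking per-period noise, so the
# expected value grows, the loss probability decays, and Var[v(t)]/t -> 0.
rng = np.random.default_rng(0)
n_paths, n_steps = 10_000, 2_000
sigma = 0.5 / np.sqrt(np.arange(1, n_steps + 1))
increments = 0.01 + sigma * rng.standard_normal((n_paths, n_steps))
v = np.concatenate([np.zeros((n_paths, 1)), increments.cumsum(axis=1)], axis=1)

mean_v, prob_loss, var_over_t = stat_arb_diagnostics(v)
print(mean_v[-1], prob_loss[-1], var_over_t[-1])
```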

We can represent a statistical arbitrage condition as $$ \left| \phi(X_t - SA(X_t))\right| < \mbox{TransactionCost} $$ where \(\phi(\cdot)\) is the payoff (profit) function, \(X\) is an arbitrary asset (or weighted basket of assets) and \(SA(X)\) is a synthetic asset constructed to replicate the payoff of \(X\).
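As a rough illustration of this condition: an opportunity exists only when the absolute deviation of an asset from its synthetic replica exceeds the cost of trading it. In the minimal sketch below, the basket weights, prices and cost figure are hypothetical.

```python
import numpy as np

def arbitrage_signal(asset_price, basket_prices, weights, transaction_cost):
    """Deviation between an asset and a synthetic replicating basket,
    plus a flag for whether it is large enough to trade after costs."""
    synthetic = float(np.dot(weights, basket_prices))  # SA(X_t)
    deviation = asset_price - synthetic                # X_t - SA(X_t)
    return deviation, abs(deviation) > transaction_cost

# Hypothetical example: an ETF at 101.2 versus its unit basket at 100.6.
deviation, tradeable = arbitrage_signal(
    asset_price=101.2,
    basket_prices=np.array([50.0, 30.5, 20.1]),
    weights=np.array([1.0, 1.0, 1.0]),
    transaction_cost=0.15,
)
print(deviation, tradeable)  # ~0.6, True: sell the asset, buy the basket
```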

Some popular statistical arbitrage techniques are described below.

Index Arbitrage

Index arbitrage is a strategy undertaken when the traded value of an index (for example, the index futures price) moves sufficiently far away from the weighted value of its components (see Hull for details). For an equity index, for example, the no-arbitrage condition can be expressed as:

\[ \left| F_t - \sum_{i} w_i S_t^i e^{(r-q_i)(T-t)}\right| < \mbox{Cost}\]

where \(q_i\) is the dividend yield of stock \(i\), \(w_i\) is its weight in the index, and \(F_t\) is the index futures price at time \(t\). The deviation between the futures price and the weighted index basket is called the basis. Index arbitrage was one of the earliest applications of program trading. An alternative form of index arbitrage was a system in which sufficiently large deviations between the forecast variance of the relationship between index pairs (estimated by regression) and the implied volatilities on those indices (estimated from index option prices) were classed as arbitrage opportunities. Many variations on this theme operate in the VIX market today.

Pairs Trading

Statistical pairs trading is based on the notion of relative pricing: securities with similar characteristics should be priced roughly equally. Typically, a long-short position in two assets is created such that the portfolio is uncorrelated with market returns (i.e. it has a negligible beta). The basis in this case is the spread between the two assets. Depending on whether the trader expects the spread to contract or expand, the trade is called shorting the spread or buying the spread. Such trades are also called convergence trades.

A popular and powerful statistical technique used in pairs trading is cointegration, which is the identification of a linear combination of multiple non-stationary data series to form a stationary (and hence predictable) series.
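A minimal sketch of this idea, assuming statsmodels is available, is given below: estimate a hedge ratio by OLS, run the Engle-Granger cointegration test, and trade the z-score of the resulting spread. The synthetic price series and the entry threshold are illustrative choices only.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(1)

# Two synthetic, cointegrated price series: y tracks x plus stationary noise.
x = np.cumsum(rng.normal(0, 1, 1000)) + 100
y = 0.8 * x + rng.normal(0, 1, 1000) + 20

# Engle-Granger test: a small p-value suggests the pair is cointegrated.
t_stat, p_value, _ = coint(y, x)
print(f"cointegration p-value: {p_value:.4f}")

# Hedge ratio from OLS, then the (ideally stationary) spread and its z-score.
beta = sm.OLS(y, sm.add_constant(x)).fit().params[1]
spread = y - beta * x
zscore = (spread - spread.mean()) / spread.std()

# Illustrative rule: short the spread when it is rich, buy it when it is cheap.
entry = 2.0
signal = np.where(zscore > entry, -1, np.where(zscore < -entry, 1, 0))
print(signal[-5:])
```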

Trading Algorithms

In recent years, computer algorithms have become the decision-making machines behind many trading strategies. The ability to handle large numbers of inputs, utilise long variable histories, and quickly evaluate quantitative conditions to produce a trading signal has made algorithmic trading systems the natural evolutionary step in high-frequency financial applications. Originally, the main focus of algorithmic trading systems was on execution strategies designed to have a neutral market impact (e.g. Volume Weighted Average Price and Time Weighted Average Price trading); however, their scope has widened considerably, and much of the work previously performed by manual systematic traders can now be done by “black box” algorithms.

Trading algorithms are no different from human traders in that they need an unambiguous measure of performance, i.e. risk versus return. The ubiquitous Sharpe ratio (\(\frac{\mu_r - \mu_f}{\sigma}\), the mean return in excess of the risk-free rate divided by the standard deviation of returns) is a popular measure, although others are also used. Another commonly used measure of trading performance is the total return, defined as

\[ R_T \equiv \sum_{j=1}^{n}r_j \]

over a number of transactions \(n\), with return per transaction \(r_j\). The annualized total return is defined as \(R_A = R_T \frac{d_A}{d_T}\), where \(d_A\) is the number of trading days in a year, and \(d_T\) is the number of days in the trading period covered by \(R_T\). The maximum drawdown over a time period is defined as \(D_T \equiv \max(R_{t_a}-R_{t_b} \mid t_0 \leq t_a \leq t_b \leq t_E)\), where \(T = t_E - t_0\), and \(R_{t_a}\) and \(R_{t_b}\) are the total returns over the periods from \(t_0\) to \(t_a\) and \(t_b\) respectively. A resulting indicator is the Stirling ratio (commonly written as the Sterling ratio), which is defined as

\[ SR = \frac{R_T}{D_T} \]
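These definitions translate directly into code. The sketch below computes the total return, annualized return, maximum drawdown and the resulting ratio from a sequence of per-transaction returns; the sample returns and the 252-day trading year are assumptions made for the example.

```python
import numpy as np

def performance_metrics(per_trade_returns, trading_days, days_per_year=252):
    """Total return R_T, annualized return R_A, max drawdown D_T and R_T / D_T."""
    r = np.asarray(per_trade_returns, dtype=float)
    total_return = r.sum()                                     # R_T
    annualized = total_return * days_per_year / trading_days   # R_A
    cumulative = np.concatenate([[0.0], r.cumsum()])           # running total return
    running_peak = np.maximum.accumulate(cumulative)
    max_drawdown = (running_peak - cumulative).max()           # D_T
    stirling = total_return / max_drawdown if max_drawdown > 0 else np.inf
    return total_return, annualized, max_drawdown, stirling

# Hypothetical per-trade returns over a 20-day trading period.
trades = [0.004, -0.002, 0.006, -0.009, 0.005, 0.003, -0.001, 0.007]
print(performance_metrics(trades, trading_days=20))
```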

High-frequency tick data possesses certain characteristics which are much less apparent in aggregated (e.g. daily) data, including:

  • Non-normal probability distributions: high-frequency returns may exhibit large kurtosis (heavy tails) and asymmetric skewness;
  • Diurnal seasonality: an intraday seasonal pattern influenced by the restrictions on trading times in markets. For instance, trading activity may be busiest at the start and end of the trading day. This may not apply so much to foreign exchange, as the FX market is a decentralized 24-hour operation; however, we may still see patterns in tick interarrival times around business end-of-day times in particular locations;
  • Data quality: real-time high-frequency data may contain errors, missing or duplicated tick values, or other anomalies. While historical data feeds will normally contain corrections to these anomalies, real-time data collection processes must be prepared to make such adjustments to the incoming feed themselves (a minimal cleaning sketch is given below).
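As a small illustration of this kind of defensive handling, the pandas sketch below (with hypothetical column names, thresholds and data) removes duplicate ticks, forward-fills short gaps, and flags prices that jump implausibly far from the previous tick.

```python
import pandas as pd

def clean_ticks(ticks, max_rel_jump=0.05, gap_limit=3):
    """Minimal defensive filter for a raw tick stream.

    ticks : DataFrame indexed by timestamp with a 'price' column.
    """
    out = ticks.sort_index()
    out = out.loc[~out.index.duplicated(keep="first")].copy()  # drop duplicated ticks
    out["price"] = out["price"].ffill(limit=gap_limit)         # fill short gaps only
    rel_jump = out["price"].pct_change().abs()
    out["suspect"] = rel_jump > max_rel_jump                   # flag implausible jumps
    return out

# Hypothetical tick stream with a duplicated timestamp, a gap and a bad print.
raw = pd.DataFrame(
    {"price": [1.1012, 1.1013, 1.1013, None, 1.2500, 1.1015]},
    index=pd.to_datetime([
        "2024-01-02 09:00:00.100", "2024-01-02 09:00:00.250",
        "2024-01-02 09:00:00.250", "2024-01-02 09:00:00.400",
        "2024-01-02 09:00:00.550", "2024-01-02 09:00:00.700",
    ]),
)
print(clean_ticks(raw))
```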

The Bias/Variance Tradeoff

Probably one of the nicest explanations of the bias/variance tradeoff is the one I found in the book Introduction to Information Retrieval (full book available online). The tradeoff can be explained mathematically, and also more intuitively. The mathematical explanation is as follows:

If we have a learning method trained on a given set of input data, producing a prediction (call it \(x\)), and a “real” underlying value that we are trying to approximate (call it \(\alpha\)), then the expected (squared) error is:

\[
\begin{aligned}
E[x-\alpha]^2 &= Ex^2 - 2Ex\alpha + \alpha^2\\
&= (Ex)^2 - 2Ex\alpha + \alpha^2 + Ex^2 - 2(Ex)^2 + (Ex)^2\\
&= [Ex-\alpha]^2 + Ex^2 - E[2x(Ex)] + E[(Ex)^2]\\
&= [Ex-\alpha]^2 + E[x-Ex]^2
\end{aligned}
\]

Taking advantage of the linearity of expectation and adding a few extra cancelling terms, we end up with the representation:

$$
\mbox{Error} = \underbrace{[Ex-\alpha]^2}_{\mbox{bias}} + \underbrace{E[x-Ex]^2}_{\mbox{variance}}
$$

That's the mathematical equivalence. However, a more descriptive approach is as follows:

Bias is the squared difference between the true underlying value and the prediction of the learning process, averaged over our input datasets. Consistently wrong predictions mean large bias; bias is small when the predictions are consistently right, or when the average error across different training sets is roughly zero. Bias represents the domain knowledge (assumptions) we have built into the learning process: a linear model, for example, generally has high bias on a nonlinear problem, because no training set can make the linear assumption fit.

Variance is the variation in prediction (i.e. its consistency): it is large if different training sets result in different learned models. Linear models generally have lower variance. High variance generally results in overfitting: in effect, the learning model is fitting the noise in the training data and will not generalize well.

It's a useful analogy to think of most learning models as a box with two dials, bias and variance, where the setting of one affects the other. We can only try to find the “right” setting for the situation we are working with; hence the bias-variance tradeoff.
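A small simulation makes the two dials concrete. Below, polynomial models of degree 1 and degree 6 are fitted to many noisy training sets drawn from the same underlying sine function, and bias and variance are estimated at a single test point, exactly as in the decomposition above. The target function, noise level and polynomial degrees are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)
true_fn = np.sin                      # the "real" underlying process (alpha)
x_train = np.linspace(0, np.pi, 20)   # fixed design points
x_test = 1.0                          # point at which bias and variance are measured

for degree in (1, 6):
    preds = []
    for _ in range(500):              # many independent training sets
        y_noisy = true_fn(x_train) + rng.normal(0, 0.3, x_train.size)
        coeffs = np.polyfit(x_train, y_noisy, degree)
        preds.append(np.polyval(coeffs, x_test))
    preds = np.array(preds)
    bias_sq = (preds.mean() - true_fn(x_test)) ** 2   # [Ex - alpha]^2
    variance = preds.var()                            # E[x - Ex]^2
    print(f"degree {degree}: bias^2 = {bias_sq:.5f}, variance = {variance:.5f}")
```

The low-degree model shows larger bias and smaller variance; the high-degree model shows the opposite, which is exactly the tradeoff described above.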