The Geometry Of Least Squares

Here’s a simple and intuitive way of looking at the geometry of a least squares regression:

Take the bottom left point in the triangle below as the origin O. For the linear model:

$$Y=X\beta + \epsilon$$

Both $$Y$$ and $$X\beta$$ are vectors, and the residual vector $$\epsilon$$ is the difference between them. The standard least squares technique uses the squared length of the residual, $$\epsilon^T\epsilon = (Y-X\beta)^T(Y-X\beta)$$, as the error measure to be minimised, and minimising it leads to the calculation of the $$\beta$$ coefficient vector.

Least Squares Geometry

Geometrically, the $$\beta$$ coefficients calculated by the least squares regression minimise the squared length of the residual vector. The fitted vector $$X\hat{\beta}$$ turns out to be the orthogonal projection of $$Y$$ onto the column space of $$X$$ – i.e. the residual is the perpendicular that turns (O, $$Y$$, $$X\hat{\beta}$$) into a right-angled triangle.

The projection of $$Y$$ onto the column space of $$X$$ is performed by the projection (or "hat") matrix P, which is defined as

$$P = X\left(X^{T}X\right)^{-1}X^{T}$$

So $$X\hat{\beta} = \hat{Y} = PY$$.

Using the Pythagorean theorem:

$$Y^TY = \hat{Y}^T\hat{Y} + (Y-\hat{Y})^T(Y-\hat{Y})$$

In other words, the total sum of squares = sum of squares due to regression + residual sum of squares. This is a fundamental part of analysis of variance techniques.
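A quick numerical check of this decomposition (a minimal sketch using simulated data, not any particular dataset):

set.seed(1)
n <- 100
X <- cbind(1, rnorm(n))                  # design matrix with an intercept
Y <- X %*% c(2, 3) + rnorm(n)            # simulated response
P <- X %*% solve(t(X) %*% X) %*% t(X)    # projection ("hat") matrix
Yhat <- P %*% Y                          # fitted values: the projection of Y
e <- Y - Yhat                            # residuals, orthogonal to the fitted vector
c(total=sum(Y^2), decomposed=sum(Yhat^2) + sum(e^2))   # the two numbers agree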

Plotting Tick Data with ggplot2

Here are some examples of using ggplot2 and kdb+ together to produce some simple graphs of data stored in kdb+. I am using the qserver extension for R (http://code.kx.com/wsvn/code/cookbook_code/r/) to connect to a running kdb+ instance from within R.

First, let's create a dummy data set: a set of evenly-spaced timestamps and a random-walk price series:

ONE_SEC:`long$1e9
tab:([]time:.z.P+ONE_SEC*(til 1000);price:sums ?[1000?1.<0.5;-1;1])

Then import the data into R:

> tab <- execute(h,'select from tab')

Then plot a simple line graph – remember ggplot2 works natively with data frames:

> library(ggplot2)
> ggplot(tab, aes(x=time, y=price)) + geom_line() + ggtitle("Stock Price Evolution")

This will produce a line graph similar to the one below.

Next, we can do a simple bin count / histogram on the price series:

ggplot(tab, aes(x=price)) + geom_histogram()

which will produce a graph like the following.

We can adjust the bin width to get a more granular graph using the binwidth parameter:

> ggplot(tab, aes(x=price)) + geom_histogram(position="identity", binwidth=1)

We can also make use of some aesthetic attributes, e.g. fill colour – we can shade the histogram by the number of observations in each bin:

ggplot(tab, aes(x=price, fill=..count..)) + geom_histogram(position="identity", binwidth=1)

which results in the graph below.

Some other graphs: say I have a data frame with a bunch of currency tick data (bid/offer/mid prices), with the currencies interspersed. Here is a sample:

> head(ccys)
     sym               timestamp    bid    ask     mid
1 AUDJPY 2013-01-15 11:00:16.127 94.485 94.496 94.4905
2 AUDJPY 2013-01-15 11:00:22.592 94.486 94.496 94.4910
3 AUDJPY 2013-01-15 11:00:30.117 94.498 94.505 94.5015
4 AUDJPY 2013-01-15 11:00:30.325 94.498 94.506 94.5020
5 AUDJPY 2013-01-15 11:00:37.118 94.499 94.507 94.5030
6 AUDJPY 2013-01-15 11:00:47.348 94.526 94.536 94.5310

I want to add a column containing the log returns, calculated separately for each currency:

log.ret <- function(x)
  do.call("rbind",
    lapply(seq_along(x),
      function(i) cbind(x[[i]], lr=c(0, diff(log(x[[i]]$mid))))))
ccys <- log.ret(split(ccys, f=ccys$sym))

Then I can plot a simple line chart of the log returns, using the lovely facets feature in ggplot to split out a separate panel per symbol:

ggplot(ccys, aes(x=timestamp, y=lr)) + geom_line() + facet_grid(sym ~ .)

Which produces the following faceted panel plot.

Another nice graph is a visual summary of the tick frequency over time. This one uses a dummy column that represents a tick arrival. Note that in the following graph I have reduced the line width and set the alpha value to a tiny value (creating a large transparency effect), as otherwise the density of tick lines is too great. The overall effect is visually pleasing:

ccys <- cbind(ccys, dummy=rep(1,nrow(ccys)))
ggplot(ccys, aes(x=timestamp, y=dummy, ymin=0, ymax=1)) +
  geom_linerange(alpha=1/2, size=.01, width=.01) +
  facet_grid(sym ~ .) +
  theme(axis.text.y=element_blank()) +
  xlab("ticks") + ylab("time") +
  ggtitle("Tick Density")

Which results in the graph below.

Next time (time permitting) – covariance matrices and bivariate density plots...

Rmathlib and kdb+ part 3: Utility Functions

In the first two parts of this series, I looked at the basics of the interface I created between rmathlib and kdb+. In this post, I'll go through some of the convenience functions I wrote to emulate some basic R functionality. NOTE: I wrote these functions as a learning exercise to familiarise myself with q/kdb+ a bit more – they are very simplistic, and I am sure there are much better ways to do them (I know the very nice qml project does a lot of this stuff, plus interfaces to external BLAS libraries).

Some basics: load the library:

q)\l rmath.q

seq – sequence function

The first convenience function is seq, which is like R's seq() or NumPy's arange() in that it takes a start and end point and generates a range of numbers:

q)seq[1;100]
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26...

The function is just a curried wrapper around seqn, which also takes a step size:

q)seqn[1;10;.5]
1 1.5 2 2.5 3 3.5 4 4.5 5 5.5 6 6.5 7 7.5 8 8.5 9 9.5 10

This is handy when working with some of the probability generation functions shown last time. For instance, the commands below calculate the normal distribution (cdf) values for evenly spaced points from -3 to 3:

q)each[{pnorm[x;0;1]}] seqn[-3;3;.5]
0.001349898 0.006209665 0.02275013 0.0668072...

table – Symbol Tabulation

R provides a handy function to show summary count tables of factor levels. In q, this can be used as follows:

q)t:([] sym:1000?`A`B`C`D)
q)table[`sym;t]
B| 241
A| 244
C| 256
D| 259

This tabulates the column sym from the table t. The summary is ordered by increasing frequency count. NOTE: this has really no advantage over the standard

q)`x xasc select count i by sym from t
sym| x
---| ---
B  | 241
A  | 244
C  | 256
D  | 259

except some slight brevity.

range – Min/Max

The range function simply returns the boundaries of a set of values:

q)x:rnorm[10000]
q)range[x]
-3.685814 4.211363
q)abs (-) . range[x]   / absolute range
7.897177

summary – Min/Max/Mean/Median/IQR

The summary function provides summary stats, à la R's summary() function:

q)x:norm[10000;3;2]
q)summary x
min   | -4.755305
1q    | 1.59379
median| 2.972523
mean  | 2.966736
3q    | 4.336589
max   | 10.00284

quantile – Quantile Calculations

A very simple quantile calc:

q)x:norm[10000;3;2]
q)quantile[x;.5]
2.973137

hist – Bin Count

Very crude bin count – specify the data and the number of bins:

q)hist[x;10]
-4.919383| 14
-3.279589| 101
-1.639794| 601
0        | 1856
1.639794 | 3043
3.279589 | 2696
4.919383 | 1329
6.559177 | 319
8.198972 | 40
9.838766 | 1

diag – Identity Matrix Generation

q)diag 10
1 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0
0 0 0 1 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0
0 0 0 0 0 1 0 0 0 0
0 0 0 0 0 0 1 0 0 0
0 0 0 0 0 0 0 1 0 0
0 0 0 0 0 0 0 0 1 0
0 0 0 0 0 0 0 0 0 1

Scale functions

Sometimes it's very useful to scale an input data set – e.g. when feeding multiple inputs into a statistical model, large differences between the relative scales of the inputs, combined with finite-precision computer arithmetic, can result in some inputs being dwarfed by others. The scale function just adjusts the input as follows: $$X_{s}=(X-\mu)/\sigma$$. The example below scales two inputs with different ranges:

q)x:norm[10;0;1]; y:norm[10;5;3]
q)x
1.920868 -1.594028 -0.02312519 1.079606 -0.5310111 0.2762119 0.1218428 0.9584264 -0.4244091 -0.7981221
q)y
10.69666 2.357529 8.93505 3.65696 5.218461 3.246216 5.971919 7.557135 1.412827 1.246241
q)range x
-1.594028 1.920868
q)range y
1.246241 10.69666
q)range scale x
-1.74507 1.878671
q)range scale y
-1.232262 1.845551

There are other useful scaling measures, including the min/max scale: $$\frac{x-\mathrm{min}(x)}{\mathrm{max}(x)-\mathrm{min}(x)}$$. This is implemented using the minmax function:

q)minmax x
1 0 0.4469273 0.760658 0.302432 0.5320897 0.4881712 0.726182 0.3327606 0.226438
q)range minmax x
0 1f

There are other functions which are useful for scaling, e.g. the RMSD (root-mean-square deviation) $$\sqrt{\frac{\sum{x_i^2}}{N}}$$:

q)x:rnorm 1000
q)rms x
1.021065

nchoosek – Combinations

The nchoosek function calculates the number of combinations of N items chosen k at a time (i.e. $$\binom{N}{k}$$):

q)nchoosek[100;5]
7.528752e+07
q)each[{nchoosek[10;x]}] seq[1;10]
10 45 120 210 252 210 120 45 10 1f

The source file is available here: https://github.com/rwinston/kdb-rmathlib/blob/master/rmath_aux.q.

Rmathlib and kdb+, part 2 – Probability Distribution Functions

Following on from the last post on integrating some rmathlib functionality with kdb+, here is a sample walkthrough of how some of the functionality can be used, including some of the R-style wrappers I wrote to emulate some of the most commonly-used R commands in q.

Loading the rmath library

Firstly, load the rmathlib library interface:

q)\l rmath.q

Random Number Generation

R provides random number generation facilities for a number of distributions. This is built on a single underlying uniform generator (R provides many different RNG implementations, but in the case of rmathlib it uses a Marsaglia multicarry-type generator), combined with different techniques to generate numbers distributed according to the selected distribution. The standard technique is inversion, where a uniformly distributed number in [0,1] is mapped through the inverse of the probability distribution function onto the target distribution.
This is explained very nicely in the book “Non-Uniform Random Variate Generation”, which is available in full here: http://luc.devroye.org/rnbookindex.html.

In order to make random variate generation consistent and reproducible across R and kdb+, we need to be able to seed the RNG. The default RNG in rmathlib takes two integer seeds. We can set this in an R session as follows:

> .Random.seed[2:3] <- as.integer(c(123,456))

and the corresponding q command is:

q)sseed[123;456]

Conversely, getting the current seed value can be done using:

q)gseed[]
123 456i

The underlying uniform generator can be accessed using runif:

q)runif[100;3;4]
3.102089 3.854157 3.369014 3.164677 3.998812 3.092924 3.381564 3.991363 3.369..

which produces 100 random variates uniformly distributed between [3,4]. Then, for example, normal variates can be generated:

q)rnorm 10
-0.2934974 -0.334377 -0.4118473 -0.3461507 -0.9520977 0.9882516 1.633248 -0.5957762 -1.199814 0.04405314

This produces identical results in R:

> rnorm(10)
 [1] -0.2934974 -0.3343770 -0.4118473 -0.3461507 -0.9520977  0.9882516  1.6332482 -0.5957762 -1.1998144
[10]  0.0440531

Normally-distributed variables with a distribution of $$N(\mu,\sigma)$$ can also be generated:

q)dev norm[10000;3;1.5]
1.519263
q)avg norm[10000;3;1.5]
2.975766

Or we can alternatively scale a standard normal $$X \sim N(0,1)$$ using $$Y = \sigma X + \mu$$:

q)x:rnorm[1000]
q)`int$(avg x; dev x)
0 1i
q)y:(x*3)+5
q)`int$(avg y; dev y)
5 3i

Probability Distribution Functions

As well as random variate generation, rmathlib also provides other functions, e.g. the normal density function:

q)dnorm[0;0;1]
0.3989423

computes the normal density at 0 for a standard normal distribution. The second and third parameters are the mean and standard deviation of the distribution.

The normal distribution function is also provided:

q)pnorm[0;0;1]
0.5

computes the distribution function value at 0 for a standard normal (again with mean and standard deviation parameters).

Finally, the quantile function is the inverse of the distribution function – see the graph below, where the quantile value for .99 is mapped onto the distribution function value at that point, 2.32:

q)qnorm[.99;0;1]
2.326348

We can do a round trip via pnorm() and qnorm():

q)`int$ qnorm[ pnorm[3;0;1]-pnorm[-3;0;1]; 0; 1]
3i
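As a quick R-side illustration of the inversion technique described above (just a sketch, not part of the rmathlib wrappers), pushing uniform variates through the normal quantile function reproduces normal variates:

set.seed(123)
u <- runif(100000)            # U(0,1) variates
x <- qnorm(u, mean=0, sd=1)   # map through the inverse normal CDF
c(mean(x), sd(x))             # close to 0 and 1, as for rnorm()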

That's it for the distribution functions for now – rmathlib provides lots of different distributions, but I have only linked in the normal and uniform functions so far. There are some other functions that I have created that I will cover in a future post.

All code is on github: https://github.com/rwinston/kdb-rmathlib

[Check out part 3 of this series]

Integrating Rmathlib and kdb+

The R engine is usable in a variety of ways – one of the lesser-known features is that it provides a standalone math library that can be linked to from an external application. This library provides some nice functionality such as:

* Probability distribution functions (density/distribution/quantile functions);
* Random number generation for a large number of probability distributions

In order to make use of this functionality from q, I built a simple Rmathlib wrapper library. The C wrapper can be found here and is simply a set of functions that wrap the appropriate calls in Rmathlib. For example, a function to generate N randomly-generated Gaussian values using the underlying rnorm() function is:

K rnn(K n, K mu, K sigma) {
    int i, count = n->i;
    K ret = ktn(KF, count);
    for (i = 0; i < count; ++i)
        kF(ret)[i] = rnorm(mu->f, sigma->f);
    return ret;
}


These have to be imported and linked from a kdb+ session, which is done using special directives (the 2: verb). I decided to automate the process of generating these directives – the shell script below parses a set of function declarations in a delimited section of a C header file and produces the appropriate load statements:

INFILE=rmath.h
DLL=\`:rmath

echo "dll:$DLL"

DECLARATIONS=$(awk '/\/\/ BEGIN DECL/ {f=1;next} /\/\/ END DECL/ {f=0} f {sub(/K /,"",$0);print $0}' $INFILE)

for decl in $DECLARATIONS; do
    FNAME=${decl%%(*}
    ARGS=${decl##$FNAME}
    IFS=, read -r -a CMDARGS <<< "$ARGS"
    echo "${FNAME}:dll 2:(\`$FNAME;${#CMDARGS[*]})"
done
echo "\\l rmath_aux.q"

This generates a set of link commands such as the following:

dll:`:rmath
rn:dll 2:(`rn;2)
rnn:dll 2:(`rnn;3)
dn:dll 2:(`dn;3)
pn:dll 2:(`pn;3)
qn:dll 2:(`qn;3)
sseed:dll 2:(`sseed;2)
gseed:dll 2:(`gseed;1)
nchoosek:dll 2:(`nchoosek;2)

It also generates a call to load a second q script, rmath_aux.q, which contains a bunch of q wrappers and helper functions (I will write a separate post about that later). A makefile is included which generates the shared lib (once the appropriate paths to the R source files are set) and the q scripts. A sample q session looks like the following:

q) \l rmath.q
q) x:rnorm 1000    / generate 1000 normal variates
q) dnorm[0;0;1]    / normal density at 0 for a mean 0 sd 1 distribution

The project is available on github: https://github.com/rwinston/kdb-rmathlib. Note that loading rmath.q loads the rmath dll, which in turn loads the rmathlib dll, so the rmathlib dll should be available on the dynamic library load path.

[Check out Part 2 of this series]

Exporting Data From R to KDB

Here are the beginnings of a simple routine to convert R data frames to q format (in this case a dictionary). It uses the S3 dispatch mechanism to handle the conversion of different data types. It is extremely basic (I haven't even bothered to put in proper file output – I just capture the output of cat), but it was very quick to knock up. The code is mainly a top-level function called to_dict:

to_dict <- function(x) {
  cat(substitute(x), ":", sep="")
  nms <- names(x)
  for (n in nms) {
    cat("`", n, sep="")
  }
  cat("!(", sep="")
  r <- rep(c(";", ")"), times=c(length(nms)-1, 1))
  for (i in 1:length(nms)) {
    cat(qformat(x[[nms[i]]]), r[i], sep="")
  }
}

where qformat is a generic S3 function:

qformat <- function(x) {
  UseMethod("qformat")
}

qformat.default <- function(x) {
  cat("", format(x))
}

qformat.logical <- function(x) {
  cat(ifelse(x==TRUE, "1b", "0b"))
}

qformat.factor <- function(x) {
  cat("`", gsub("\\s", "", format(x)), sep="")
}

It can be used as follows (using the famous Anscombe quartet data):

> write(capture.output(to_dict(anscombe)), file="/tmp/anscombe.q")

Then within a q session:

q)\l /tmp/anscombe.q
q)anscombe
x1| 10 8 13 9 11 14 6 4 12 7 5
x2| 10 8 13 9 11 14 6 4 12 7 5
x3| 10 8 13 9 11 14 6 4 12 7 5
x4| 8 8 8 8 8 8 8 19 8 8 8
y1| 8.04 6.95 7.58 8.81 8.33 9.96 7.24 4.26 10.84 4.82 5.68
y2| 9.14 8.14 8.74 8.77 9.26 8.1 6.13 3.1 9.13 7.26 4.74
y3| 7.46 6.77 12.74 7.11 7.81 8.84 6.08 5.39 8.15 6.42 5.73
y4| 6.58 5.76 7.71 8.84 8.47 7.04 5.25 12.5 5.56 7.91 6.89

Density Estimation of High-Frequency Financial Data

Frequently we will want to estimate the empirical probability density function of real-world data and compare it to the theoretical density from one or more probability distributions. The following example shows the empirical and theoretical normal density for EUR/USD high-frequency tick data $$X$$ (which has been transformed using log returns and normalized via $$\frac{X_i-\mu_X}{\sigma_X}$$). The theoretical normal density is plotted over the range $$\left(\lfloor\mathrm{min}(X)\rfloor,\lceil\mathrm{max}(X)\rceil\right)$$. The results are in the figure below. The discontinuities and asymmetry of the discrete tick data, as well as the sharp kurtosis and heavy tails (a corresponding interval of $$\approx \left[-8,+7\right]$$ standard deviations away from the mean), are apparent from the plot.

Empirical and Theoretical Tick Density

We also show the theoretical and empirical density for the EUR/USD exchange rate log returns over different timescales.
We can see from these plots that the distribution of the log returns seems to be asymptotically converging to normality as the sampling interval increases. This is a typical empirical property of financial data.

Density Estimate Across Varying Timescales

The following R source generates empirical and theoretical density plots across the different timescales. The data is loaded from files that are sampled at different intervals. I can't supply the data, unfortunately, but you should get the idea.

# Function that reads Reuters CSV tick data and converts Reuters dates
# Assumes format is Date,Tick
readRTD <- function(filename) {
tickData <- read.csv(file=filename, header=TRUE, col.names=c("Date","Tick"))
tickData$Date <- as.POSIXct(strptime(tickData$Date, format="%d/%m/%Y %H:%M:%S"))
tickData
}

# Boilerplate function for Reuters FX tick data transformation and density plot
plot.reutersFXDensity <- function() {
filenames <- c("data/eur_usd_tick_26_10_2007.csv",
"data/eur_usd_1min_26_10_2007.csv",
"data/eur_usd_5min_26_10_2007.csv",
"data/eur_usd_hourly_26_10_2007.csv",
"data/eur_usd_daily_26_10_2007.csv")
labels <- c("Tick", "1 Minute", "5 Minutes", "Hourly", "Daily")
op <- par(mfrow=c(length(filenames), 2), mar=c(0,0,2,0), cex.main=2)
tickData <- c()
i <- 1
for (filename in filenames) {
tickData[[i]] <- readRTD(filename)
# Transform: $Y = \nabla\log(X_i)$
logtick <- diff(log(tickData[[i]]$Tick))
# Normalize: $\frac{(Y-\mu_Y)}{\sigma_Y}$
logtick <- (logtick-mean(logtick))/sd(logtick)
# Theoretical density range: $\left[\lfloor\mathrm{min}(Y)\rfloor,\lceil\mathrm{max}(Y)\rceil\right]$
x <- seq(floor(min(logtick)), ceiling(max(logtick)), .01)
plot(density(logtick), xlab="", ylab="", axes=FALSE, main=labels[i])
lines(x,dnorm(x), lty=2)
#legend("topleft", legend=c("Empirical","Theoretical"), lty=c(1,2))
plot(density(logtick), log="y", xlab="", ylab="", axes=FALSE, main="Log Scale")
lines(x,dnorm(x), lty=2)
i <- i + 1
}
par(op)
}
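Since the Reuters data files cannot be shared, here is a small self-contained sketch of the same comparison on synthetic heavy-tailed returns (a Student-t with 3 degrees of freedom stands in for the real tick returns):

set.seed(1)
x <- rt(10000, df=3)                             # synthetic heavy-tailed "returns"
x <- (x - mean(x)) / sd(x)                       # normalise
grid <- seq(floor(min(x)), ceiling(max(x)), .01)
plot(density(x), main="Empirical vs. theoretical normal density", xlab="", ylab="")
lines(grid, dnorm(grid), lty=2)                  # theoretical N(0,1) density
legend("topleft", legend=c("Empirical", "Theoretical"), lty=c(1, 2))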


Binomial Pricing Trees in R

Binomial Tree Simulation

The binomial model is a discrete grid generation method from $$t=0$$ to $$T$$. At each point in time ($$t+\Delta t$$) we can move up with probability $$p$$ and down with probability $$(1-p)$$. Because the up and down factors $$u$$ and $$d$$ remain constant throughout the generation process, an up move followed by a down move leads to the same value as a down move followed by an up move, so we end up with a recombining binary tree, or binomial lattice. Whereas a balanced binary tree of height $$h$$ has $$2^{h+1}-1$$ nodes, a binomial lattice of height $$h$$ has only $$\sum_{i=1}^{h+1}i = \frac{(h+1)(h+2)}{2}$$ nodes.

The algorithm to generate a binomial lattice of $$M$$ steps (i.e. of height $$M$$) given a starting value $$S_0$$, an up movement $$u$$, and down movement $$d$$, is:

FOR i=1 TO M
  FOR j=0 TO i
    S(j,i) = S(0) * u^j * d^(i-j)
  ENDFOR
ENDFOR

We can write this function in R and generate a graph of the lattice. A simple lattice generation function is below:

# Generate a binomial lattice
# for a given up, down, start value and number of steps
genlattice <- function(X0=100, u=1.1, d=.75, N=5) {
X <- c()
X[1] <- X0
count <- 2

for (i in 1:N) {
for (j in 0:i) {
X[count] <- X0 * u^j * d^(i-j)
count <- count + 1
}
}
return(X)
}


We can generate a sample lattice of 5 steps using symmetric up-and-down values:

> genlattice(N=5, u=1.1, d=.9)
[1] 100.000  90.000 110.000  81.000  99.000 121.000  72.900  89.100 108.900 133.100  65.610
[12]  80.190  98.010 119.790 146.410  59.049  72.171  88.209 107.811 131.769 161.051


In this case, the output is a flat vector of node values, ordered by time step and, within each step, from the lowest (all-down) state to the highest (all-up) state.
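To make that structure explicit, the flat vector can be split back into one group of node values per time step (a small sketch using the lattice just generated):

lattice <- genlattice(N=5, u=1.1, d=.9)
# step k contributes k+1 values: one for step 0, two for step 1, and so on
split(lattice, rep(0:5, times=1:6))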

We can nicely graph a binomial lattice given a tool like graphviz, and we can easily create an R function to generate a graph specification that we can feed into graphviz:

dotlattice <- function(S, labels=FALSE) {
shape <- ifelse(labels == TRUE, "plaintext", "point")

cat("digraph G {", "\n", sep="")
cat("node[shape=",shape,", samehead, sametail];","\n", sep="")
cat("rankdir=LR;","\n")

# Create a dot node for each element in the lattice
for (i in 1:length(S)) {
cat("node", i, "[label=\"", S[i], "\"];", "\n", sep="")
}

# The number of levels in a binomial lattice of length N
# is $\frac{\sqrt{8N+1}-1}{2}$
L <- ((sqrt(8*length(S)+1)-1)/2 - 1)

k<-1
for (i in 1:L) {
tabs <- rep("\t",i-1)
j <- i
while(j>0) {
cat("node",k,"->","node",(k+i),";\n",sep="")
cat("node",k,"->","node",(k+i+1),";\n",sep="")
k <- k + 1
j <- j - 1
}
}

cat("}", sep="")
}


This will simply output a dot script to the screen. We can capture this script and save it to a file by invoking:

> x<-capture.output(dotlattice(genlattice(N=8, u=1.1, d=0.9)))
> cat(x, file="/tmp/lattice1.dot")


We can then invoke dot from the command-line on the generated file:

$ dot -Tpng -o lattice.png -v lattice1.dot

The resulting graph looks like the following:

Binomial Lattice (no labels)

If we want to add labels to the lattice vertices, we can add the labels attribute:

> x <- capture.output(dotlattice(genlattice(N=8, u=1.1, d=0.9), labels=TRUE))
> cat(x, file="/tmp/lattice1.dot")

Binomial Lattice (labels)

Statistical Arbitrage II: Simple FX Arbitrage Models

In the context of the foreign exchange markets, there are several simple no-arbitrage conditions which, if violated outside of the boundary conditions imposed by transaction costs, should provide the arbitrageur with a theoretical profit when market conditions converge to theoretical normality. Detection of arbitrage conditions in the FX markets requires access to high-frequency tick data, as arbitrage opportunities are usually short-lived. Various market inefficiency conditions exist in the FX markets. Apart from the basic strategies outlined in the following sections, other transient opportunities may exist, if the trader or trading system can detect and act on them quickly enough.

Round-Trip Arbitrage

Possibly the most well-known no-arbitrage boundary condition in foreign exchange is the covered interest parity condition, which is expressed as:

$(1+r_d) = \frac{1}{S_t}(1+r_f)F_t$

This specifies that it should not be possible to earn a positive return by borrowing domestic assets at $$r_d$$ for lending abroad at $$r_f$$ whilst covering the exchange rate risk through a forward contract $$F_t$$ of equal maturity.

Accounting for transaction costs, we have the following no-arbitrage relationships:

$(1+r_d^a) \geq \frac{1}{S^a}(1+r_f^b)F^b$
$(1+r_f^b) \geq S^b(1+r_d^b)\frac{1}{F^a}$

For example, the first condition states that the cost of borrowing domestic currency at the ask rate ($$1+r_d^a$$) should be at least as great as the return from converting that currency into foreign currency at the prevailing spot rate ($$\frac{1}{S^a}$$, assuming that the spot quote $$S^a$$ represents the cost of a unit of domestic currency in terms of foreign currency), investing it at $$1+r_f^b$$, and finally converting it back into domestic currency via a forward trade ($$F^b$$). If this condition is violated, then we can perform round-trip arbitrage by converting, investing, and re-converting at the end of the investment term. Persistent violations of this condition are the basis for the roaring carry trade, in particular between currencies such as the Japanese Yen and higher-yielding currencies such as the New Zealand dollar and the Euro.
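A minimal sketch of checking the parity condition numerically, using hypothetical rates (the quoting convention follows the formula above; none of these numbers are real market data):

r_d <- 0.05       # domestic interest rate for the period
r_f <- 0.01       # foreign interest rate for the period
S   <- 0.80       # spot rate
Fwd <- 0.8317     # forward rate of matching maturity
lhs <- 1 + r_d                     # cost of borrowing domestically
rhs <- (1 / S) * (1 + r_f) * Fwd   # return from the covered round trip
c(domestic=lhs, roundtrip=rhs, violation=rhs > lhs)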

Triangular Arbitrage
A reduced form of FX market efficiency is triangular arbitrage, which exploits the geometric relationship between three currency pairs. Triangular arbitrage comes in two forms, forward arbitrage and reverse arbitrage; the corresponding relationships are defined below.

$$\left(\frac{C_1}{C_2}\right)_{ask} \left(\frac{C_2}{C_3}\right)_{ask} = \left(\frac{C_1}{C_3}\right)_{bid} \\ \left(\frac{C_1}{C_2}\right)_{bid} \left(\frac{C_2}{C_3}\right)_{bid} = \left(\frac{C_1}{C_3}\right)_{ask}$$

With two-way high-frequency prices, we can simultaneously test for the presence of forward and reverse arbitrage.

A contrived example follows. Suppose we have the following theoretical two-way tradeable prices: $$\left(\frac{USD}{JPY}\right) = 90/110$$, $$\left(\frac{GBP}{USD}\right) = 1.5/1.8$$, and $$\left(\frac{JPY}{GBP}\right) = 150/200$$. By the principle of triangular arbitrage, the theoretical two-way spot rate for JPY/GBP should be $$135/180$$. Hence, we can see that JPY is overvalued relative to GBP. We can take advantage of this inequality as follows, assuming our theoretical equity is 1 USD:

• Pay 1 USD and receive 90 JPY ;
• Sell 90 JPY and receive $$\left(\frac{90}{135}\right)$$ GBP ;
• Pay $$\left(\frac{90}{135}\right)$$ GBP, and receive $$\left(\frac{90}{135}\right) \times$$ 1.5 USD = 1.32 USD.

We can see that reverse triangular arbitrage can detect a selling opportunity (i.e. the bid currency is overvalued), whilst forward triangular arbitrage can detect a buying opportunity (the ask currency is undervalued).

The list of candidate currencies could be extended, and the arbitrage condition could be elegantly represented by a data structure called a directed graph. This would involve creating an adjacency matrix $$R$$, in which an element $$R_{i,j}$$ contains a measure representing the cost of transferring between currency $$i$$ and currency $$j$$.
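A small sketch of that idea in R (the rates are hypothetical mid rates, loosely based on the contrived example above): storing the negative log of each conversion rate as the edge cost means that a cycle whose costs sum to a negative number is an arbitrage opportunity.

ccy   <- c("USD", "JPY", "GBP")
rates <- matrix(c(    1,  90, 1/1.5,
                   1/90,   1, 1/150,
                    1.5, 150,     1),
                nrow=3, byrow=TRUE, dimnames=list(ccy, ccy))
cost <- -log(rates)   # edge weights of the directed currency graph
# total cost around the two possible triangular cycles
cycle1 <- c("USD", "JPY", "GBP", "USD")
cycle2 <- rev(cycle1)
sum(cost[cbind(cycle1[-4], cycle1[-1])])   # > 0: this direction loses money
sum(cost[cbind(cycle2[-4], cycle2[-1])])   # < 0: the reverse cycle is profitable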

Estimating Position Risk
When executing an arbitrage trade, there are some elements of risk. An arbitrage trade will normally involve multiple legs which must be executed simultaneously and at the specified price in order for the trade to be successful. As most arbitrage trades capitalize on small mispricings in asset values, and rely on large trading volumes to achieve significant profit, even a minor movement in the execution price can be disastrous. Hence, a trading algorithm should allow for both trading costs and slippage, normally by adding a margin to the profitability ratio. The main risk in holding an FX position is related to price slippage, and hence, the variance of the currency that we are holding.

Quasi-Random Number Generation in R

Random number generation is a core topic in numerical computer science. There are many efficient
algorithms for generating random (strictly speaking, pseudo-random) variates
from different probability distributions. The figure below shows a
sampling of 1000 two-dimensional random variates from the standard Gaussian and
Cauchy distributions, respectively. The size of the extreme deviations of the
Cauchy distribution is apparent from the graph.

However, sometimes we need to produce numbers that are more evenly distributed (quasi-random numbers). For example, in a Monte Carlo integration exercise, we can get faster convergence with a lower error bound using so-called low-discrepancy sequences, which are available in R via the GSL library. The figure below shows two-dimensional normal and Sobol (a low-discrepancy generator) variates.
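As a rough illustration of the convergence claim (a sketch only, using the same gsl package as in the plotting code below): estimating the integral of f(x,y) = xy over the unit square (true value 0.25) by averaging the integrand over pseudo-random and Sobol points typically gives a smaller error for the Sobol sequence at the same sample size.

library(gsl)
n <- 10000
q <- qrng_alloc(type="sobol", 2)
sobol  <- qrng_get(q, n)                 # n x 2 low-discrepancy points in [0,1]^2
pseudo <- matrix(runif(2 * n), ncol=2)   # n x 2 pseudo-random points
f <- function(p) p[,1] * p[,2]           # integrand f(x,y) = x*y
c(pseudo=mean(f(pseudo)), sobol=mean(f(sobol)), truth=0.25)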

Normal, Cauchy, and Sobol 2-d variates

To generate the graph, I used the GSL library for R, as shown below:

library(gsl)
q <- qrng_alloc(type="sobol", 2)
rs <- qrng_get(q,1000)
par(mfrow=c(3,1))
plot(rnorm(1000), rnorm(1000), pch=20, main="~N(0,1)",
ylab="", xlab="")
plot(rs, pch=20, main="Sobol",
ylab="", xlab="")
plot(rcauchy(1000), rcauchy(1000),pch=20,
main="~C(0,1)", ylab="",xlab="")


The property of low-discrepancy generators is even more apparent if we view the random variates in a higher dimension, for example the figure below shows the variates as a 3-dimensional cube. Note how the clustering around the centre of the cube is much more pronounced for the Gaussian cube.

3D Random Variates

To plot the figure above, I used the GSL and Lattice libraries:

library(gsl)
library(lattice)
q <- qrng_alloc(type="sobol", 3)
npoints <- 200
rs <- qrng_get(q,npoints)
ltheme <- canonical.theme(color = FALSE)
ltheme$strip.background$col <- "transparent"
lattice.options(default.theme = ltheme)
trellis.par.set(layout.heights =
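A minimal, self-contained sketch of how such a 3-d comparison could be drawn with lattice's cloud() function (the panel arrangement and settings here are assumptions, not the original code):

library(gsl)
library(lattice)
q <- qrng_alloc(type="sobol", 3)
sobol <- qrng_get(q, 200)                 # 200 Sobol points in the unit cube
gauss <- matrix(rnorm(3 * 200), ncol=3)   # 200 Gaussian points
p1 <- cloud(sobol[,3] ~ sobol[,1] * sobol[,2], pch=20, main="Sobol",
            xlab="", ylab="", zlab="")
p2 <- cloud(gauss[,3] ~ gauss[,1] * gauss[,2], pch=20, main="~N(0,1)",
            xlab="", ylab="", zlab="")
print(p1, split=c(1,1,2,1), more=TRUE)    # two panels side by side
print(p2, split=c(2,1,2,1))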