# The Euler Method In R

The Euler Method is a very simple method for the numerical solution of initial-value problems. Although there are much better methods in practice, it is a nice, intuitive mechanism.

The objective is to find a solution to the equation

$$\frac{dy}{dt} = f(t,y)$$

over a grid of points (equally spaced, in our case). Euler's method uses the relation (essentially a Taylor expansion truncated after the first-order term):

$$y(t_{i+1}) = y(t_i) + h f(t_i, y(t_i))$$

In R, we can express this iterative solution as:

euler <- function(dy.dx=function(x,y){}, h=1E-7, y0=1, start=0, end=1) {
  nsteps <- (end-start)/h
  ys <- numeric(nsteps+1)
  ys[1] <- y0
  for (i in 1:nsteps) {
    x <- start + (i-1)*h
    ys[i+1] <- ys[i] + h*dy.dx(x, ys[i])
  }
  ys
}


Note that given the start and end points, and the size of each step, we figure out the number of steps. Inside the loop, we calculate each successive approximation.

An example using the differential equation

$$\frac{dy}{dx} = 3x - y + 8$$

is:

dy.dx <- function(x,y) { 3*x - y + 8 }
euler(dy.dx, start=0, end=0.5, h=0.1, y0=3)
[1] 3.00000 3.50000 3.98000 4.44200 4.88780 5.31902
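As a cross-check, here is a sketch of the same iteration in Python (an illustrative port, not part of the original post). For this particular equation with y(0) = 3, the exact solution is y = 3x + 5 - 2e^(-x), so we can compare the Euler approximation against it:

```python
import math

def euler(dy_dx, h, y0, start, end):
    # same iteration as the R version above
    nsteps = round((end - start) / h)
    ys = [y0]
    for i in range(nsteps):
        x = start + i * h
        ys.append(ys[-1] + h * dy_dx(x, ys[-1]))
    return ys

# dy/dx = 3x - y + 8, y(0) = 3; exact solution is y = 3x + 5 - 2*exp(-x)
ys = euler(lambda x, y: 3*x - y + 8, h=0.1, y0=3, start=0, end=0.5)
exact = 3*0.5 + 5 - 2*math.exp(-0.5)
print(ys[-1], exact)  # Euler overshoots slightly: 5.31902 vs ~5.28694
```

Shrinking h closes the gap, at the cost of more steps; this first-order error behaviour is exactly why better methods are preferred in practice.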


# Newton’s Method In R

Here is a toy example of implementing Newton’s method in R. I found some old code that I had written a few years ago when illustrating the difference between convergence properties of various root-finding algorithms, and this example shows a couple of nice features of R.

Newton's method is an iterative root-finding algorithm: we start with an initial guess $$x_0$$, and each successive guess is refined using the iteration:

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$

Here is a simple implementation of this in R:

newton <- function(f, tol=1E-12, x0=1, N=20) {
  h <- 0.001
  i <- 1; x1 <- x0
  p <- numeric(N)
  while (i <= N) {
    df.dx <- (f(x0+h) - f(x0)) / h
    x1 <- x0 - (f(x0) / df.dx)
    p[i] <- x1
    i <- i + 1
    if (abs(x1-x0) < tol) break
    x0 <- x1
  }
  return(p[1:(i-1)])
}


Note a couple of things:

* The step-size for numerical differentiation is hardcoded to be 0.001. This is arbitrary and should probably be a parameter.
* The algorithm will run until either the number of steps N has been reached, or the error tolerance $$\left|x_{n+1}-x_n\right| < \epsilon$$ is met, where $$\epsilon$$ is the tolerance parameter tol.
* The function returns a vector of the iterated x positions, whose length will be at most N.
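The same scheme can be sketched in Python (an illustrative port, not part of the original post), using the same forward-difference approximation to the derivative:

```python
def newton(f, x0=1.0, tol=1e-12, N=20, h=0.001):
    # forward-difference approximation to f'(x), as in the R version above
    xs = []
    for _ in range(N):
        df_dx = (f(x0 + h) - f(x0)) / h
        x1 = x0 - f(x0) / df_dx
        xs.append(x1)
        if abs(x1 - x0) < tol:
            break
        x0 = x1
    return xs

f = lambda x: x**3 + 4*x**2 - 10
print(newton(f, x0=1, N=10)[-1])  # converges to ~1.365230
```

The early break on |x1 - x0| < tol mirrors the R implementation, so the returned list may be shorter than N.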

Here is a simple example: finding the zeros of the function

$$f(x) = x^3 + 4x^2 - 10$$

which in R we can represent as:

f <- function(x) { x^3 + 4*x^2 -10 }

Let's plot the function over the range [1,2]:

x <- seq(1, 2, 0.1)
plot(x, f(x), type='l', lwd=1.5, main=expression(x^3 + 4*x^2 - 10))
abline(h=0)


It seems obvious from the plot that the zero of f(x) in the range [1,2] is somewhere between x=1.3 and x=1.4.

This is made even more clear if we tabulate the x,y values over this range:

> xy <- cbind(x, f(x))
> xy
        x
 [1,] 1.0 -5.000
 [2,] 1.1 -3.829
 [3,] 1.2 -2.512
 [4,] 1.3 -1.043
 [5,] 1.4  0.584
 [6,] 1.5  2.375
 [7,] 1.6  4.336
 [8,] 1.7  6.473
 [9,] 1.8  8.792
[10,] 1.9 11.299
[11,] 2.0 14.000


Using the function defined earlier, we can run the root-finding algorithm:

> p <- newton(f, x0=1, N=10)
> p
[1] 1.454256 1.368917 1.365238 1.365230 1.365230 1.365230 1.365230


This returns 1.365230 as the root of f(x). Plotting the last value in the iteration:

> abline(v=p[length(p)])


In the iteration loop, we use a schoolbook-style numerical derivative:

$$f'(x) \approx \frac{f(x+\epsilon)-f(x)}{\epsilon}$$

It's also worth noting that R has some support for symbolic derivatives, so we could use a symbolic function expression instead. A simple example of symbolic differentiation in R (note that R works with expression instances when doing symbolic differentiation):

> e <- expression(sin(x/2)*cos(x/4))
> dydx <- D(e, "x")
> dydx
cos(x/2) * (1/2) * cos(x/4) - sin(x/2) * (sin(x/4) * (1/4))


In order for this to be useful, obviously we need to be able to evaluate the calculated derivative. We can do this using eval:

> z <- seq(-1,1,.1)
> eval(dydx, list(x=z))
[1] 0.3954974 0.4146144 0.4320092 0.4475873 0.4612640 0.4729651 0.4826267
[8] 0.4901964 0.4956329 0.4989067 0.5000000 0.4989067 0.4956329 0.4901964
[15] 0.4826267 0.4729651 0.4612640 0.4475873 0.4320092 0.4146144 0.3954974


Note that we can bind the x parameter in the expression when calling eval().
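As a sanity check (a Python sketch, not part of the original post), the derivative that R's D() produced above can be verified against a central finite difference:

```python
import math

def g(x):
    # the original expression: sin(x/2) * cos(x/4)
    return math.sin(x/2) * math.cos(x/4)

def dg(x):
    # the symbolic derivative R produced:
    # cos(x/2) * (1/2) * cos(x/4) - sin(x/2) * (sin(x/4) * (1/4))
    return math.cos(x/2) * 0.5 * math.cos(x/4) - math.sin(x/2) * (math.sin(x/4) * 0.25)

# compare against a central difference at a few points
for x in [-1.0, 0.0, 0.5, 1.0]:
    fd = (g(x + 1e-6) - g(x - 1e-6)) / 2e-6
    assert abs(dg(x) - fd) < 1e-8

print(dg(0.0))  # 0.5, matching the middle entry of the eval() output above
```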

# Building QuickFIX on Solaris

I just had to build the QuickFIX C++ client library on Solaris, and here are a few notes that might be useful when doing the same:

* Make sure you have /usr/ccs/bin in your PATH (so that ar can be found).
* If you are using gmake, you may need to export MAKE=gmake before you run configure.
* If building static libs as well as .so files, run configure --enable-shared=yes.

Once the libraries are built, you can compile and link using:

$ g++ -I /usr/local/include/quickfix/include/ main.cpp libquickfix.a -o main -lxml2

This will statically compile against quickfix and dynamically link against libxml2.

# The Bias/Variance Tradeoff

Probably one of the nicest explanations of the bias/variance tradeoff is the one I found in the book Introduction to Information Retrieval (full book available online). The tradeoff can be explained mathematically, and also more intuitively.

The mathematical explanation is as follows: if we have a learning method that operates on a given set of input data (call it $$x$$) and a "real" underlying process that we are trying to approximate (call it $$\alpha$$), then the expected (squared) error is:

$$\begin{aligned} E[x-\alpha]^2 &= Ex^2 - 2Ex\alpha + \alpha^2\\ &= (Ex)^2 - 2Ex\alpha + \alpha^2 + Ex^2 - 2(Ex)^2 + (Ex)^2\\ &= [Ex-\alpha]^2 + Ex^2 - E[2x(Ex)] + E[(Ex)^2]\\ &= [Ex-\alpha]^2 + E[x-Ex]^2 \end{aligned}$$

Taking advantage of the linearity of expectation and adding a few extra cancelling terms, we end up with the representation:

$$Error = \underbrace{[Ex-\alpha]^2}_{bias} + \underbrace{E[x-Ex]^2}_{variance}$$

That's the mathematical equivalence. However, a more descriptive approach is as follows:

Bias is the squared difference between the true underlying distribution and the prediction of the learning process, averaged over our input datasets. Consistently wrong predictions equal large bias. Bias is small when the predictions are consistently right, or the average error across different training sets is roughly zero. Linear models generally have a high bias for nonlinear problems. Bias can represent the domain knowledge that we have built into the learning process: a linear assumption may be unsuitable for a nonlinear problem, and thus result in high bias.

Variance is the variation in prediction (or the consistency): it is large if different training sets result in different learning models. Linear models will generally have lower variance. High variance generally results in overfitting; in effect, the learning model is learning from noise, and will not generalize well.

It's a useful analogy to think of most learning models as a box with two dials, bias and variance, where the setting of one will affect the other. We can only try to find the "right" setting for the situation we are working with. Hence the bias-variance tradeoff.

# Financial Amnesia

The FT has a nice article on financial amnesia, which talks about the desire of the CFA UK discipline to incorporate some financial history into their curriculum, ostensibly to implant enough of a sense of deja vu in budding financial managers that when they encounter potential disaster scenarios, they may avoid repeating them.

I think this is a great idea. The only problem is that it probably won't have any meaningful impact beyond providing some fascinating course material for CFA students. The problem with an industry that makes its living from markets that essentially operate on two gears, fear and greed, is that despite all of the prior knowledge in the world, we will willingly impose a state of amnesia upon ourselves if we can convince ourselves that what is happening now is in some way different, or just different enough, from whatever disasters happened before. The Internet bubble, tulip-mania, and the various property booms and busts that we have seen over the last decade share at their core a common set of characteristics, but they are different (or "new") enough that we were able to live in a state of denial. The "fear and greed" mentality also means that even if you know you are operating in a bubble that will eventually burst, you carry on regardless, as you plan to be out of the game before it all goes bad (the Greater Fool theory).

Incidentally, if you want to read a beautifully written and entertaining account of financial bubbles by one of the greatest writers on this topic, you should read "A Short History of Financial Euphoria" by J.K. Galbraith. It packs a huge amount of wit and insight into its relatively small page count. NOTE: Galbraith also wrote the definitive account of the Great Crash of 1929, which rapidly became required reading around three years ago. It's also excellent; Galbraith has a beautiful turn of phrase.

# The Elements of Modern Javascript Style

In the spirit of the last post, I wanted to share my newfound interest in another language that I previously hated. I've always treated JavaScript like an unloved cousin. I hated dealing with it, and my brief encounters with it were really unenjoyable. However, I've had to deal with it a bit more recently, and thankfully my earlier encounters are distant memories; the more recent experience is much better. There are two reasons for this: one is the excellent jQuery framework, and the other is the disparate collection of elegant patterns and best practices that have made JavaScript into a reasonably elegant language to work with. The single best reference I have found so far that elucidates all of these is the excellent "JavaScript Web Applications" from O'Reilly, which expands on many facets of modern JavaScript usage. I would definitely recommend checking this book out.

# The Elements of Modern C++ Style

Herb Sutter has a fantastic short article over at his site on the new C++11 standard. The new extensions to the language are way overdue. One particular example that I really liked was the application of library algorithms and lambdas to create "extensions" to the language. For instance, the example below adds a "lock" construct (similar to the one in C#):

// C#
lock( mut_x ) {
    ... use x ...
}

// C++11 without lambdas: already nice, and more flexible
// (e.g., can use timeouts, other options)
{
    lock_guard<mutex> hold { mut_x };
    ... use x ...
}

// C++11 with lambdas, and a helper algorithm: C# syntax in C++
// Algorithm:
template<typename T, typename F>
void lock( T& t, F f ) {
    lock_guard<T> hold(t);
    f();
}

lock( mut_x, [&]{ ... use x ... });

Of course, all of this is dependent on compiler support. A couple of nice resources on the current state of C++11 support are: http://www.aristeia.com/C++11/C++11FeatureAvailability.htm and https://wiki.apache.org/stdcxx/C%2B%2B0xCompilerSupport

# Presentation on Building R Packages

Last week I gave a presentation to the Melbourne R User Group on Building R Packages. The talk covered a simple package example, and an example of interfacing R with native code. The slides are here:

The R community in Melbourne (and Australia in general) is exploding, and it was great to see so many people from different backgrounds there. Looking forward to the next event!

# Deducing the JDK Version of a .jar File

Here is a little script that uses Python to examine the contents of a jar file (or specifically the first .class file it comes across), read the major version byte, and map it to a JDK version. It may be useful if you have a bunch of jars compiled by different JDKs and want to figure out which is which.

#!/usr/bin/python
import zipfile
import sys
import re

class_pattern = re.compile(r"/?\w*\.class")

for arg in sys.argv[1:]:
    print '%s:' % arg,
    file = zipfile.ZipFile(arg, "r")
    for entry in file.filelist:
        if class_pattern.search(entry.filename):
            bytes = file.read(entry.filename)
            maj_version = ord(bytes[7])
            if maj_version == 45:
                print "JDK 1.1"
            elif maj_version == 46:
                print "JDK 1.2"
            elif maj_version == 47:
                print "JDK 1.3"
            elif maj_version == 48:
                print "JDK 1.4"
            elif maj_version == 49:
                print "JDK 5.0"
            elif maj_version == 50:
                print "JDK 6.0"
            break
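The header parsing itself is simple enough to sketch on its own (a Python 3 sketch, not part of the original post; the header bytes below are hand-built for illustration rather than read from a real class file). A class file starts with the 4-byte magic number 0xCAFEBABE, followed by two big-endian 16-bit fields: minor version, then major version.

```python
import struct

# hand-built 8-byte class-file header: magic 0xCAFEBABE, minor 0, major 50
header = bytes([0xCA, 0xFE, 0xBA, 0xBE, 0x00, 0x00, 0x00, 0x32])

# ">IHH" = big-endian unsigned 32-bit int, then two unsigned 16-bit ints
magic, minor, major = struct.unpack(">IHH", header)
assert magic == 0xCAFEBABE  # sanity-check the magic number
print(major)  # 50, which the script above maps to "JDK 6.0"
```

Reading ord(bytes[7]) in the script is the low byte of this same big-endian major-version field, which works because real major versions fit in a single byte.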


# Gdb Macros for R

When debugging R interactively, one hurdle to navigate is unwrapping SEXP objects to get at the inner data. Gdb has some useful macro functionality that allows you to wrap useful command sequences in reusable chunks. I recently put together the following macro that attempts to extract and print some useful info from a SEXP object.

It can be used as follows. For instance, given a SEXP called “e”:

(gdb) dumpsxp e
Type: LANGSXP (6)
Function:Type: SYMSXP (1)
"<-"
Args:Type: LISTSXP (2)
(SYMSXP,LISTSXP)

We can see that e is a LANGSXP, and the operator is "<-". Functions have different components; here we can see the function representation (the SYMSXP) and the function arguments (the LISTSXP).

Some knowledge of LANGSXP structure is useful here: if we know that for a LANGSXP, CAR(x) gives us the function and CDR(x) gives us the arguments, we can view the components individually.

To see the first component:

(gdb) dumpsxp CAR(e)
Type: SYMSXP (1)
"<-"

The arguments are given by the CDR of e. We can then crack open the list and view the function arguments, recursively looking through the pairlist until we get to the end:

(gdb) dumpsxp CDR(e)
Type: LISTSXP (2)
(SYMSXP,LISTSXP)
(gdb) dumpsxp CADR(e)
Type: SYMSXP (1)
"x"
(gdb) dumpsxp CADDR(e)
Type: LANGSXP (6)
Function:Type: SYMSXP (1)
"sin"
Args:Type: LISTSXP (2)
(REALSXP,NILSXP)
(gdb) dumpsxp CADDDR(e)
Type: NILSXP (0)

The NILSXP tells us that we’ve got to the end of the list.

The GDB macro is below. Put it in your .gdbinit to automatically load it when gdb starts up.

define dumpsxp
  if $arg0 == 0
    printf "uninitialized variable\n"
  else
    set $sexptype = TYPEOF($arg0)

    # Typename
    printf "Type: %s (%d)\n", typename($arg0), $sexptype

    # SYMSXP
    if $sexptype == 1
      # CHAR(PRINTNAME(x))
      print_char PRINTNAME($arg0)
    end

    # LISTSXP
    if $sexptype == 2
      printf "(%s,%s)\n", typename(CAR($arg0)), typename(CDR($arg0))
    end

    # CLOSXP
    if $sexptype == 3
      dumpsxp BODY($arg0)
    end

    # PROMSXP
    # Promises contain pointers to value, expr and env
    if $sexptype == 5
      printf "Promise under evaluation: %d\n", PRSEEN($arg0)
      printf "Expression: "
      dumpsxp ($arg0)->u.promsxp.expr
    end

    # LANGSXP
    if $sexptype == 6
      printf "Function:"
      dumpsxp CAR($arg0)
      printf "Args:"
      dumpsxp CDR($arg0)
    end

    # SPECIALSXP
    if $sexptype == 7
      printf "Special function: %s\n", R_FunTab[($arg0)->u.primsxp.offset].name
    end

    # BUILTINSXP
    if $sexptype == 8
      printf "Function: %s\n", R_FunTab[($arg0)->u.primsxp.offset].name
    end

    # CHARSXP
    if $sexptype == 9
      printf "length=%d\n", ((VECTOR_SEXPREC *)($arg0))->vecsxp.length
      print_char $arg0
    end

    # LGLSXP
    if $sexptype == 10
      set $lgl = *LOGICAL($arg0)
      if $lgl > 0
        printf "TRUE\n"
      end
      if $lgl == 0
        printf "FALSE\n"
      end
    end

    # INTSXP
    if $sexptype == 13
      printf "%d\n", *(INTEGER($arg0))
    end

    # REALSXP
    if $sexptype == 14
      print_veclen $arg0
      print_double $arg0
    end

    # STRSXP
    if $sexptype == 16
      print_veclen $arg0
      set $i = LENGTH($arg0)
      set $count = 0
      while ($count < $i)
        printf "Element #%d:\n", $count
        dumpsxp STRING_ELT($arg0, $count)
        set $count = $count + 1
      end
    end

    # VECSXP
    if $sexptype == 19
      print_veclen $arg0
    end

    # RAWSXP
    if $sexptype == 24
      print_veclen $arg0
    end
  end
end

define print_veclen
  printf "Vector length=%d\n", LENGTH($arg0)
end

define print_char
  # this may be a bit dodgy, as I am not using the aligned union
  printf "\"%s\"\n", (const char*)((VECTOR_SEXPREC *)($arg0) + 1)
end

define print_double
  printf "%g\n", *(double*)((VECTOR_SEXPREC *)($arg0) + 1)
end