Density Estimation of High-Frequency Financial Data

Frequently we will want to estimate the empirical probability density function of real-world data and compare it to the theoretical density from one or more probability distributions. The following example shows the empirical density and the theoretical normal density for EUR/USD high-frequency tick data \(X\) (which has been transformed using log-returns and normalized via \(\frac{X_i-\mu_X}{\sigma_X}\)). The theoretical normal density is plotted over the range \(\left(\lfloor\mathrm{min}(X)\rfloor,\lceil\mathrm{max}(X)\rceil\right)\). The results are in the figure below. The discontinuities and asymmetry of the discrete tick data, as well as the high kurtosis and heavy tails (the sample spans roughly \([-8,+7]\) standard deviations around the mean), are apparent from the plot.

Empirical and Theoretical Tick Density

We also show the theoretical and empirical density for the EUR/USD exchange rate log returns over different timescales. We can see from these plots that the distribution of the log returns appears to converge towards normality as the sampling interval increases. This is a typical empirical property of financial data (sometimes called aggregational Gaussianity).

Density Estimate Across Varying Timescales

The following R source generates empirical and theoretical density plots across different timescales. The data is loaded from files that are sampled at different intervals. I can't supply the data, unfortunately, but you should get the idea.

# Function that reads Reuters CSV tick data and converts Reuters dates
# Assumes format is Date,Tick
readRTD <- function(filename) {
  tickData <- read.csv(file=filename, header=TRUE, col.names=c("Date","Tick"))
  tickData$Date <- as.POSIXct(strptime(tickData$Date, format="%d/%m/%Y %H:%M:%S"))
  tickData
}

# Boilerplate function for Reuters FX tick data transformation and density plot
plot.reutersFXDensity <- function() {
  filenames <- c("data/eur_usd_tick_26_10_2007.csv",
                 "data/eur_usd_1min_26_10_2007.csv",
                 "data/eur_usd_5min_26_10_2007.csv",
                 "data/eur_usd_hourly_26_10_2007.csv",
                 "data/eur_usd_daily_26_10_2007.csv")
  labels <- c("Tick", "1 Minute", "5 Minutes", "Hourly", "Daily")

  # Save the graphics state so it can be restored at the end
  op <- par(mfrow=c(length(filenames), 2), mar=c(0,0,2,0), cex.main=2)
  tickData <- list()
  i <- 1
  for (filename in filenames) {
    tickData[[i]] <- readRTD(filename)
    # Transform: `$Y = \nabla\log(X_i)$`
    logtick <- diff(log(tickData[[i]]$Tick))
    # Normalize: `$\frac{(Y-\mu_Y)}{\sigma_Y}$`
    logtick <- (logtick-mean(logtick))/sd(logtick)
    # Theoretical density range: `$\left[\lfloor\mathrm{min}(Y)\rfloor,\lceil\mathrm{max}(Y)\rceil\right]$`
    x <- seq(floor(min(logtick)), ceiling(max(logtick)), .01)
    plot(density(logtick), xlab="", ylab="", axes=FALSE, main=labels[i])
    lines(x, dnorm(x), lty=2)
    #legend("topleft", legend=c("Empirical","Theoretical"), lty=c(1,2))
    plot(density(logtick), log="y", xlab="", ylab="", axes=FALSE, main="Log Scale")
    lines(x, dnorm(x), lty=2)
    i <- i + 1
  }
  par(op)
}

Binomial Pricing Trees in R

Binomial Tree Simulation

The binomial model is a discrete grid generation method from \(t=0\) to \(T\). At each point in time (\(t+\Delta t\)) we can move up with probability \(p\) and down with probability \((1-p)\). As the probabilities of an up and down movement remain constant throughout the generation process, we end up with a recombining binary tree, or binary lattice. Whereas a balanced binomial tree with height \(h\) has \(2^{h+1}-1\) nodes, a recombining lattice with \(h\) steps has only \(\sum_{i=1}^{h+1}i = \frac{(h+1)(h+2)}{2}\) nodes.
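
A quick sanity check of the two growth rates:

# Node counts: a full binary tree of height h versus a
# recombining lattice with h steps
h <- 5
2^(h + 1) - 1           # 63 nodes in the full binary tree
(h + 1) * (h + 2) / 2   # 21 nodes in the lattice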

The algorithm to generate a binomial lattice of \(M\) steps (i.e. of height \(M\)) given a starting value \(S_0\), an up movement \(u\), and down movement \(d\), is:


FOR i=1 to M
  FOR j=0 to i
    STATE S(j,i) = S(0)*u^j*d^(i-j)
  ENDFOR
ENDFOR

We can write this function in R and generate a graph of the lattice. A simple lattice generation function is below:

# Generate a binomial lattice
# for a given up, down, start value and number of steps
genlattice <- function(X0=100, u=1.1, d=.75, N=5) {
  X <- c()
  X[1] <- X0
  count <- 2

  for (i in 1:N) {
    for (j in 0:i) {
      X[count] <- X0 * u^j * d^(i-j)
      count <- count + 1
    }
  }
  return(X)
}

We can generate a sample lattice of 5 steps using symmetric up-and-down values:

> genlattice(N=5, u=1.1, d=.9)
 [1] 100.000  90.000 110.000  81.000  99.000 121.000  72.900  89.100 108.900 133.100  65.610
[12]  80.190  98.010 119.790 146.410  59.049  72.171  88.209 107.811 131.769 161.051

In this case, the output is a single vector containing the node values level by level, from the root to the final step; within each level, the values run from the lowest (all down moves) to the highest (all up moves).
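
Since the vector stores the lattice level by level, the nodes for step \(i\) (counting from zero) occupy positions \(\frac{i(i+1)}{2}+1\) through \(\frac{(i+1)(i+2)}{2}\). A small helper along these lines (hypothetical, not part of the code above) extracts a single level:

# Extract the node values for step i (0-based) from a lattice
# vector produced by genlattice()
lattice.level <- function(X, i) {
  X[(i * (i + 1) / 2 + 1):((i + 1) * (i + 2) / 2)]
}

> lattice.level(genlattice(N=5, u=1.1, d=.9), 2)
[1]  81  99 121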

We can graph a binomial lattice nicely with a tool like Graphviz, and we can easily create an R function to generate a graph specification (a dot script) that we can feed into Graphviz:

dotlattice <- function(S, labels=FALSE) {
    shape <- ifelse(labels == TRUE, "plaintext", "point")

    cat("digraph G {", "\n", sep="")
    cat("node[shape=",shape,", samehead, sametail];","\n", sep="")
    cat("rankdir=LR;","\n")

    cat("edge[arrowhead=none];","\n")

    # Create a dot node for each element in the lattice
    for (i in 1:length(S)) {
        cat("node", i, "[label=\"", S[i], "\"];", "\n", sep="")
    }

    # A lattice vector of length N contains `$\frac{\sqrt{8N+1}-1}{2}$`
    # levels; edges are generated from all but the last level
    L <- ((sqrt(8*length(S)+1)-1)/2 - 1)

    # Connect each node to its two successors in the next level
    k <- 1
    for (i in 1:L) {
        j <- i
        while (j > 0) {
            cat("node",k,"->","node",(k+i),";\n",sep="")
            cat("node",k,"->","node",(k+i+1),";\n",sep="")
            k <- k + 1
            j <- j - 1
        }
    }

    cat("}", sep="")
}

This will simply output a dot script to the screen. We can capture this script and save it to a file by invoking:

> x<-capture.output(dotlattice(genlattice(N=8, u=1.1, d=0.9)))
> cat(x, file="/tmp/lattice1.dot")

We can then invoke dot from the command-line on the generated file:

$ dot -Tpng -o lattice.png /tmp/lattice1.dot

The resulting graph looks like the following:

Binomial Lattice (no labels)

If we want to add labels to the lattice vertices, we can set the labels argument:

> x<-capture.output(dotlattice(genlattice(N=8, u=1.1, d=0.9), labels=TRUE))
> cat(x, file="/tmp/lattice1.dot")

Binomial Lattice (labels)

Statistical Arbitrage II: Simple FX Arbitrage Models

In the context of the foreign exchange markets, there are several simple no-arbitrage conditions which, if violated beyond the boundaries imposed by transaction costs, should provide the arbitrageur with a theoretical profit when market conditions converge to theoretical normality.

Detection of arbitrage conditions in the FX markets requires access to high-frequency tick data, as arbitrage opportunities are usually short-lived. Various market inefficiency conditions exist in the FX markets. Apart from the basic strategies outlined in the following sections, other transient opportunities may exist, if the trader or trading system can detect and act on them quickly enough.

Round-Trip Arbitrage
Possibly the most well-known no-arbitrage boundary condition in foreign exchange is the covered interest parity condition. The covered interest parity condition is expressed as:

\[ (1+r_d) = \frac{1}{S_t}(1+r_f)F_t \]

which specifies that it should not be possible to earn a positive return by borrowing domestic assets at \(r_d\) to lend abroad at \(r_f\) whilst covering the exchange rate risk through a forward contract \(F_t\) of equal maturity.

Accounting for transaction costs, we have the following no-arbitrage relationships:

\[ (1+r_d^a) \geq \frac{1}{S^a}(1+r_f^b)F^b \]
\[ (1+r_f^b) \geq S^b(1+r_d^b)\frac{1}{F^a} \]

For example, the first condition states that the cost of borrowing domestic currency at the ask rate (\(1+r_d^a\)) should be at least equal to the proceeds of converting that currency into foreign currency (\(\frac{1}{S^a}\)) at the prevailing spot rate \(S^a\) (assuming that the spot quote \(S^a\) represents the cost of a unit of domestic currency in terms of foreign currency), investing it at \(1+r_f^b\), and finally converting it back into domestic currency via a forward trade at the bid rate (\(F^b\)). If this condition is violated, then we can perform round-trip arbitrage by converting, investing, and re-converting at the end of the investment term. Persistent violations of this condition are the basis for the carry trade, in particular between currencies such as the Japanese Yen and higher-yielding currencies such as the New Zealand dollar and the Euro.
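
As a minimal sketch of how the first condition might be monitored (the function name and all rates and quotes below are hypothetical illustrative values):

# Check the first covered interest parity no-arbitrage bound:
# borrow domestic at the ask, convert at spot, invest abroad at
# the bid, and sell the proceeds forward
cip.check <- function(rd.ask, rf.bid, S.ask, F.bid) {
  roundtrip <- (1 / S.ask) * (1 + rf.bid) * F.bid
  # Positive value = theoretical profit per unit borrowed
  max(roundtrip - (1 + rd.ask), 0)
}

> cip.check(rd.ask=0.05, rf.bid=0.03, S.ask=1.25, F.bid=1.30)
[1] 0.0212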

Triangular Arbitrage
A reduced form of FX market efficiency is triangular arbitrage, which exploits the geometric relationship between three currency pairs. Triangular arbitrage is defined in two forms: forward arbitrage and reverse arbitrage. These relationships are defined below.

$$ \left(\frac{C_1}{C_2}\right)_{ask} \left(\frac{C_2}{C_3}\right)_{ask} = \left(\frac{C_1}{C_3}\right)_{bid} \\
\left(\frac{C_1}{C_2}\right)_{bid} \left(\frac{C_2}{C_3}\right)_{bid} = \left(\frac{C_1}{C_3}\right)_{ask} $$

With two-way high-frequency prices, we can simultaneously calculate the presence of forward and reverse arbitrage.

A contrived example follows. Suppose we have the following theoretical two-way tradeable prices: \(\left(\frac{USD}{JPY}\right) = 90/110\), \(\left(\frac{GBP}{USD}\right) = 1.5/1.8\), and \(\left(\frac{JPY}{GBP}\right) = 150/200\). By the principle of triangular arbitrage, the theoretical two-way spot rate for JPY/GBP should be \(135/198\). Hence, we can see that JPY is overvalued relative to GBP at the quoted market rate. We can take advantage of this inequality as follows, assuming our theoretical equity is 1 USD:

  • Pay 1 USD and receive 110 JPY;
  • Sell 110 JPY and receive \(\left(\frac{110}{150}\right)\) GBP;
  • Pay \(\left(\frac{110}{150}\right)\) GBP, and receive \(\left(\frac{110}{150}\right) \times 1.8\) USD = 1.32 USD.

We can see that reverse triangular arbitrage can detect a selling opportunity (i.e. the bid currency is overvalued), whilst forward triangular arbitrage can detect a buying opportunity (the ask currency is undervalued).

The list of candidate currencies could be extended, and the arbitrage condition can be elegantly represented by a data structure called a directed graph. This involves creating an adjacency matrix \(R\), in which element \(R_{i,j}\) contains a measure representing the cost of transferring between currency \(i\) and currency \(j\), as sketched below.
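
A minimal sketch of this idea, using hypothetical mid quotes and a single hard-coded triangle:

# Adjacency matrix of exchange rates: R[i,j] is the price of one
# unit of currency i in units of currency j (hypothetical quotes)
ccy <- c("USD", "JPY", "GBP")
R <- matrix(c(   1, 100,  0.60,
              0.01,   1, 0.006,
              1.70, 170,     1),
            nrow=3, byrow=TRUE, dimnames=list(ccy, ccy))

# Product of rates around the cycle USD -> JPY -> GBP -> USD;
# a value above 1 (net of transaction costs) signals arbitrage
R["USD","JPY"] * R["JPY","GBP"] * R["GBP","USD"]   # 1.02

# Equivalently, with edge weights -log(R[i,j]), a profitable cycle
# has negative total weight, so a negative-cycle search such as
# Bellman-Ford generalizes the test to many currencies
sum(-log(c(R["USD","JPY"], R["JPY","GBP"], R["GBP","USD"])))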

Triangular Graph

Estimating Position Risk
When executing an arbitrage trade, there are some elements of risk. An arbitrage trade will normally involve multiple legs which must be executed simultaneously and at the specified price in order for the trade to be successful. As most arbitrage trades capitalize on small mispricings in asset values, and rely on large trading volumes to achieve significant profit, even a minor movement in the execution price can be disastrous. Hence, a trading algorithm should allow for both trading costs and slippage, normally by adding a margin to the profitability ratio. The main risk in holding an FX position is related to price slippage, and hence, the variance of the currency that we are holding.

High-Frequency Statistical Arbitrage

Computational statistical arbitrage systems are now de rigueur, especially for high-frequency, liquid markets (such as FX). Statistical arbitrage can be defined as an extension of riskless arbitrage, and is quantified more precisely as an attempt to exploit small and consistent regularities in asset price dynamics through a suitable framework for statistical modelling.

Statistical arbitrage has been defined formally (e.g. by Jarrow) as a zero initial cost, self-financing strategy with cumulative discounted value \(v(t)\) such that:

  • \( v(0) = 0 \),
  • \( \lim_{t\to\infty} E^P[v(t)] > 0 \),
  • \( \lim_{t\to\infty} P(v(t) < 0) = 0 \),
  • \( \lim_{t\to\infty} \frac{Var^P[v(t)]}{t}=0 \mbox{ if } P(v(t)<0) > 0 \mbox{ , } \forall{t} < \infty \)

These conditions can be described as follows: (1) the position has a zero initial cost (it is a self-financing trading strategy), (2) the expected discounted profit is positive in the limit, (3) the probability of a loss converges to zero, and (4) a time-averaged variance measure converges to zero if the probability of a loss does not become zero in finite time. The fourth condition separates a standard arbitrage from a statistical arbitrage opportunity.

We can represent a statistical arbitrage condition as $$ \left| \phi(X_t - SA(X_t))\right| < \mbox{TransactionCost} $$

where \(\phi(\cdot)\) is the payoff (profit) function, \(X\) is an arbitrary asset (or weighted basket of assets), and \(SA(X)\) is a synthetic asset constructed to replicate the payoff of \(X\).

Some popular statistical arbitrage techniques are described below.

Index Arbitrage

Index arbitrage is a strategy undertaken when the traded value of an index (for example, the index futures price) moves sufficiently far away from the weighted components of the index (see Hull for details). For example, for an equity index, the no-arbitrage condition could be expressed as:

\[ \left| F_t - \sum_{i} w_i S_t^i e^{(r-q_i)(T-t)}\right| < \mbox{Cost}\]

where \(q_i\) is the dividend rate for stock \(i\), and \(F_t\) is the index futures price at time \(t\). The deviation between the futures price and the weighted index basket is called the basis. Index arbitrage was one of the earliest applications of program trading.
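
A minimal sketch of the resulting signal, with hypothetical weights, prices, yields, and costs:

# Fair value of an index future from its weighted components,
# using the cost-of-carry relationship above (inputs hypothetical)
index.arb <- function(F.t, S, w, q, r, tau, cost) {
  fair <- sum(w * S * exp((r - q) * tau))
  basis <- F.t - fair
  list(fair=fair, basis=basis, trade=abs(basis) > cost)
}

index.arb(F.t=1010, S=c(50, 200, 120), w=c(4, 2, 3),
          q=c(0.02, 0.01, 0.03), r=0.05, tau=0.25, cost=2)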

An alternative form of index arbitrage was a system in which sufficiently large deviations between the forecasted variance of the relationship between index pairs (estimated by regression) and the implied volatilities on the indices (estimated from index option prices) were classed as an arbitrage opportunity. There are many variations on this theme in operation today, often based on the VIX market.

Pairs Trading

Statistical pairs trading is based on the notion of relative pricing: securities with similar characteristics should be priced roughly equally. Typically, a long-short position in two assets is created such that the portfolio is uncorrelated with market returns (i.e. it has a negligible beta). The basis in this case is the spread between the two assets. Depending on whether the trader expects the spread to contract or expand, the trade is called shorting the spread or buying the spread. Such trades are also called convergence trades.

A popular and powerful statistical technique used in pairs trading is cointegration: the identification of a linear combination of multiple non-stationary data series that forms a stationary (and hence more predictable) series.
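
The following sketch illustrates the idea on simulated data, assuming the tseries package is available for the ADF stationarity test:

# Engle-Granger style cointegration test on simulated prices
library(tseries)

set.seed(42)
x <- cumsum(rnorm(1000))      # non-stationary driver
y <- 0.8 * x + rnorm(1000)    # cointegrated with x

hedge <- lm(y ~ x)            # hedge ratio estimated by regression
spread <- residuals(hedge)    # should be stationary if cointegrated
adf.test(spread)              # small p-value suggests stationarity

# Mean-reversion signal sketch: short the spread when stretched
# wide, buy it when stretched low
z <- (spread - mean(spread)) / sd(spread)
signal <- ifelse(z > 2, "short spread", ifelse(z < -2, "buy spread", "flat"))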

Trading Algorithms

In recent years, computer algorithms have become the decision-making machines behind many trading strategies. The ability to deal with large numbers of inputs, utilise long variable histories, and quickly evaluate quantitative conditions to produce a trading signal, have made algorithmic trading systems the natural evolutionary step in high-frequency financial applications. Originally the main focus of algorithmic trading systems was in neutral impact market strategies (e.g. Volume Weighted Average Price and Time Weighted Average Price trading), however, their scope has widened considerably, and much of the work previously performed by manual systematic traders can now be done by “black box” algorithms.

Trading algorithms are no different from human traders in that they need an unambiguous measure of performance, i.e. risk versus return. The ubiquitous Sharpe ratio (\(\frac{\mu_r - \mu_f}{\sigma}\)) is a popular measure, although other measures are also used. A commonly used measure of trading performance is total return, which is defined as

\[ R_T \equiv \sum_{j=1}^{n}r_j \]

over a number of transactions \(n\), with a return per transaction \(r_j\). The annualized total return is defined as \(R_A = R_T \frac{d_A}{d_T}\), where \(d_A\) is the number of trading days in a year, and \(d_T\) is the number of days in the trading period covered by \(R_T\). The maximum drawdown over a given time period is defined as \(D_T \equiv \max(R_{t_a}-R_{t_b} \mid t_0 \leq t_a \leq t_b \leq t_E)\), where \(T = t_E - t_0\), and \(R_{t_a}\) and \(R_{t_b}\) are the total returns of the periods from \(t_0\) to \(t_a\) and \(t_b\) respectively. A resulting indicator is the Sterling Ratio, which is defined as

\[ SR = \frac{R_T}{D_T} \]
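
A small sketch computing these measures from a vector of per-transaction returns (assuming, for the annualization, one transaction per trading day):

# Total return, annualized return, maximum drawdown, and Sterling
# ratio from a vector of per-transaction returns
perf <- function(r, dA=252, dT=length(r)) {
  RT <- sum(r)                    # total return
  RA <- RT * dA / dT              # annualized total return
  cum <- cumsum(r)                # running total return
  DT <- max(cummax(cum) - cum)    # maximum drawdown
  c(total=RT, annualized=RA, drawdown=DT, sterling=RT/DT)
}

perf(rnorm(250, mean=0.001, sd=0.01))   # e.g. simulated daily returns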

High-frequency tick data possesses certain characteristics which are not as apparent in aggregated data. Some of these characteristics include:

  • Non-normal probability distributions: high-frequency returns may have large kurtosis (heavy tails), and be asymmetrically skewed;
  • Diurnal seasonality: an intraday seasonal pattern influenced by the restrictions on trading times in markets. For instance, trading activity may be busiest at the start and end of the trading day. This applies less to foreign exchange, as the FX market is a decentralized 24-hour operation; however, we may see trend patterns in tick interarrival times around business end-of-day times in particular locations;
  • Data quality: real-time high-frequency data may contain errors, missing or duplicated tick values, or other anomalies. Whilst historical data feeds will normally contain corrections to data anomalies, real-time data collection processes must allow for adjustments to be made to the incoming feeds, as in the sketch below.
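
A minimal filtering sketch, assuming a data frame in the Date/Tick format returned by readRTD() above (the function name and the outlier threshold are illustrative):

# Basic tick-data hygiene: drop bad prices, duplicates, and
# gross outliers in the log-returns
clean.ticks <- function(tickData, k=10) {
  tickData <- tickData[!is.na(tickData$Tick) & tickData$Tick > 0, ]
  tickData <- tickData[!duplicated(tickData[, c("Date", "Tick")]), ]
  # Global k-sigma filter on log-returns; a rolling window would
  # be more appropriate for long samples
  r <- c(0, diff(log(tickData$Tick)))
  tickData[abs(r - mean(r)) <= k * sd(r), ]
}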

Financial Amnesia

The FT has a nice article on financial amnesia, which discusses the desire of the CFA UK body to incorporate some financial history into its curriculum, ostensibly to implant enough of a sense of déjà vu in budding financial managers that, when they encounter potential disaster scenarios, they may avoid repeating them.

I think this is a great idea. The only problem is that it probably won't have any meaningful impact other than providing some fascinating course material for CFA students. The problem with an industry that makes its living from markets that essentially operate on two gears, fear and greed, is that despite all of the prior knowledge in the world, we will willingly impose a state of amnesia upon ourselves if we can convince ourselves that what is happening now is in some way different, or just different enough, from whatever disasters happened before. The Internet bubble, tulip mania, and the various property booms and busts that we have seen over the last decade share a common set of characteristics at their core, but they are different (or "new") enough that we were able to live in a state of denial. The "fear and greed" mentality also means that even if you know you are operating in a bubble that will eventually burst, you carry on regardless, as you plan to be out of the game before it all goes bad (the Greater Fool theory).

Incidentally, if you want to read a beautifully written and entertaining account of financial bubbles by one of the greatest writers on this topic, you should read this book (“A Short History of Financial Euphoria” by J.K. Galbraith). It packs a huge amount of wit and insight into its relatively small page count.

NOTE: Galbraith also wrote the definitive account on the Great Crash of 1929, which rapidly became required reading around three years ago…it’s also excellent. Galbraith has a beautiful turn of phrase.

R/Reuters Real-Time Data Extension

Last week, I posted an R extension DLL that downloads historical data from Reuters using the SFC API. This time around, I am posting an extension DLL that can subscribe to real-time updates using the RFA API. This extension allows you to specify a list of items and data fields, and subscribe for a specified amount of time to updates on those items.

Here is an example:

# Load the DLL
dyn.load("RfaClient")

# Low-level subscription function wrapper
rsub <- function(time, items, func, sessionName, config, debug) {
  .Call("subscribe", as.integer(time), items, func, sessionName, config, debug)
}

# Define items and fields to subscribe to
items <- list()
fields <- c("BID","ASK","TIMCOR")
items[[1]] <- c("IDN_SELECTFEED", "GBP=", fields)
items[[2]] <- c("IDN_SELECTFEED", "JPY=", fields)

# Callback function (invoked when data items update)
callback <- function(df) {
  print(paste("Received an update for", df$ITEM, ", update time=", df$TIMCOR))
}

# Subscribe for 5 seconds using the supplied config parameters
rsub(5000, items, callback, "clientSession", "rfa.cfg", FALSE)

A short introductory guide is supplied:

Introduction (PDF)

The functionality is pretty basic right now, and there may be issues with the code (e.g. memory leaks, performance issues). Any feedback is gratefully received.

The package, including source, can be downloaded here:

rfaclient.zip

It was built with R 2.8.0 and RFA C++ 6.0.2, using Visual C++ 2005.

R/Reuters Time Series Data Extension

This is an upload of the historical data retrieval component of the R/Reuters interface that I presented at the R user conference in August 2008 (see the presentation here).

The extension is a C++ DLL that uses the Reuters SFC API to retrieve limited time series data from the TS1 time series service. It is compiled under Visual Studio 2005 and Windows XP, but could be cross-compiled to Linux without too much difficulty.

The DLL comes with a .R file that contains commands to load the DLL, and set up some necessary initialization. If you are familiar with Reuters infrastructure and APIs, the following points are important:

  • This extension has been built with support for SSL infrastructure only;
  • You will need to provide appropriate parameters to the init() function (see the documentation for details).

The extension, plus the associated source and documentation can be downloaded here:

reuters_ts1.zip

The package makes retrieving time series data for analysis easy: I have provided a single entrypoint for retrieving data, called fetch(). This will return a data frame with a Date column and one or more data columns, depending on the data being requested.

Some examples follow:

First, an example that fetches historical daily price data for Microsoft stock and creates a price/volume chart (the historical data returned contains OHLC price data and volume data):

msft.vol <- fetch("MSFT.O", sd="1/1/2008", ed="10/31/2008")
layout(matrix(c(1,1,1,1,2,2), 3, 2, T))
plot(msft.vol$Date, msft.vol$CLS, main="MSFT Close Price", type="l", ylab="", xlab="Date")
plot(msft.vol$Date, msft.vol$VOL, main="Volume", type='h', ylab="", xlab="")

MSFT Chart

The next example retrieves historical price data for the highest and lowest-rated ABX subprime indices, sometimes called the barometers of the subprime crisis:

abx.aaa <- fetch("ABXAA38FGB=MP", sd="1/1/2007", ed="12/1/2008", t="d")
abx.bbb <- fetch("ABXBM38FGB=MP", sd="1/1/2007", ed="12/1/2008", t="d")
op <- par(mfcol=c(2,1),las=2)
plot(...)

ABX AAA and BBB- Indices

Finally, an example that shows the yield spreads between AAA and BBB-rated benchmark corporate bonds, and a 10-year US treasury bond:

library(zoo)
aaa10y <- fetch("AAAUSD10Y=", n=600, t="d")
bbb10y <- fetch("BBBUSD10Y=", n=600, t="d")
ust10y <- fetch("US10YT=RR", n=600, t="d")

aaa10y.series <- zoo(aaa10y, order.by=aaa10y$Date)
bbb10y.series <- zoo(bbb10y, order.by=bbb10y$Date)
ust10y.series <- zoo(ust10y, order.by=ust10y$Date)

full.series <- merge(aaa10y.series, bbb10y.series, ust10y.series, all=FALSE)
...

Yield Spreads

There are many more examples, plus documentation of the required R functions and their usage, in the manual:

Introductory Manual

UPDATE: The above package was built in Visual C++ 2005 using debug mode. I have attached another package below that was built in release mode. This package also includes the msvcrt redistributable DLLs, in case you get linker issues when R is loading the extension (as it was built using the MSVCRT 8.0 runtime).

reuters_ts_release1.zip

NB: This extension also depends on the Reuters SFC DLLs – specifically it loads ssl45w3280.dll and sipc3280.dll. These are available in the Reuters SSL C++ SDK version 4.5.5 (the latest version at the time of writing). I cannot legally redistribute these DLLs, so I would suggest that you download them from the Reuters development support site. Once downloaded, these DLLs should be in your system PATH (or in the same directory as the extension), so they can be loaded on demand.

DISCLAIMER: This software is not developed by Thomson Reuters and Thomson Reuters are in no way responsible for support or malfunction of this software.

Presentation at UseR! 2008

Today is the second day of UseR! 2008 in Germany, and I will be giving a short talk on a market data interface I developed for R a while back. The conference is apparently the largest R user conference yet, with over 400 participants from all areas of industry and academia.

Here are the slides:

r_market_data

For some reason, I couldn’t get the Beamer package to number my contents correctly, so it looks a little strange.

Black-Scholes in R

Here is a simple implementation of the Black-Scholes pricing formula in R. This will return a two-element vector containing the calculated call and put price, respectively.

# Black-Scholes Option Value
# Call value is returned in values[1], put in values[2]
blackscholes <- function(S, X, rf, T, sigma) {
    values <- numeric(2)

    d1 <- (log(S/X)+(rf+sigma^2/2)*T)/(sigma*sqrt(T))
    d2 <- d1 - sigma * sqrt(T)

    values[1] <- S*pnorm(d1) - X*exp(-rf*T)*pnorm(d2)
    values[2] <- X*exp(-rf*T) * pnorm(-d2) - S*pnorm(-d1)

    values
}

Example use:

> blackscholes(100,110,.05,1,.2)
[1] 6.040088 10.675325
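
As a quick consistency check, the results satisfy put-call parity, \(C - P = S - Xe^{-r_f T}\):

> v <- blackscholes(100, 110, .05, 1, .2)
> v[1] - v[2]
[1] -4.635237
> 100 - 110*exp(-.05)
[1] -4.635237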

Black-Scholes in Excel, Part II

Following on from the post a few weeks ago on a simple Black-Scholes Excel macro, here is a follow-up with a slightly updated version that also calculates the major greeks (delta, gamma, vega and theta). The formulae are from Hull's book, or see here for examples of the closed-form greeks.

Usage of the spreadsheet is identical to the previous example, so refer to that post. The only difference is the output, which now looks like this:

black-scholes-greeks.png

The spreadsheet is available for download here, and the code is below for reference.

Function BlackScholes(SpotPrice As Double, ExercisePrice As Double, _
        TimeToMaturity As Double, RiskFreeRate As Double, sigma As Double, _
        Optional DividendYield As Double = 0) As Double()

    Dim d1 As Double
    Dim d2 As Double
    Dim Nd1 As Double
    Dim Nd2 As Double
    Dim N_dash_d1 As Double
    Dim ResultArray() As Double
    Dim gamma As Double
    Dim vega As Double
    Dim theta As Double

    ReDim ResultArray(9) As Double

    ' IsMissing() only works with Variant arguments, so DividendYield
    ' defaults to 0 instead, which reduces the formulae below to the
    ' no-dividend case automatically
    d1 = WorksheetFunction.Ln(SpotPrice / ExercisePrice) _
        + ((RiskFreeRate - DividendYield + (0.5 * (sigma ^ 2))) * TimeToMaturity)
    d1 = d1 / (sigma * (TimeToMaturity ^ (1 / 2)))
    d2 = d1 - (sigma * (TimeToMaturity ^ (1 / 2)))
    Nd1 = WorksheetFunction.NormSDist(d1)
    Nd2 = WorksheetFunction.NormSDist(d2)

    'Call Value
    ResultArray(0) = Exp(-DividendYield * TimeToMaturity) _
        * (SpotPrice * Nd1) - (ExercisePrice * Exp(-RiskFreeRate * TimeToMaturity) * Nd2)

    'Call Delta
    ResultArray(1) = Nd1 * Exp(-DividendYield * TimeToMaturity)

    'Call Gamma
    N_dash_d1 = (1 / ((2 * WorksheetFunction.Pi) ^ (1 / 2)) * (Exp((-d1 ^ 2) / 2)))
    gamma = (N_dash_d1 * Exp(-DividendYield * TimeToMaturity)) / (SpotPrice * sigma * (TimeToMaturity ^ (1 / 2)))
    ResultArray(2) = gamma

    'Call Vega
    vega = SpotPrice * (TimeToMaturity ^ (1 / 2)) * N_dash_d1 * Exp(-DividendYield * TimeToMaturity)
    ResultArray(3) = vega

    'Call Theta
    theta = -(SpotPrice * N_dash_d1 * sigma * Exp(-DividendYield * TimeToMaturity)) / (2 * (TimeToMaturity ^ (1 / 2)))
    theta = theta + (DividendYield * SpotPrice * Nd1 * Exp(-DividendYield * TimeToMaturity))
    theta = theta - (RiskFreeRate * ExercisePrice * Exp(-RiskFreeRate * TimeToMaturity) * Nd2)
    ResultArray(4) = theta

    'Put Value
    ResultArray(5) = Exp(-RiskFreeRate * TimeToMaturity) * ExercisePrice _
        * WorksheetFunction.NormSDist(-d2) - Exp(-DividendYield * TimeToMaturity) _
        * SpotPrice * WorksheetFunction.NormSDist(-d1)

    'Put delta
    ResultArray(6) = (Nd1 - 1) * Exp(-DividendYield * TimeToMaturity)

    'Put Gamma
    ResultArray(7) = gamma

    'Put Vega
    ResultArray(8) = vega

    'Put Theta
    theta = -(SpotPrice * N_dash_d1 * sigma * Exp(-DividendYield * TimeToMaturity)) / (2 * (TimeToMaturity ^ (1 / 2)))
    theta = theta - (DividendYield * SpotPrice * WorksheetFunction.NormSDist(-d1) * Exp(-DividendYield * TimeToMaturity))
    theta = theta + (RiskFreeRate * ExercisePrice * Exp(-RiskFreeRate * TimeToMaturity) * WorksheetFunction.NormSDist(-d2))
    ResultArray(9) = theta

    BlackScholes = ResultArray
End Function