Thursday, November 29, 2012

The Importance of 2 (as Sharpe Ratio)

A reader ezbentley recently pointed out a little-noticed fact in the derivation of Kelly's formula: if we apply the optimal Kelly leverage, then the standard deviation of the annualized compounded growth rate of your equity is none other than the Sharpe ratio (Sdev=S). This fact is of mild interest in itself, but its implication has relevance to another interesting fact of behavioral finance, so I will reproduce our discussions here.

Suppose our strategy has an annualized Sharpe ratio of 2. According to the above result, Sdev=2 as well. This may startle some of us: a standard deviation of 200% in our compounded growth rate g - wouldn't ruin be very likely? But check out g itself: g=S^2/2, so g=2 when S=2, which means that g itself is exactly 200%. A Sdev of 200% here means that if the growth rate drops one standard deviation below its mean, we will still manage not to lose money for the year. Another way to put this is that there is an 84.1% chance that our annual return will be greater than 0, based on the Gaussian distribution.

It gets better if S goes above 2. For example, at S=3, g=4.5, but Sdev is just 3. So you can see that as S goes above 2, a 1 standard deviation fluctuation of g below the mean will still get you a positive number: profitable for the year.
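
For readers who want to check these numbers, here is a minimal sketch (Python with scipy assumed); it uses nothing beyond the Gaussian, full-Kelly relationships g=S^2/2 and Sdev=S stated above:

```python
from scipy.stats import norm

def kelly_growth_stats(S):
    """Given an annualized Sharpe ratio S, return the compounded growth rate g,
    its standard deviation, and the probability of a profitable year, under the
    Gaussian / full-Kelly assumptions of the derivation above."""
    g = S ** 2 / 2                  # annualized compounded growth rate at Kelly leverage
    sdev = S                        # standard deviation of that growth rate
    p_up_year = norm.cdf(g / sdev)  # P(realized annual growth > 0)
    return g, sdev, p_up_year

for S in (2, 3):
    g, sdev, p = kelly_growth_stats(S)
    print(f"S={S}: g={g:.1f}, sdev={sdev}, P(positive year)={p:.1%}")
# S=2: g=2.0, sdev=2, P=84.1%;  S=3: g=4.5, sdev=3, P=93.3%
```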

This is a very interesting result: it means that S=2 is really an important threshold in more ways than I realized. From behavioral finance experiments, we already know that humans demand $2 of profit for every $1 of risk. Given the universal desire of portfolio managers not to lose money on the year, it turns out that demanding a Sharpe ratio of at least 2 is quite rational!

===

Now, time for a couple of public service announcements:

1) Those who are looking for a way to connect Matlab to Interactive Brokers should check out undocumentedmatlab.com. The creator of this product has an accompanying book, and the documentation for the product is excellent.

2) NAG sells high performance Matlab toolboxes for those who prefer alternatives to the native ones.

3) Here is the Twitter feed for FIXGlobal Online, the magazine from the creator of the FIX Protocol, an order submission standard. Interesting breaking news from the global finance scene.

83 comments:

Anonymous said...

Thanks for the post.

g=S^2/2

where is that result from?


Ernie Chan said...

Hi Anon,
You can deduce it from equation 7.7 of Thorp's paper.
Ernie

Anonymous said...

The equation is a first-order approximation for the special case that p = q = 0.5. Then, it assumes r = 0. Conclusion: useless. I don't understand all the excitement. Unless I do not know something you guys do. In that case, I concede in advance. Thank you.

Ernie Chan said...

Anon,
1) Regarding p=q=0.5, there are no such assumptions when you look at the continuous finance case (i.e. if you assume prices and returns are continuous variables). The p and q parameters are only applicable if you assume discrete betting, which is not relevant when we are holding a security position continuously. Section 7 is the relevant section of the article.

2) If you do not assume r=0, then the only change is that g=S^2/2+r. So if you have S=2, you will still have an 84.1% chance that your annualized EXCESS return will be greater than 0.

Ernie

Anonymous said...

I still think that the result is for a continuous approximation with P(X = m + s) = P(X = m − s) = 0.5. See section 7.1. Note that his statement "Any bounded random variable with mean E(X) = m and variance Var(X) = s^2 will lead to the same result" applies if, and only if, P = 0.5: in line 27 of page 22 he takes the expected value of G(f) and uses that assumption to get equation 7.2.

Ernie Chan said...

Anon,
Of course, the whole discussion is based on the assumption that the returns distribution is Gaussian, which gives rise to P(X = m + s) = P(X = m − s). (But I don't believe each of these is equal to 0.5 for a Gaussian.)

If your distribution has a non-zero skew or kurtosis, none of these nice analytical results will hold true. Kelly formula consistently overestimates the optimal leverage if the distribution has fat tails, which is why we adopt the half-Kelly rule.

Ernie

Anonymous said...

After the 2008 crisis and the rare events it created, I am not sure half-Kelly is good. Imagine what would happen to you if you were invested half-Kelly just before the crisis with even zero leverage. Regardless, the result g = S^2/2 is an oversimplification that holds only for normally distributed returns (I haven't seen any yet) and overestimates Kelly by at least a factor of 2. I would call it a useless result for all practical purposes. If you think otherwise, I would like to hear your point.

Ernie Chan said...

Anon,
I don't understand what you mean by "if you were invested half-Kelly just before the crisis with even zero leverage" -- if you are invested at half-Kelly, you won't be at zero leverage.

I find Kelly has practical use as an upper bound on what leverage you should deploy. For example, it tells you that holding a 3x leveraged ETF is not a good idea because 3 is above its Kelly leverage. Also, it is very useful in conjunction with CPPI (google my earlier articles for the discussion) to set a maximum drawdown on an account, and yet be able to maximize growth under this constraint.

Ernie

Anonymous said...

But when holding any stock, you do not know its inherent true leverage. If I have a trading method for a 3X ETF that is 60% profitable for a given target and stop, why should the ETF leverage make any difference? I mean, the Kelly ratio is expected return over average loss in essence. I do not see how this is related to leverage. Do you have an article on that? Thanks

Ernie Chan said...

Anon,
When I said 3x is above the Kelly leverage for an ETF, I was referring to a buy-and-hold strategy for that ETF. If instead you are day-trading it (for example), the Kelly leverage will be determined by the average return and volatility of that day-trading strategy instead, and it could well be over 3.
Ernie

Anonymous said...

Ernie,
It seems that optimal leverage in a buy-and-hold strategy is path dependent. You cannot know the optimal fraction until after the end of the buy-and-hold period, unless you assume perfectly Gaussian returns and a stationary mean and standard deviation. None of these are true or known, and this discussion is in my opinion pointless. You may be overestimating or underestimating the optimal leverage by a huge factor by making such assumptions. Continuous re-leveraging violates the buy-and-hold assumption because it introduces timing, cannot account for sudden skewness in distributions such as the 2008 collapse, and can lead to ruin with the wrong leverage. I think you were pointed to these problems, as I just found out in a post you had in 2006
http://epchan.blogspot.gr/2006/10/how-much-leverage-should-you-use.html but you are still pushing these ideas. You are doing exactly what Taleb labels a grave mistake by using the normal distribution as a guide. Not even f/2 will save you because this is arbitrary and the correct number may be f/6 or f/8; you know that only after the fact. Thank you.


Ernie Chan said...

Anon,
1) Optimal anything (not just leverage) in finance, and indeed in any statistical system (such as speech or image recognition), can be said to be path-dependent, since of course the true optimum is known only after the fact. But you are missing the point about probabilistic modeling in finance or any statistical system. The point is what is the best you can do before the fact. If you propose a better model than Gaussian, then all the power to you, but it will still be sub-optimal after the fact. That, however, doesn't mean you won't benefit from the model, because you can use it to out-perform the next trader who has a worse model.

2) It is easy to say that we shouldn't believe in the Gaussian model because it is wrong. But of course every model is "wrong" - they are all approximations! But in my new book I have performed calculations on a returns series of an FX trading strategy. It definitely has positive skew and kurtosis and is not perfectly Gaussian. It turns out that the Kelly optimal leverage is very close to the "after-the-fact" optimal leverage (see the sketch after this comment). You are invited to run the same calculations for your own model and see how far the Kelly leverage differs from the actual optimum. You may be surprised how accurate Kelly is. Thorp has made the same point in his paper. (The key requirement is that the backtest or simulation be long enough so that we get a good estimate of returns and volatility.)

3) I do not believe that assuming a Gaussian distribution of returns will lead you to under-estimate the optimal leverage. I know of no real returns distribution that has negative kurtosis. So as I said in the previous comment, Kelly leverage is useful as an upper bound on what leverage you should deploy.

4) You also misunderstood the point about 3x leveraged ETF. Even though you are buying and holding that ETF without trading at all, the sponsor of that ETF is performing rebalancing for you near every day's market close. This is what keeps the ETF leveraged at 3. Hence you should not buy and hold this ETF since 3 is higher than the Kelly optimal.


Ernie
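
Here is a rough sketch of the comparison described in point 2 of the comment above, using a simulated, fat-tailed daily returns series as a stand-in (the returns, seed, and leverage grid are hypothetical, not the FX strategy from the book):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical daily strategy returns: fat-tailed (t with 5 dof), positive mean.
r = 0.0004 + 0.01 * rng.standard_t(5, 2500) / np.sqrt(5 / 3)

# Continuous-finance Kelly leverage (risk-free rate assumed 0): f = mean / variance
f_kelly = np.mean(r) / np.var(r)

# After-the-fact optimal leverage: maximize realized compound growth over a grid.
leverages = np.linspace(0.1, 2 * f_kelly, 400)
growth = [np.sum(np.log(1 + f * r)) if np.all(1 + f * r > 0) else -np.inf
          for f in leverages]
f_best = leverages[int(np.argmax(growth))]

print(f"Kelly leverage estimate: {f_kelly:.2f}")
print(f"After-the-fact optimum:  {f_best:.2f}")
```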

Tom said...

When exactly is your new book coming out??? The only mention I've seen is within your comments. I'm assuming 2013, but is a release date set yet? Thanks

Ernie Chan said...

Tom,
It should be out by mid-2013.
Ernie

Anonymous said...

Ernie,

I would like to first say that I appreciate your responses and the fact that you are here to answer questions, something that other bloggers do not do.

Having said that, you wrote: "I do not believe that assuming a Gaussian distribution of returns will lead you to under-estimate the optimal leverage. I know of no real returns distribution that has negative kurtosis."

Although no known asset class has a platykurtic distribution of returns, several trading systems do. Thus, your statement is not of general validity, although in the buy-and-hold case it may generally be true.

You wrote: "But in my new book I have performed calculations on a returns series of an FX trading strategy. It definitely has positive skew and kurtosis and is not perfectly Gaussian. It turns out that the Kelly optimal leverage is very close to the "after-the-fact" optimal leverage."

But what do you mean by Kelly leverage for an FX strategy? Is this a buy-and-hold strategy? Or do you mean the Kelly optimal risk ratio? These are different things as I understand them. If the mean and standard deviation of returns are stationary, then the after-the-fact Kelly ratio will be the same, but what happens when the statistics change values? I mean, Kelly makes sense only with stationary statistics. I know of NO trading systems with stationary return statistics other than the buy-and-hold case over the longer term. But there seems to be some confusion here. Trading is not buy-and-hold. Leverage does not mean fully invested. Position size determines leverage. These things are not clear either from your writings or from Thorp's writings.

Then, a 3x ETF may not be the optimal Kelly leverage for a certain market, but for a trading strategy it may be optimal or sub-optimal depending on risk management.

The Kelly leverage for buy-and-hold is valid in the limit of large numbers, and nobody invests that long to realize the optimal returns from it. Over an extended time frame the result may turn out to be either too much risk or very low risk.

Ernie Chan said...

Anon,
You are very welcome to my responses -- back-and-forth discussion is the purpose of this blog.

1) Often, trading systems appear to have a platykurtic distribution in-sample, but have a leptokurtic distribution out-of-sample. It is easy to tweak a strategy so that it avoids all the financial crises in-sample. If you know of a reference to an actual trading track record of a fund with daily returns over 10 years that has a platykurtic distribution, please let us know. I can certainly say that none of my strategies' live track records has that happy property.

2) Kelly formula can be applied to strategies as well as to assets. All it requires is the returns and volatilities of the strategy. I do not know what you mean by "Kelly optimal risk ratio". A google search of this phrase turns up no relevant entries. Perhaps you can read the chapter in my first book on money and risk management so that we are on the same page with regard to terminology.

3) With regard to the stationarity assumption: once again, all financial models assume this. Even if you have a hidden Markov model of distributions, you are still assuming constant switching probabilities between two or more stationary distributions. But of course in real financial time series nothing is stationary, not even buying and holding. Just because it is not stationary does not mean our models are useless, as long as it beats total ignorance and a uniform distribution.

4) Leverage *most definitely* means fully invested. In fact, it means you are invested many times your equity. I think you should re-read both Thorp's paper and my book to understand what we mean by leverage.

5) I repeat that I didn't say there is no trading strategy on a 3x ETF that can be suitable. I only said that a buy-and-hold strategy on a 3x ETF is doomed because it exceeds the Kelly optimal.

6) Your point about the law of large numbers merits the same response as 3). In my opinion, if you have a strategy that trades 10 round trips a day, the law of large numbers will be realized in less than a year.

Ernie

Anonymous said...

Hi Ernie,

Academic papers typically focus on statistical significance of parameters, e.g. 10,5,1% levels.

But in trading, I guess your acceptable significance level is lower, right?

For example, finding a cointegration relationship that is significant only at, say, the 20% level can still be profitable.

What are your thoughts about statistical significance when formulating strategies?

Thanks.

Anonymous said...

Hi again, this is the first anon. You said:

"I do not know what you mean by "Kelly optimal risk ratio". A google search of this phrase turns up no relevant entries."

I meant as in the Wikipedia article: http://en.wikipedia.org/wiki/Kelly_criterion

or what traders often refer to as:

%Kelly = W - (1-W)/R where W is the win rate of the trading system and R the ratio of average win to average loss.

This is the fraction of equity to risk for optimal (geometric) growth.

I understand that the f in the Thorp paper = (m-r)/s^2 is the optimal leverage.

I still do not see how the two are related. But in discussions the two are mixed.

Ernie Chan said...

Hi Anon,
Actually, I find that hypothesis testing in backtests is a slippery concept, because it really depends on what your null hypothesis is.

For example, if you adopt Andrew Lo's null hypothesis by randomising the trade entry dates, you will find your backtest to be quite significant (say at 1%), since random trade entry is unlikely to produce good returns. On the other hand, if your null hypothesis is generated by Monte Carlo simulation of some price series with similar returns and stddev (and perhaps skew and kurtosis) as the real return series, then the backtest may be significant only at the 10% level.

As you see, the conclusions can differ a lot. But I would say that as a practical matter, 10% is the worst we should tolerate.

Ernie
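
A bare-bones sketch of the first kind of test described above: keep the number of invested days fixed, place them at random many times, and ask how often random entries match the backtest (the returns and position series below are hypothetical stand-ins):

```python
import numpy as np

def random_entry_pvalue(daily_returns, positions, n_sims=1000, seed=0):
    """Fraction of random entry-date placements whose total return
    is at least as good as the actual backtest."""
    rng = np.random.default_rng(seed)
    actual = np.sum(positions * daily_returns)
    n_days = len(daily_returns)
    n_invested = int(np.sum(positions != 0))
    beats = 0
    for _ in range(n_sims):
        random_pos = np.zeros(n_days)
        random_pos[rng.choice(n_days, n_invested, replace=False)] = 1
        if np.sum(random_pos * daily_returns) >= actual:
            beats += 1
    return beats / n_sims

# Hypothetical inputs: market daily returns and the strategy's 0/1 position series.
rng = np.random.default_rng(1)
market = rng.normal(0.0003, 0.01, 1000)
positions = (rng.random(1000) < 0.3).astype(float)   # stand-in for real signals
print("p-value vs random entries:", random_entry_pvalue(market, positions))
```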

Ernie Chan said...

Hi First Anon,
The Kelly ratio you referred to applies only to discrete betting.

Many trading strategies are not made of discrete bets. For example, a long-short stock portfolio that only partially rebalances each position each day cannot be treated as a series of discrete bets.

To avoid confusion, my discussions only refer to continuous finance. Similarly, you can read only section 7 onwards in Thorp's article.

Frankly, I have not found the discrete betting Kelly formula to be very helpful to me, since it is inapplicable to so many strategies.

Ernie
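
For concreteness, here is how the two formulas discussed in this exchange could each be computed from the same (hypothetical) list of per-trade returns; this only illustrates the definitions, it does not endorse either form:

```python
import numpy as np

# Hypothetical per-trade returns of a discrete-bet strategy.
trades = np.array([0.012, -0.008, 0.02, -0.01, 0.015, 0.005, -0.007, 0.018, -0.012, 0.01])

# Discrete Kelly fraction: %Kelly = W - (1 - W) / R
wins, losses = trades[trades > 0], trades[trades < 0]
W = len(wins) / len(trades)                 # win rate
R = wins.mean() / abs(losses.mean())        # average win / average loss
kelly_fraction = W - (1 - W) / R

# Continuous-finance Kelly leverage: f = (m - r) / s^2, with r taken as 0 here
m, s = trades.mean(), trades.std()
kelly_leverage = m / s ** 2

print(f"discrete Kelly fraction:   {kelly_fraction:.2f}")
print(f"continuous Kelly leverage: {kelly_leverage:.2f}")
```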

Anonymous said...

Hi Ernie,

Thanks for your answer regarding significance levels.

Another question: why is it that running a Johansen test sometimes suggests that there are no cointegrating relationships, but a stationarity test of the residuals based on some of the cointegrating vectors suggests they are highly stationary?

In such a case, would you consider the residuals to be tradeable?

Thanks

Ernie Chan said...

Hi Anon,
Sure, if the residuals are stationary, then they are tradeable. But I don't see why the Johansen test would indicate otherwise.
Ernie
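
One way to run both checks mentioned in this exchange, sketched with statsmodels on a simulated (hypothetical) cointegrated pair:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen

# Hypothetical cointegrated pair: y is a multiple of x plus stationary noise.
rng = np.random.default_rng(0)
x = 50 + np.cumsum(rng.normal(0, 1, 1000))
y = 0.8 * x + rng.normal(0, 1, 1000)

# Johansen test on the two series jointly
result = coint_johansen(np.column_stack([y, x]), det_order=0, k_ar_diff=1)
print("trace stats:", result.lr1, " 95% critical values:", result.cvt[:, 1])

# ADF test on the residual formed with an OLS hedge ratio
hedge = sm.OLS(y, sm.add_constant(x)).fit().params[1]
residual = y - hedge * x
print("ADF p-value of residual:", adfuller(residual)[1])
```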

Anonymous said...

Ernie, you wrote:

"Many trading strategies are not made of discrete bets. For example, a long-short stock portfolio that only partially rebalances each position slightly each day cannot be treated as discrete bets."

I think most strategies are made of discrete bets: managed futures funds notably, most retail trading strategies, and options trading. Only buy-and-hold and long/short portfolios sound like they fit your definition, but even long/short portfolios at a higher level involve some trading strategy to select the longs and the shorts. On the contrary, I find the optimal leverage f of little use since (1) it overestimates the leverage by an unknown factor due to multiple simplifications in the derivation and, in most cases, unfounded assumptions, and (2) after all is said and done with all that math, leveraging more than 2x has been proven to be hazardous anyway.

Regardless, it seems to me that optimal leverage f* and optimal f (%Kelly) are related in the case that the average win is equal to s^2 for large samples. What could this mean?

Ernie Chan said...

Anon,
I disagree with your assertion that a leverage of over 2 is inherently hazardous. We have always deployed leverage higher than that since July, 2008, and we are still in business.

In my opinion, the Kelly formula for continuous finance is easier to apply and has wider applicability than the discrete formula. But if you are uncomfortable with the Gaussian assumption, by all means use the discrete version. I am sure there are many books and articles on the discrete version, so I am afraid I don't have more to say on this.

Ernie

Anonymous said...

Ernie,

Thanks for the discussion. Just if you get a chance and it is not a secret

"We have always deployed leverage higher than that since July, 2008, and we are still in business."

what are you trading specifically, and in general terms what method do you use?



Ernie Chan said...

Anon,
Thanks for your discussion as well.

From 2008-2010, we focused on mean-reverting equities strategies. In 2011, we moved on to short-term mean reverting FX strategies, and this year we added futures momentum models.
Ernie

Anonymous said...

Hi Ernie,

When we backtest an intraday trading strategy, do we have to apply a close-to-open (overnight) gap adjustment to our intraday data first?

Just like we do dividend and split adjustments to stock prices.

Do you know any papers which discuss it?

Thank you.

Ernie Chan said...

Hi Anon,
In most cases, you shouldn't adjust the data to eliminate the gap, since if we are holding a position overnight, we actually will suffer that gap.

There may be some exceptional circumstances when you need to concatenate many days' intraday data. But we should discuss those on a case-by-case basis.

Ernie

Anonymous said...

Hi Ernie,

Thank you for quick response.

For example, we backtest a day trading strategy and do not hold positions overnight. Assume we use intraday 1-min bars (open, high, low, close, volume) to calculate some technical indicators, such as moving averages, %K, RSI etc., to generate buy and sell signals.

Shall we do overnight gap adjustment to intraday data?

Thank you.

Ernie Chan said...

Hi Anon,
Yes, I think you can try gap adjustment for this type of backtesting.
Ernie

Anonymous said...

Hi Ernie,

I have two somewhat detailed questions.

1. In cointegration analysis, we get the number of shares to buy/sell through OLS or cointegration vectors.

When computing your z-score, do you use the estimated coefficients or the rounded coefficients which are used in practice to form the portfolio? I have noticed that the difference is not always trivial.

2. When you see that a spread has diverged and offers an opportunity, do you always check first whether any of the stocks/ETFs in the spread pays a dividend that day? If that's the case, I guess there is no real opportunity, correct?

Thanks a lot for a great blog

Ernie Chan said...

Hi Anon,
1) If your portfolio has a large market cap, then you should not use rounded coefficients. Rounded number of shares, yes, but not coefficients.

2) All data should be split/dividend adjusted, including when the ex-date is the trade entry/exit date.

Ernie
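
A small sketch of point 1 above (prices and share counts are hypothetical): the z-score is computed with the unrounded hedge ratio, and rounding happens only when converting to share counts for the actual orders.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical price series for two cointegrated stocks
rng = np.random.default_rng(0)
x = 50 + np.cumsum(rng.normal(0, 0.5, 500))
y = 1.37 * x + rng.normal(0, 1, 500)

hedge_ratio = sm.OLS(y, sm.add_constant(x)).fit().params[1]  # keep full precision

spread = y - hedge_ratio * x                     # z-score uses the unrounded ratio
zscore = (spread - spread.mean()) / spread.std()

# Rounding happens only when sizing the orders
shares_y = 100                                   # arbitrary base position in Y
shares_x = round(hedge_ratio * shares_y)         # round share counts, not the coefficient
print(f"hedge ratio {hedge_ratio:.4f}, latest z-score {zscore[-1]:.2f}, "
      f"{shares_y} shares of Y vs {shares_x} shares of X")
```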

Anonymous said...

Hi Ernie,

This has no relation to this particular post. I am reading your book and noticed that you use Interactive Brokers for making the algorithmic trades. Is it still the one you use, and if so, why? Is it just that their commission is low? Actually, I noticed that E-Trade has better commissions ($7.99 vs $0.005 per share, but a flat rate is convenient). Are there any other advantages with IB that I am missing? I want to open an account and am evaluating all options. Thank you in advance.

Naresh.

Ernie Chan said...

Hi Naresh,
I do not believe that Etrade offers an API for automated trading, whereas IB offers a wide range of such interfaces. At least, that's true when I last checked Etrade. Are things different now?
Ernie

Anonymous said...

Hi Ernie,

Thanks for your super fast response. Yes, E-Trade is offering an API now: https://us.etrade.com/active-trading/api

So, how do you compare them now?

Naresh.

Ernie Chan said...

Hi Naresh,
Thanks for the update about Etrade.
It looks quite easy to use, but may not offer the full range of the functionalities that IB API provides (such as the ability to send spread orders).
Nevertheless, well worth a try!
Ernie

Anonymous said...

Hi Ernie,

Setting up a strategy involves a number of arbitrary choices such as look-back period, weighting scheme etc.

Would you recommend trading, for example, an equal-weighted combination of several parameter choices, rather than only going for, say, a 2-year look-back period?

I guess a combo would give you desired diversification and robustness to data snooping.

Thanks

Ernie Chan said...

Hi Anon,
Yes, averaging over various parameter choices is one way to become parameterless.

Another way is to find a model of the time series such that the parameter is a result of the time series model and not a free parameter of your trading model. For example, the half-life of mean reversion is a property of the time series, and can be used as a lookback.

Ernie
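
A sketch of that half-life estimate, via the usual regression of daily changes on lagged levels (the series below is simulated purely for illustration):

```python
import numpy as np
import statsmodels.api as sm

def half_life(series):
    """Half-life of mean reversion: regress the daily change on the lagged level
    (Ornstein-Uhlenbeck style) and convert the slope."""
    y = np.asarray(series, dtype=float)
    dy = np.diff(y)
    lam = sm.OLS(dy, sm.add_constant(y[:-1])).fit().params[1]
    return -np.log(2) / lam

# Hypothetical mean-reverting series with a true half-life of about ln(2)/0.05 ~ 14 days
rng = np.random.default_rng(0)
x = np.zeros(1000)
for t in range(1, 1000):
    x[t] = x[t - 1] - 0.05 * x[t - 1] + rng.normal()
print(f"estimated half-life: {half_life(x):.1f} days")
```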

Anonymous said...

Hi Ernie,

Referring to your strategies, short-term FX mean reversion must have made heaps in 2012.

Do you mind sharing your unlevered returns for this strategy in 2012?

I am currently running a different model, a currency volatility model that trades on breakouts, and it has made roughly 10% this year unlevered.

Ernie Chan said...

Hi Anon,
Yes, short term mean reverting FX strategies do quite well this year. I haven't calculated the unlevered returns, since our returns calculations are levered and aggregated.
Ernie

Nedzad said...

Hello Ernie,

Would you or some of your sophisticated readers explain the rationale behind this?

On 12/12/2012 around 12:40PM, the AXTWEN index, which tracks long-term Treasury bonds, experienced a sharp drop in value - from 1781 to 1760 or so. I presume this was due to the US Fed announcement that it would continue keeping interest rates low until unemployment decreases to 6.5%. I don't understand this, as I would have expected such a drop to occur had the Fed announced an increase in interest rates. In that case, people would be shifting towards equities, no? As a consequence, the TLT ETF sharply dropped too as fund managers rebalanced their positions.

Would you be kind enough to share your insight about this?

Thanks!

Ernie Chan said...

Hi Nedzad,
I agree with your reasoning and don't understand the drop in prices either. Perhaps other readers have more insight on this?
Ernie

astrotau said...

Hi, I am new; I just read your book. Regarding MATLAB (your preferred choice): is there some newer option? What about Mathematica?
G. Tonali

Ernie Chan said...

Hi astrotau,
Mathematica is more useful for symbolic computations, not as much for the numerical, array-oriented computations used often in backtesting.

Python and R are good alternatives to Matlab.
Ernie

SPECULARI said...

Hi! Sorry for the off-topic question: do you have experience with the Blackwood API (FUSION)? Or where can I find examples for it? I searched Google but found no info about using it. Thank you!

Ernie Chan said...

Hi x777,
No, I haven't heard of Blackwood either.
Ernie

Ex-Hedge fund said...

Hi Ernie and Nedzad,

I believe that once we hit the new year, the 85 billion a month of bond purchases will be financed through quantitative easing (instead of by selling short-term bonds and buying long-term ones), which of course will devalue the currency and push the nominal interest rate, i, higher, which is why you see the prices of longer-term bonds being marked down. Now you could argue that these are very long-term bonds, but getting to 6.5% unemployment might take a while and have a very lasting impact on long-term bonds, so investors are asking for more.

I might not be totally correct but would love to see more comments on this.

Pi said...

Hi Ernie,

I wanted to hear your thoughts on choosing between lower volatility and higher per-trade profits. Which one should be considered more important?

Ernie Chan said...

Hi Pi,
If your goal is to maximize long term growth, the important number to consider is the Sharpe ratio, assuming that you are using a leverage recommended by the Kelly formula. In other words, you should consider the ratio between returns and risks, not each number separately.
Ernie

Andrew said...

hi Ernie,

A question regarding historical data. After obtaining the data from a website such as Yahoo Finance, do you then store it in a database system such as SQL, or do you just paste the data as-is and feed it into your backtesting? I'm interested as I'm close to getting my setup done, but I'm not sure how much time I should invest in building a historical database. In my mind, at least, it doesn't seem to require many search functions, as a trader would mainly be interested in the time series and not so much in searching through a 'database' of historical prices. Anyway, I wanted to hear your thoughts on how you organize your data. Help much appreciated!

Ernie Chan said...

Hi Andrew,
I typically store my data as text files so that any software can retrieve them. There is no need for database since the structure is straightforward and we typically do not need to search for specific items in a backtest.
Ernie

Andrew said...

hi Ernie,

Thanks for the quick reply. I thought a fully-fledged SQL database was overdoing it as well. Does this mean that you have a different text file for each instrument? Would you also break things up according to time periods? I think the text files could get pretty large if the price data is downloaded into one file.

Ernie Chan said...

Hi Andrew,
Yes, I do have separate files for each symbol. The files are not large if you are using only daily data. For intraday data, you may have to break them up into shorter periods.
Ernie

J.F. said...

Ernie,
Great blog.
Since the topic of co-integration has crept into this thread, here is another question regarding this topic.
If you look at an ETF, and its largest component (or a linear combination of its largest components), you could imagine they might be co-integrated, as the variations in the largest component would tend to make the ETF vary in a similar manner. The behavior of the smaller components would just be providing "random noise". Do you see any issue with basing a spread trading strategy on an ETF and one or more of its components?
Regards,
JCF

Ernie Chan said...

J.F.
Yes, that is a possible strategy. I have discussed a similar one in http://epchan.blogspot.ca/2007/02/in-looking-for-pairs-of-financial.html
Ernie

Unknown said...

A quick question for you (I think). I read your first book, enjoyed it very much by the way, and am looking forward to the second.
We have developed what we feel is a pretty robust intraday strategy that identifies short-term oversold/overbought opportunities. With that said, I felt our team may have over-fit a bit, so we decided to run it against a portfolio of completely different instruments, and the results were still impressive on better than 70% of the sample.
Is it reasonable for me to conclude that this is a good out-of-sample test? Or is it absolutely essential to do a walk-forward on this? Additionally, we ran a series of Monte Carlo simulations that had favorable results.

Thanks
Greg

Ernie Chan said...

Hi Greg,
The fact that you have run your strategy profitably on out-of-sample stocks is encouraging. However, this does not obviate the need for walk-forward testing. There can be correlations between stocks during the same time period that allow your strategy to work well on many of them, but not necessarily in a different period.
Ernie

Unknown said...

Ernie, thanks for the reply. We are going ahead with the walk forward as you suggest.
Appreciate the response.

Greg

Erik said...

Hi Ernie,

Great blog! I love how you are promoting dialogue in an industry as secretive as the CIA.

A quick question: if you implement a daily futures trading strategy, how are the returns calculated? As the returns on the money you've set aside for margin calls? For example, buying 1 ES future at 1550 doesn't mean you've sunk $1550 * 50 of notional into the trade.

Ernie Chan said...

Hi Erik,
If you want to calculate unlevered returns (suitable for input to the Kelly formula), then the denominator should be the market value of the contract. In ES, that's $1550*50.
Ernie
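
As a worked example of that rule (prices are hypothetical; the ES multiplier is $50 per index point):

```python
# Unlevered daily return of holding 1 ES contract: P&L divided by contract market value.
multiplier = 50
entry_price, exit_price = 1550.0, 1553.0
pnl = (exit_price - entry_price) * multiplier   # $150
notional = entry_price * multiplier             # $77,500 market value of the contract
print(f"unlevered return: {pnl / notional:.4%}")  # ~0.19%
```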

Anonymous said...

Mr Chan,
What if I am calculating an annualized Sharpe ratio and I am using more than 252 days of returns? It seems in this case my standard deviation would be too large since it comes from more than 252 days of returns. So would I multiply by sqrt(days), where days could be >252?
thanks

Ernie Chan said...

Anon,
You should be calculating standard deviation of daily returns, so it doesn't matter what size your backtest sample is. In order to annualize the standard deviation, you just need to multiply the daily SD by sqrt(252).
Ernie
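
A minimal sketch of that calculation; the sample below is hypothetical and deliberately longer than 252 days, to make the point that only the sqrt(252) annualization factor matters, not the sample size:

```python
import numpy as np

def annualized_sharpe(daily_returns, risk_free_daily=0.0):
    """Annualized Sharpe ratio from daily returns."""
    excess = np.asarray(daily_returns) - risk_free_daily
    return np.sqrt(252) * excess.mean() / excess.std()

rng = np.random.default_rng(0)
r = rng.normal(0.0005, 0.008, 600)   # 600 days of hypothetical daily returns
print(f"annualized Sharpe: {annualized_sharpe(r):.2f}")
```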

Anonymous said...

Hi Ernie.

In your book, Quantitative Trading, you said we should use 252 for N to annualize a Sharpe ratio because there are 252 trading days per year on average. Does this assume the mean return and the standard deviation of returns are measured in trading days?

What if you measure returns in calendar days? For example, if you long a stock and it rises 5% after a month, that means it took 30 calendar days to rise 5%, even if there were only 22 trading days in that month, because the stock cannot move on non-trading days. Then, the daily return for this trade would be 0.05 / 30, rather than 0.05 / 22. Can I use 365 for N in this case to annualize the Sharpe?

Thanks!

Ernie Chan said...

Hi Anon,
Yes, if you average returns on trading days only, we should multiply by 252 to annualize. If you average returns on calendar days, you should multiply by 365.
Ernie

cheerful said...

Dr Ernie,

I remember you would only want to trade a strategy with a Sharpe ratio > 2 after transaction costs. I am using your USDCAD data. How should I insert the transaction cost? I.e., for each trade, the price increases from 1.0000 to 1.0012. What is the practical capital I should put in, and how much should I subtract for the transaction cost? Is using dollars or percentages better?

Thank you Leo

Ernie Chan said...

Hi Leo,
You can generally assume a transaction cost of about 5bps, maybe a bit lower for FX. See Quantitative Trading example 3.7 for the general way to insert transaction cost into a backtest.
Ernie
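
A sketch, in the spirit of that example but not a reproduction of it, of deducting a one-way cost each time the position changes (the returns and positions below are simulated stand-ins):

```python
import numpy as np

onewaytcost = 0.0005   # 5 bps per one-way trade

# Hypothetical daily returns and daily positions (+1 / -1) of a strategy
rng = np.random.default_rng(0)
ret = rng.normal(0, 0.004, 500)
pos = np.sign(rng.normal(size=500))

gross = pos[:-1] * ret[1:]                     # yesterday's position earns today's return
turnover = np.abs(np.diff(pos, prepend=0.0))   # units of position changed each day
net = gross - onewaytcost * turnover[:-1]      # pay the cost on the day each trade is put on

print(f"gross Sharpe {np.sqrt(252) * gross.mean() / gross.std():.2f}, "
      f"net Sharpe {np.sqrt(252) * net.mean() / net.std():.2f}")
```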

cheerful said...

Dear Dr Ernie,

It is another awesome book. In the book, you wrote 5bps for the one-way cost. 1) Do you consider a round trip, i.e. the transaction cost associated with closing the position? 2) Is the transaction cost of 5bps an estimate of the real transaction cost + slippage + bid/ask spread?

Thank you for your teachings.
Leo

Ernie Chan said...

Hi Leo,
5 bps is the one way transaction cost estimate that includes everything: commissions, bid-ask spread, and slippage.

Of course, if one trades a much larger size than the typical bid/ask size, then the slippage and total cost will be larger.

Ernie

cheerful said...

Dear Dr Ernie,

If we put in that transaction cost, maybe all the examples in your second book will have negative Sharpe ratios?

Thank you for your teachings
Leo

Ernie Chan said...

Hi Leo,
You can try it, and you will conclude otherwise.
Ernie

cheerful said...

Dear Dr Chan,

May I know whether you have looked at smaller time frames than the daily currency data you used in your book? When I use a smaller time frame, the Sharpe ratio is high but there are too many trades. When and how should we use smaller time frames?

Thank you for your teachings
Leo

Ernie Chan said...

Hi Leo,
We have done research on FX data down to 1 ms.

The data frequency required depends on your strategy, but generally speaking there is no harm in using high frequency data: just because your data is high frequency does not mean that you have to trade as frequently.

Ernie

cheerful said...

Dear Dr Ernie,

1ms resolution is challenging. =) I am using 1-min data of your USDCAD to build my signal. You are right, we can use high frequency data but need not trade frequently. My trading signal oscillates between long and short too frequently. May I know what methods you use to avoid trading too frequently? I feel that using just Bollinger bands or linear mean reversion + half-life is not enough.

Ernie Chan said...

Hi Leo,
Generally speaking, the longer the lookback, the longer the average holding period.
Ernie

cheerful said...

Dr Ernie,

I read you did short-term mean reversion for FX in 2012. How do you avoid too many short-duration trades? If I use a longer look-back, my Sharpe drops a lot too and the strategy becomes unprofitable.

Thank you
Leo

Ernie Chan said...

Leo,
Short duration trades are not bad in themselves. They are only bad if the profit per trade is too small. You would have to adjust your entry criteria so that the expected profit is large before you enter.
Ernie

cheerful said...

Dr Ernie,

May I know what the criteria could be? I think it is not possible to know the expected return ahead of time. I am working on your USDCAD data.

Ernie Chan said...

Leo,
If you are using a mean reversion strategy, you may want to enter only when the price is, say, 2 standard deviations away from the mean.
Ernie
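
A bare-bones illustration of that kind of entry filter (Bollinger-style z-score; the price series, lookback and threshold below are all hypothetical):

```python
import numpy as np
import pandas as pd

# Hypothetical 1-minute FX-like price series
rng = np.random.default_rng(0)
price = pd.Series(1.00 + 0.002 * np.sin(np.arange(2000) / 50)
                  + 0.0005 * rng.standard_normal(2000))

lookback = 60
zscore = (price - price.rolling(lookback).mean()) / price.rolling(lookback).std()

# Enter only on large deviations from the rolling mean
entry_threshold = 2.0
long_entry = zscore < -entry_threshold
short_entry = zscore > entry_threshold
print(f"long entries: {long_entry.sum()}, short entries: {short_entry.sum()}, "
      f"out of {len(price)} bars")
```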

cheerful said...

Dear Dr Ernie,

May I know what the estimated transaction cost is for US small caps (i.e. Russell 2000 stocks), for one round trip of a long and short strategy? If open and close for a long ticker is 5bps, it will be the same for a short ticker. With borrowing costs averaging 4% annually for the short, that is about 1.6bps a day for a short position.

Then the total estimated transaction cost will be 5bps + 5bps + 1.6bps = 11.6bps for a combined long and short open and close. This is a headache for daily rebalancing.

Ernie Chan said...

Hi cheerful,

By transaction cost, I assume you mean that of transacting just 100 shares? The 5bps one-way tcost applies only to large cap stocks. Small cap stocks can incur 10 bps or even higher costs, especially if you transact more than 100 shares.

Transaction costs do not include margin costs. You will have to pay the margin cost whether or not you transact.

Ernie

cheerful said...

Hi Dr Chan,

So is the 10bps two-way for a small cap ticker? If I have a long and a short, would that mean I would have 20bps + margin cost?

Thank you

Ernie Chan said...

Hi Cheerful,
No, 10 bps is one-way.
A roundtrip for a long/short pair may incur 40bps.
Ernie