This is the final post in a four-part series exploring the Kelly criterion.

In a previous post, we looked at the Kelly formula, which maximizes long-term
wealth in a series of gambles where the winnings are continually re-invested.
Is this equivalent to maximizing the expected return in each game? It turns out
that the answer is "no". In this post we'll look into the reasons for this and
discover the pitfalls of expected values.

We will look at the same game as in the previous post:

$$ \frac{V_1}{V_0} = (1+lr_W)^W(1+lr_L)^{(1-W)} $$

with the variables:

- \(V_0, V_1\): the available money before and after the first round
- \(l\): fraction of available money to bet in each round (the variable to optimize)
- \(r_W, r_L\): return on win and loss, 0.4 and -1 in our example (i.e. 40% of
wager awarded on win, otherwise 100% of wager lost)
- \(W\): Random variable describing our chances to win; valued \(1\) with \(p=0.8\), \(0\) with \(p=0.2\)

The Kelly formula obtained from maximizing \(\log V_1/V_0\) tells us to invest
30% of our capital in such a gamble. Let's see what the result is if we
maximize the expected value \(E[V_1/V_0]\) instead.
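As a quick sanity check, the 30% figure can be reproduced by maximizing the expected *log*-return in SymPy (a minimal sketch using exact rationals so that `solve` gives a clean answer):

```python
import sympy as sp
import sympy.stats as ss

l = sp.symbols('l', positive=True)
W = ss.Bernoulli('W', sp.Rational(4, 5))  # win with p = 0.8

# Expected log-growth: E[log(V_1/V_0)] = 0.8*log(1+0.4l) + 0.2*log(1-l)
log_growth = ss.E(W * sp.log(1 + sp.Rational(2, 5) * l)
                  + (1 - W) * sp.log(1 - l))

l_star = sp.solve(sp.diff(log_growth, l), l)
print(l_star)  # [3/10], i.e. invest 30% of capital
```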

This is trivial by hand, but we'll use SymPy, because we can:

```python
import sympy as sp
import sympy.stats as ss
sp.init_printing()

l = sp.symbols('l')         # define the symbol/variable l
W = ss.Bernoulli('W', 0.8)  # random variable, 1 with p=0.8, else 0

def f1(W):                  # define f1 = V_1/V_0
    return (1 + 0.4*l)**W * (1 - l)**(1 - W)

ss.E(f1(W))                 # calculate the expected value
```

Evaluating this gives:

$$
E[\frac{V_1}{V_0}] = 1 + 0.12l
$$
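This matches the short by-hand calculation, since the expectation is just the probability-weighted sum of the two outcomes:

$$ E[\frac{V_1}{V_0}] = 0.8\,(1 + 0.4l) + 0.2\,(1 - l) = 1 + 0.12l $$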

Uh-hum... so the expected return has no maximum but grows linearly with
\(l\). Essentially, this approach advises you to bet all your money, and to
borrow more if you can get it at negligible interest rates.

Could the problem be that we only look at a single round? Let's examine the
expected return after playing 10 rounds:

```python
# Note: we cannot use f1(W)**10, since each round needs an independent sample
W_list = [ss.Bernoulli('W_%d' % i, 0.8) for i in range(10)]
f10 = sp.prod([f1(Wi) for Wi in W_list])
sp.expand(ss.E(f10))
```

$$
E[\frac{V_{10}}{V_0}] = 6 \cdot 10^{-10} l^{10} + 5 \cdot 10^{-8} l^9 + ... + 1.2 l + 1.0
$$

All the coefficients of the polynomial are positive, so there is no maximum for
\(l \geq 0\). What's going on?

Time to dig deeper. Let's say we bet all of our money each round. If we lose
just once, all of our money is gone. After 10 rounds playing this strategy, the
probability of total loss is:

$$ p({\text{at least one loss in 10 games}}) = 1 - 0.8^{10} \approx 0.89 $$

So 89% of the time we would lose all our money. However, the expected return
after 10 rounds at \(l=1\) is:

$$ E[\frac{V_{10}}{V_0}]_{l=1} \approx 3.1 $$

So on average we'd have \$3.10 after 10 rounds for every dollar initially bet,
but almost 90% of the time we'd lose everything. Strange.
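Both numbers are easy to check numerically:

```python
# Probability of losing at least once in 10 all-in rounds (l=1)
p_ruin = 1 - 0.8**10

# Expected return per round at l=1 is 0.8*1.4 + 0.2*0 = 1.12
expected_10 = 1.12**10

print(round(p_ruin, 2), round(expected_10, 1))  # 0.89 3.1
```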

## The big reveal

Things become clearer when we look in more detail at the calculation of
\(E[V_{10}/V_0]\):

\begin{align*}
E[\frac{V_{10}}{V_0}]_{l=1}
= {} & \binom{10}{0}\, 1.4^{0}\, 0^{10} \times 0.8^{0}\, 0.2^{10} \\
{}+{} & \binom{10}{1}\, 1.4^{1}\, 0^{9} \times 0.8^{1}\, 0.2^{9} \\
{}+{} & \binom{10}{2}\, 1.4^{2}\, 0^{8} \times 0.8^{2}\, 0.2^{8} \\
{}+{} & \ldots \\
{}+{} & \binom{10}{9}\, 1.4^{9}\, 0^{1} \times 0.8^{9}\, 0.2^{1} \\
{}+{} & \binom{10}{10}\, 1.4^{10}\, 0^{0} \times 0.8^{10}\, 0.2^{0} \tag{*} \\
\end{align*}

The expected value is the sum of probability-weighted outcomes (\(1.4\) and
\(0\) are the per-round outcomes for win and loss). Since a single loss wipes
out *all* the money, the only non-zero term in the sum is the starred one,
which occurs with about 11% probability, at a value gain of \(1.4^{10} \approx
29\). This high gain is enough to drag the expected return up to 3.1. When more
than 10 rounds are played, these numbers become more extreme: the winning
probability plummets, but the winning payoff skyrockets, dragging the expected
return further upwards.
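The trend is easy to see numerically: the probability of the all-win run falls like \(0.8^n\), its payoff grows like \(1.4^n\), and their product (the expected return at \(l=1\)) still grows as \(1.12^n\):

```python
for n in (10, 20, 50):
    p_all_wins = 0.8**n  # probability of winning every single round
    payoff = 1.4**n      # payoff of the all-win run at l=1
    # the product equals 1.12**n and keeps growing with n
    print(n, p_all_wins, payoff, p_all_wins * payoff)
```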

This is a bit reminiscent of the St. Petersburg paradox, in that
arbitrarily small probabilities can drag the expected return to completely
different (read "unrealistic") values.

## Different kinds of playing

The Kelly approach builds on the assumption that you play with all your
available wealth as base capital, and tells you what fraction of that amount to
invest. It requires reinvestment of your winnings. Obviously, investing
everything in one game (\(l=1\)) is insane, since a single loss would bankrupt
you. However, following Kelly's strategy is the fastest way to grow total
wealth.

The expected-value approach of "invest everything you have" is applicable in a
different kind of situation. Let's say you can play only one game per day, have
a fixed gambling budget each day, and thus are barred from reinvesting your
wins. If you invest your full daily gambling budget, you may win or lose, but
over the long run you will average a daily return of \(0.8 \cdot 1.4 = 1.12\)
for every dollar invested. The more you can invest per day, the higher your
wins; hence the pressure towards large \(l\) values.

In a way, Kelly optimizes for the highest probability of large returns when
re-investing winnings, while the expected value strategy optimizes for large
returns, even if the probability is very low.

## A dubious game

Should you play a game where the winning probability \(p\) is \(10^{-6}\), but
the winning return \(r_W\) is \(2\cdot 10^6\)? Mathematically it seems like a
solid bet with a 100% return on investment in the long run. The question is
whether you can reach "the long run". Can you afford to play the game a million
times? If not, you'll most likely lose money. If you can afford to play a few
million times, it becomes a nice investment indeed.
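The "long run" can be quantified: the chance of seeing at least one win after \(n\) plays is \(1 - (1 - 10^{-6})^n\):

```python
p = 1e-6
for n in (1_000, 1_000_000, 5_000_000):
    # probability of at least one win in n independent plays
    p_at_least_one_win = 1 - (1 - p)**n
    print(f"{n:>9}: {p_at_least_one_win:.3f}")  # 0.001, 0.632, 0.993
```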

Kelly would tell you to invest only a very small fraction of your total wealth
into such a game. The expected-value formalism advises to invest as much as
possible, which for most people is bad advice even when playing with a fixed
daily budget and no reinvestment (i.e. the expected-value play style).

This is an interesting example for two reasons:

- It demonstrates one of the ways the "rich become richer": the game has high
returns, but also a significant barrier to entry.
- It demonstrates a downside of both the Kelly and the expected-value
approach. The two strategies are optimal in their respective use cases in the
limit of infinitely many games; for finitely many games, however, they may
give bad advice, especially regarding very low-probability winning scenarios.

## Conclusion

So much for a brief discussion of the relationship between the Kelly strategy
and the expected return. For me, it was striking how two seemingly similar
approaches ("maximize the moneys") lead to such different results, and how
unintuitive the expected value can be in the face of outliers.

If you're interested in a Jupyter Notebook containing several interactive plots
that really helped me understand this material, you can find it here.

## The Kelly criterion

Over the course of this series, we looked at the classical Kelly criterion in
the first post, and at how it can be extended to situations such as stock
buying, with multiple parallel investment opportunities, in the second post.
Next, we investigated the origin of the logarithm in the Kelly formula in the
third post, before finishing up with the current discussion of expected values.

Surely, there's more to say about the Kelly criterion. If you want to leave
your thoughts, please do so in the comments below!