This is the first post in a four-part series exploring the Kelly criterion.

The Kelly criterion is a formula for determining the optimal size of a series of bets in order to maximize wealth. It is often described as maximizing the expected logarithm of wealth, and in the long run it outperforms any other betting strategy.

Let's look at a simple gamble: when you play, you win with probability \(p\) and lose with probability \(q = 1-p\). If you win, you get back your initial bet plus a fraction \(r\) of that bet. If you lose, your bet is forfeited. The Kelly formula for the optimal fraction of your available wealth to bet is:

$$ l_{opt} = \frac{pr - q}{r} = \frac{pr - (1-p)}{r} = p - \frac{1-p}{r} $$
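In code, the formula is a one-liner. A minimal Python sketch (the function name `kelly_fraction` is mine, not from the demo below):

```python
def kelly_fraction(p: float, r: float) -> float:
    """Optimal fraction of wealth to bet when you win a fraction r of
    the stake with probability p and forfeit the stake otherwise."""
    return p - (1 - p) / r

# The game used in the demo below: p = 0.8, r = 0.4
print(round(kelly_fraction(0.8, 0.4), 4))  # → 0.3
```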

Here is a small demo to explore the formula:

Kelly calculation demo (interactive): enter the probability of winning \(p\) and the profit on win \(r\), both as percentages, and it displays the resulting optimal fraction of wealth to bet.

The main takeaways are:

  • In each game you invest a fraction \(l\) of the total amount of money you have. If you play the game from the demo (\(p=0.8, r=0.4\)) and you have $100 available, you should invest $30 (since \(l=0.3\)). If you win, you will have $112 (\(100 + 30 \times 0.4\)) and invest $33.60 (\(112 \times 0.3\)) in the second round. If you lost the first round, you'd have $70 and invest $21 in the second round. If you cannot reinvest your earnings, Kelly does not apply.
  • This formula holds in the limit of a large number of games. If you play only a few rounds, it may not apply.
  • When it applies, the Kelly strategy will generate the most wealth out of your initial starting capital.
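The reinvestment rule from the first bullet can be traced in a few lines. A sketch using the demo's parameters (\(p=0.8\), \(r=0.4\), so \(l=0.3\)); the function name is mine:

```python
def play_round(wealth: float, l: float, r: float, won: bool) -> float:
    """Wealth after one round: stake a fraction l of wealth, gain r per
    unit staked on a win, forfeit the whole stake on a loss."""
    stake = wealth * l
    return wealth + stake * r if won else wealth - stake

l, r = 0.3, 0.4
after_win = play_round(100, l, r, won=True)    # ≈ 112
after_loss = play_round(100, l, r, won=False)  # ≈ 70
print(after_win, after_win * l)    # wealth and next stake after a win
print(after_loss, after_loss * l)  # wealth and next stake after a loss
```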

Derivation of the Kelly criterion

We will take a short dive into mathematics to understand how Kelly came up with this formula. Let's compute the wealth \(V_1\) after the first round of gaming, when starting from an initial wealth \(V_0\):

$$ V_{1} = V_{0} (1 + Wlr_W + (1-W)lr_L) $$

Here, \(l\) is the fraction of money bet, \(r_W\) is the profit on win, \(r_L\) is the fraction of the bet forfeited on loss (in the above scenario, \(r_L=-1\)). \(W\) is a random variable that's \(1\) with probability \(p\) and \(0\) with \(1-p\). So with probability \(p\) we'll have \(W=1\), giving \(V_{1,win} = V_0 (1+lr_W)\), with probability \(1-p\) we'll get \(V_{1,loss} = V_0 (1+lr_L)\).

We can reformulate the expression in a way that will show an important generalization:

$$ V_1 = V_0 (1+lr_W)^W(1+lr_L)^{(1-W)} $$

The two expressions look very different, but they are equivalent since \(W\) can only take on the values \(0\) and \(1\). If \(W=1\) the second term will vanish since \(x^0 = 1\), and we will get the same result for \(V_{1,win}\). The same consideration holds for \(V_{1,loss}\).
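The equivalence is easy to check mechanically as well. A small sketch (function names are mine) comparing both forms for \(W=0\) and \(W=1\):

```python
def v1_sum_form(v0, l, r_w, r_l, w):
    """V1 = V0 * (1 + W*l*r_W + (1 - W)*l*r_L)"""
    return v0 * (1 + w * l * r_w + (1 - w) * l * r_l)

def v1_product_form(v0, l, r_w, r_l, w):
    """V1 = V0 * (1 + l*r_W)**W * (1 + l*r_L)**(1 - W)"""
    return v0 * (1 + l * r_w) ** w * (1 + l * r_l) ** (1 - w)

# W only ever takes the values 0 (loss) and 1 (win).
for w in (0, 1):
    print(v1_sum_form(100, 0.3, 0.4, -1.0, w),
          v1_product_form(100, 0.3, 0.4, -1.0, w))
```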

If we repeat this game \(n\) times, we arrive at

$$ V_n = V_0 (1+lr_W)^{W_n}(1+lr_L)^{(n-W_n)} $$

with \(W_n\) being the number of wins in those \(n\) games. Essentially, each won round adds a factor \((1+lr_W)\) to the expression, each lost round \((1+lr_L)\).

Now comes a crucial step: how many wins do we expect after playing many games? Stated mathematically, what value will \(W_n\) take for large \(n\)? By the law of large numbers we expect \(W_n = pn\). So for \(p=0.8\) and \(n=10000\) we would expect to win 8000 of the 10000 games.

Let's plug this into our formula:

$$ V_n = V_0 (1+lr_W)^{pn}(1+lr_L)^{(1-p)n} $$
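As a numeric sanity check (my own sketch, not from the post), we can evaluate the per-round growth factor \((V_n/V_0)^{1/n}\) for the demo game at a few values of \(l\). Both under- and over-betting reduce growth, and betting too much even shrinks wealth in this favorable game:

```python
def growth_per_round(l, p, r_w, r_l):
    """Expected-case wealth multiplier per round, i.e. (V_n/V_0)**(1/n)."""
    return (1 + l * r_w) ** p * (1 + l * r_l) ** (1 - p)

p, r_w, r_l = 0.8, 0.4, -1.0
for l in (0.1, 0.3, 0.6, 0.9):
    print(f"l = {l}: growth {growth_per_round(l, p, r_w, r_l):.4f}")
```

At \(l=0.3\) the growth factor is largest; at \(l=0.6\) and beyond it falls below 1, so wealth shrinks despite the favorable odds.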

The law of large numbers allowed us to eliminate the random variables \(W\), \(W_n\) and instead obtain an explicit formula in terms of \(p, n, r_W, r_L\). Since we are interested in maximizing our profit, we are looking for \(l_{opt}\), the value of \(l\) that maximizes \({V_n}/{V_0}\).

$$ l_{opt} = \operatorname*{argmax}_l \frac{V_n}{V_0} = \operatorname*{argmax}_l \log \frac{V_n}{V_0} = \operatorname*{argmax}_l p \log(1 + l r_W) + (1-p) \log(1 + l r_L) $$

We are free to introduce the logarithm and drop the \(n\) factor because this does not change the value of \(l_{opt}\).1 Differentiating the logarithmic expression and solving for \(l\) gives:

$$ l_{opt} = -\frac{p}{r_L} - \frac{1-p}{r_W} $$

This is identical to the formula given in the introduction after substituting \(r_L = -1\) and \(r_W = r\).
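The closed form can be sanity-checked against a brute-force grid search over \(l\) (a sketch; function names and the grid resolution are my own choices):

```python
import math

def log_growth(l, p, r_w, r_l):
    """The maximized quantity: p*log(1 + l*r_W) + (1-p)*log(1 + l*r_L)."""
    return p * math.log(1 + l * r_w) + (1 - p) * math.log(1 + l * r_l)

def kelly_general(p, r_w, r_l):
    """Closed-form optimum: -p/r_L - (1-p)/r_W."""
    return -p / r_l - (1 - p) / r_w

p, r_w, r_l = 0.8, 0.4, -1.0
# Search only fractions keeping 1 + l*r_L > 0 (here: l < 1).
grid = [i / 10000 for i in range(9999)]
l_numeric = max(grid, key=lambda l: log_growth(l, p, r_w, r_l))
print(l_numeric, kelly_general(p, r_w, r_l))
```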

Extension to games with more than two possible outcomes

We can extend the Kelly argument to games that have a greater number of possible outcomes. Such games could be:

  • A game with 6 different rewards selected by roll of a 6-sided die, where rewards may be negative (money lost)
  • An investment in a stock, where the outcomes form a continuous range.

If we have multiple outcomes \(r_i\), each happening with probability \(p_i\), we obtain:

$$ l_{opt} = \operatorname*{argmax}_l \sum_i p_i \log(1 + l r_i) $$

While this equation is not easily solvable analytically, it is trivial to solve numerically. This allows us to extend the Kelly approach to a new kind of game with some interesting practical applications.
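As a concrete sketch, here is the die game from the list above solved by grid search; the per-face returns \(r_i\) are made up for illustration and are not from the post:

```python
import math

def log_growth(l, outcomes):
    """Sum over outcomes of p_i * log(1 + l * r_i)."""
    return sum(p * math.log(1 + l * r) for p, r in outcomes)

# Fair six-sided die; the per-face returns r_i are illustrative only.
outcomes = [(1 / 6, r) for r in (-1.0, -0.5, 0.0, 0.2, 0.8, 2.0)]

# Grid search over fractions keeping every 1 + l*r_i positive (l < 1).
grid = [i / 10000 for i in range(9999)]
l_opt = max(grid, key=lambda l: log_growth(l, outcomes))
print(round(l_opt, 4))
```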

If you're interested in a Jupyter Notebook containing several interactive plots that really helped me understand the material outlined in this post, you can find it here.

The Kelly formalism can be generalized further still, taking into account multiple parallel investment opportunities. We'll look at this in the next post.

1 The logarithm is a strictly monotonically increasing function, so \(\alpha \gt \beta \Leftrightarrow \log \alpha \gt \log \beta\). Thus it doesn't change the location of any maxima: \(\operatorname*{argmax} f(x) = \operatorname*{argmax} \log f(x)\). See here for more information about properties of \(\operatorname*{argmax}\).