# Tau-leaping

In probability theory, tau-leaping, or τ-leaping, is an approximate method for simulating a stochastic system. It is based on the Gillespie algorithm, but performs all reactions for an interval of length tau before updating the propensity functions. Because the rates are updated less often, simulation is more efficient, which allows larger systems to be considered.

Cao et al. improved the method to prevent the generation of negative populations.

## Algorithm

The algorithm is analogous to the Euler method for deterministic systems, but instead of making a fixed change

$X_{i}(t+\tau )=X_{i}(t)+\tau \sum _{j}R_{j}(\mathbf {x} (t))\,v_{ij},$

the change is Poisson-distributed,

$X_{i}(t+\tau )=X_{i}(t)+\sum _{j}K_{j}v_{ij},\qquad K_{j}\sim {\text{Poisson}}(R_{j}(\mathbf {x} (t))\,\tau ),$

as detailed in the following steps.

1. Initialise the model with initial conditions $\mathbf {x} (t_{0})=\{X_{i}(t_{0})\}$ .
2. Calculate the event rates $R_{j}(\mathbf {x} (t))$ .
3. Choose a time step $\tau$ . This may be fixed, or by some algorithm dependent on the various event rates.
4. For each event $E_{j}$ generate $K_{j}\sim {\text{Poisson}}(R_{j}\tau )$ , which is the number of times each event occurs during the time interval $[t,t+\tau )$ .
5. Update the state by
$\mathbf {x} (t+\tau )=\mathbf {x} (t)+\sum _{j}K_{j}v_{ij}$ where $v_{ij}$ is the change in state variable $X_{i}$ due to event $E_{j}$ . At this point it may be necessary to check that no populations have reached unrealistic values (such as a population becoming negative due to the unbounded nature of the Poisson variable $K_{j}$ ).
6. Repeat from Step 2 until some desired condition is met (e.g. a particular state variable reaches 0, or time $t_{1}$ is reached).
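The steps above can be sketched in Python. This is a minimal illustration, not the authors' implementation: the function names, the fixed choice of $\tau$, the clipping guard against negative populations, and the birth-death example model are all assumptions made for the sake of a runnable example.

```python
import numpy as np

def tau_leap(x0, rates, nu, tau, t_end, rng=None):
    """Fixed-step tau-leaping (illustrative sketch).

    x0    : initial state vector of counts, length N        (step 1)
    rates : function x -> array of event rates R_j, length M
    nu    : N x M matrix; nu[i, j] is the change v_ij in X_i due to event E_j
    tau   : fixed time step (step 3, simplest possible choice)
    """
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, dtype=np.int64).copy()
    t = 0.0
    history = [(t, x.copy())]
    while t < t_end:                       # step 6: repeat until t_end
        R = rates(x)                       # step 2: event rates R_j(x(t))
        K = rng.poisson(R * tau)           # step 4: K_j ~ Poisson(R_j * tau)
        x = x + nu @ K                     # step 5: x(t+tau) = x(t) + sum_j K_j v_ij
        np.clip(x, 0, None, out=x)         # crude guard against negative populations
        t += tau
        history.append((t, x.copy()))
    return history

# Example (assumed model): birth-death process with births at constant
# rate b and deaths at per-capita rate d, so R = [b, d * X].
b, d = 10.0, 0.1
rates = lambda x: np.array([b, d * x[0]])
nu = np.array([[1, -1]])                   # birth: +1, death: -1
traj = tau_leap([0], rates, nu, tau=0.1, t_end=50.0,
                rng=np.random.default_rng(0))
```

Clipping to zero is the simplest safeguard; the refinement by Cao et al. mentioned above instead adjusts the method to avoid generating negative populations in the first place.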

## Algorithm for efficient step size selection

This algorithm is described by Cao et al. The idea is to bound the relative change in each event rate $R_{j}$ by a specified tolerance $\epsilon$ (Cao et al. recommend $\epsilon =0.03$ , although it may depend on model specifics). This is achieved by bounding the relative change in each state variable $X_{i}$ by $\epsilon /g_{i}$ , where $g_{i}$ depends on the rate that changes the most for a given change in $X_{i}$ . Typically $g_{i}$ is equal to the highest-order event rate, but this may be more complex in different situations (especially epidemiological models with non-linear event rates).

This algorithm typically requires computing $2N$ auxiliary values (where $N$ is the number of state variables $X_{i}$ ), and should only require reusing the previously calculated values $R_{j}(\mathbf {x} )$ . An important consideration is that since $X_{i}$ is integer-valued, there is a minimum amount by which it can change; this prevents the relative change in $R_{j}$ from being bounded arbitrarily close to 0, which would otherwise cause $\tau$ to also tend to 0.
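A sketch of this step-size selection, under the common formulation in which the $2N$ auxiliary values are the expected change $\mu_i = \sum_j v_{ij} R_j$ and the variance $\sigma_i^2 = \sum_j v_{ij}^2 R_j$ per unit time for each state variable; the function name and the specific example values are assumptions for illustration.

```python
import numpy as np

def select_tau(x, R, nu, g, eps=0.03):
    """Step-size selection in the style of Cao et al. (illustrative sketch).

    x   : state vector, length N
    R   : previously calculated event rates R_j, length M (reused, not recomputed)
    nu  : N x M state-change matrix, nu[i, j] = v_ij
    g   : g_i values, length N
    eps : relative-change tolerance (0.03 recommended)
    """
    mu = nu @ R                  # N auxiliary values: expected change per unit time
    sigma2 = (nu ** 2) @ R       # N auxiliary values: variance of change per unit time
    # max(eps * x_i / g_i, 1) reflects the minimum integer change in X_i,
    # which keeps the bound away from 0 and so keeps tau away from 0.
    bound = np.maximum(eps * np.asarray(x, dtype=float) / g, 1.0)
    with np.errstate(divide="ignore"):
        tau = np.minimum(bound / np.abs(mu), bound ** 2 / sigma2)
    return float(np.min(tau))

# Example (assumed): birth-death process at X = 100 with R = [10, 10],
# so mu = 0 and only the variance term constrains tau.
tau = select_tau([100], np.array([10.0, 10.0]),
                 np.array([[1, -1]]), np.array([1.0]))  # ≈ 0.45
```

When $\mu_i = 0$, as in this balanced example, the corresponding ratio is infinite and the variance term alone determines the bound, which is why both terms are needed.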