Renewal theory

{{Short description|Branch of probability theory}}

Renewal theory is the branch of probability theory that generalizes the Poisson process for arbitrary holding times. Instead of exponentially distributed holding times, a renewal process may have any independent and identically distributed (IID) holding times that have finite expectation. A renewal-reward process additionally has a random sequence of rewards incurred at each holding time, which are IID but need not be independent of the holding times.

A renewal process has asymptotic properties analogous to the strong law of large numbers and central limit theorem. The renewal function m(t) (expected number of arrivals) and reward function g(t) (expected reward value) are of key importance in renewal theory. The renewal function satisfies a recursive integral equation, the renewal equation. The key renewal equation gives the limiting value of the convolution of m'(t) with a suitable non-negative function. The superposition of renewal processes can be studied as a special case of Markov renewal processes.

Applications include calculating the best strategy for replacing worn-out machinery in a factory; comparing the long-term benefits of different insurance policies; and modelling the transmission of infectious disease, where "One of the most widely adopted means of inference of the reproduction number is via the renewal equation".{{cite journal |doi=10.1098/rsif.2021.0429 |title=Inferring the reproduction number using the renewal equation in heterogeneous epidemics |date=2022 |last1=Green |first1=William D. |last2=Ferguson |first2=Neil M. |last3=Cori |first3=Anne |journal=Journal of the Royal Society Interface |volume=19 |issue=188 |pmid=35350879 |pmc=8965414 }} The inspection paradox relates to the fact that observing a renewal interval at time t gives an interval with average value larger than that of an average renewal interval.

Renewal processes

=Introduction=

The renewal process is a generalization of the Poisson process. In essence, the Poisson process is a continuous-time Markov process on the positive integers (usually starting at zero) which has independent exponentially distributed holding times at each integer i before advancing to the next integer, i+1. In a renewal process, the holding times need not have an exponential distribution; rather, the holding times may have any distribution on the positive numbers, so long as the holding times are independent and identically distributed (IID) and have finite mean.

=Formal definition=

File:Renewal process.reetep.png

Let (S_i)_{i \geq 1} be a sequence of positive independent identically distributed random variables with finite expected value

: 0 < \operatorname{E}[S_i] < \infty.

We refer to the random variable S_i as the "i-th holding time".

Define for each n > 0:

: J_n = \sum_{i=1}^n S_i,

each J_n is referred to as the "n-th jump time" and the intervals [J_n,J_{n+1}] are called "renewal intervals".

Then (X_t)_{t\geq0} is given by the random variable

: X_t = \sum^\infty_{n=1} \operatorname{\mathbb{I}}_{\{J_n \leq t\}}=\sup \left\{\, n: J_n \leq t\, \right\}

where \operatorname{\mathbb{I}}_{\{J_n \leq t\}} is the indicator function

:\operatorname{\mathbb{I}}_{\{J_n \leq t\}} = \begin{cases}

1, & \text{if } J_n \leq t \\

0, & \text{otherwise}

\end{cases}

(X_t)_{t\geq0} represents the number of jumps that have occurred by time t, and is called a renewal process.
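For illustration, a renewal process can be simulated directly from this definition: draw IID holding times, form the jump times by cumulative summation, and count how many jump times fall in [0, t]. The following minimal Python sketch does this, with gamma-distributed holding times chosen purely as an example (any positive IID distribution with finite mean would do):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(seed=42)

def renewal_count(t, sample_holding_times, n_max=10_000):
    """Return X_t, the number of jump times J_n = S_1 + ... + S_n with J_n <= t."""
    holding_times = sample_holding_times(n_max)   # S_1, ..., S_{n_max}
    jump_times = np.cumsum(holding_times)         # J_1, ..., J_{n_max}
    # n_max must be large enough that J_{n_max} > t; otherwise the count is truncated.
    return int(np.searchsorted(jump_times, t, side="right"))

# Example: gamma(shape=2, scale=0.5) holding times, so E[S_i] = 1.
sample = lambda n: rng.gamma(shape=2.0, scale=0.5, size=n)
print(renewal_count(100.0, sample))   # roughly 100 jumps by time t = 100
</syntaxhighlight>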

=Interpretation=

If one considers events occurring at random times, one may choose to think of the holding times \{ S_i : i \geq 1 \} as the random time elapsed between two consecutive events. For example, if the renewal process models the successive breakdowns of a machine, the holding time represents the time between one breakdown and the next.

The Poisson process is the unique renewal process with the Markov property,{{sfnp|Grimmett|Stirzaker|1992|p=393}} as the exponential distribution is the unique continuous random variable with the property of memorylessness.
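Concretely, if S is exponentially distributed with rate \lambda, then for all s, t \geq 0

:\operatorname{P}(S > s + t \mid S > s) = \frac{e^{-\lambda (s+t)}}{e^{-\lambda s}} = e^{-\lambda t} = \operatorname{P}(S > t),

so the holding time remaining after any elapsed time s is distributed like a fresh holding time; no other continuous distribution on (0,\infty) has this property.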

Renewal-reward processes

File:Renewal-reward process.reetep.png

Let W_1, W_2, \ldots be a sequence of IID random variables (rewards) satisfying

:\operatorname{E}|W_i| < \infty.\,

Then the random variable

:Y_t = \sum_{i=1}^{X_t}W_i

is called a renewal-reward process. Note that unlike the S_i, each W_i may take negative values as well as positive values.

The random variable Y_t depends on two sequences: the holding times S_1, S_2, \ldots and the rewards W_1, W_2, \ldots These two sequences need not be independent. In particular, W_i may be a function of S_i.
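Continuing the simulation sketch above (the choice W_i = S_i^2 is an arbitrary illustration, picked to show that the rewards may depend on the holding times):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(seed=7)

def renewal_reward(t, n_max=10_000):
    """Return (X_t, Y_t) with rewards W_i = S_i**2, a function of the holding times."""
    s = rng.gamma(shape=2.0, scale=0.5, size=n_max)  # holding times S_i, E[S_i] = 1
    w = s**2                                         # rewards W_i (dependent on S_i)
    jumps = np.cumsum(s)
    x_t = int(np.searchsorted(jumps, t, side="right"))
    return x_t, float(w[:x_t].sum())                 # Y_t sums the first X_t rewards

print(renewal_reward(100.0))
</syntaxhighlight>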

=Interpretation=

In the context of the above interpretation of the holding times as the time between successive malfunctions of a machine, the "rewards" W_1,W_2,\ldots (which in this case happen to be negative) may be viewed as the successive repair costs incurred as a result of the successive malfunctions.

An alternative analogy is that we have a magic goose which lays eggs at intervals (holding times) distributed as S_i. Sometimes it lays golden eggs of random weight, and sometimes it lays toxic eggs (also of random weight) which require responsible (and costly) disposal. The "rewards" W_i are the successive (random) financial losses/gains resulting from successive eggs (i = 1,2,3,...) and Y_t records the total financial "reward" at time t.

Renewal function

We define the renewal function as the expected value of the number of jumps observed up to some time t:

:m(t) = \operatorname{E}[X_t].\,

=Elementary renewal theorem=

The renewal function satisfies

:\lim_{t \to \infty} \frac{1}{t} m(t) = \frac 1 {\operatorname{E}[S_1]}.

'''Proof'''

The strong law of large numbers for renewal processes implies

:\lim_{t \to \infty} \frac {X_t} t = \frac{1}{\operatorname{E}[S_1]}.

To prove the elementary renewal theorem, it is sufficient to show that \left\{\frac{X_t}{t}; t \geq 0 \right\} is uniformly integrable.

To do this, consider some truncated renewal process where the holding times are defined by \overline{S_n} = a \operatorname{\mathbb{I}}\{S_n > a\}, where a is a point such that 0 < F(a) = p < 1, which exists for all non-deterministic renewal processes. This new renewal process \overline{X}_t is an upper bound on X_t and its renewals can only occur on the lattice \{na; n \in \mathbb{N} \}. Furthermore, the number of renewals at each such lattice point is geometric with parameter p. So we have

:

\begin{align}

\overline{X_t} &\leq \sum_{i=1}^{\lceil t/a \rceil} \operatorname{Geometric}(p) \\

\operatorname{E}\left[\,\overline{X_t}^2\,\right] &\leq C_1 t + C_2 t^2 \\

P\left(\frac{X_t}{t} > x\right) &\leq \frac{\operatorname E\left[X_t^2\right]}{t^2x^2} \leq \frac{\operatorname E\left[\overline{X_t}^2\right]}{t^2x^2} \leq \frac{C}{x^2}.

\end{align}

In particular \operatorname{E}\left[(X_t/t)^2\right] \leq C_1/t + C_2 is bounded uniformly in t \geq 1, so \left\{\frac{X_t}{t}; t \geq 1 \right\} is bounded in L^2 and hence uniformly integrable, as required.
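The limit can also be checked numerically. The sketch below (holding times uniform on [0, 2] are an arbitrary illustrative choice, giving 1/\operatorname{E}[S_1] = 1) estimates m(t)/t by averaging the jump count over independent sample paths:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(seed=0)

def estimate_renewal_rate(t, n_paths=2_000, n_max=2_000):
    """Monte Carlo estimate of m(t)/t = E[X_t]/t."""
    counts = np.empty(n_paths)
    for k in range(n_paths):
        jumps = np.cumsum(rng.uniform(0.0, 2.0, size=n_max))  # E[S_1] = 1
        counts[k] = np.searchsorted(jumps, t, side="right")   # X_t on this path
    return counts.mean() / t

print(estimate_renewal_rate(1_000.0))   # close to 1/E[S_1] = 1
</syntaxhighlight>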

=Elementary renewal theorem for renewal-reward processes=

We define the reward function:

:g(t) = \operatorname{E}[Y_t].\,

The reward function satisfies

:\lim_{t \to \infty} \frac{1}{t}g(t) = \frac{\operatorname{E}[W_1]}{\operatorname{E}[S_1]}.

=Renewal equation=

The renewal function satisfies

:m(t) = F_S(t) + \int_0^t m(t-s) f_S(s)\, ds

where F_S is the cumulative distribution function of S_1 and f_S is the corresponding probability density function.

'''Proof'''{{sfnp|Grimmett|Stirzaker|1992|p=390}}

We condition on the first holding time, using the law of iterated expectations:

:m(t) = \operatorname{E}[X_t] = \operatorname{E}[\operatorname{E}(X_t \mid S_1)]. \,

From the definition of the renewal process, we have

:\operatorname{E}(X_t \mid S_1=s) = \operatorname{\mathbb{I}}_{\{t \geq s\}} \left( 1 + \operatorname{E}[X_{t-s}] \right). \,

So

:

\begin{align}

m(t) & = \operatorname{E}[X_t] \\[12pt]

& = \operatorname{E}[\operatorname{E}(X_t \mid S_1)] \\[12pt]

& = \int_0^\infty \operatorname{E}(X_t \mid S_1=s) f_S(s)\, ds \\[12pt]

& = \int_0^\infty \operatorname{\mathbb{I}}_{\{t \geq s\}} \left( 1 + \operatorname{E}[X_{t-s}] \right) f_S(s)\, ds \\[12pt]

& = \int_0^t \left( 1 + m(t-s) \right) f_S(s)\, ds \\[12pt]

& = F_S(t) + \int_0^t m(t-s) f_S(s)\, ds,

\end{align}

as required.
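The renewal equation also suggests a direct numerical scheme: discretize time with step \Delta and replace the convolution integral by a Riemann sum, so that each m(t_i) is computed from the already-known values m(t_{i-1}), m(t_{i-2}), \ldots. A minimal sketch (again assuming holding times uniform on [0, 2], so f_S = 1/2 and F_S(t) = t/2 on that interval):

<syntaxhighlight lang="python">
import numpy as np

# Uniform(0, 2) holding times: density f(s) = 1/2 and CDF F(t) = t/2 on [0, 2].
def f(s):
    return np.where((s >= 0.0) & (s <= 2.0), 0.5, 0.0)

def F(t):
    return np.clip(t / 2.0, 0.0, 1.0)

def renewal_function(t_max, dt=0.01):
    """Solve m(t) = F(t) + int_0^t m(t - s) f(s) ds on a grid by forward recursion."""
    n = int(round(t_max / dt))
    t = np.arange(n + 1) * dt
    m = np.zeros(n + 1)
    for i in range(1, n + 1):
        # Riemann sum over s = dt, 2*dt, ..., i*dt uses only earlier values of m.
        m[i] = F(t[i]) + dt * np.sum(f(t[1:i + 1]) * m[i - 1::-1])
    return t, m

t, m = renewal_function(10.0)
print(m[-1] / t[-1])   # approaches 1/E[S_1] = 1 as t grows
</syntaxhighlight>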

Key renewal theorem

Let X be a renewal process with renewal function m(t) and mean inter-renewal time \mu = \operatorname{E}[S_1]. Let g: [0,\infty) \rightarrow [0,\infty) be a function satisfying:

  • \int_0^\infty g(t)\, dt < \infty
  • g is monotone non-increasing

The key renewal theorem states that, as t\rightarrow \infty:{{sfnp|Grimmett|Stirzaker|1992|p=395}}

:\int_0^t g(t-x)m'(x) \, dx \rightarrow \frac{1}{\mu}\int_0^\infty g(x)\, dx

=Renewal theorem=

Considering g(x)=\mathbb{I}_{[0, h]}(x) for any h>0 gives as a special case the renewal theorem:{{sfnp|Feller|1971|p=347–351}}

:m(t+h)-m(t)\rightarrow \frac{h}{\mu} as t\rightarrow \infty

The result can be proved using integral equations or by a coupling argument.{{sfnp|Grimmett|Stirzaker|1992|p=394–5}} Though a special case of the key renewal theorem, it can be used to deduce the full theorem, by considering step functions and then increasing sequences of step functions.{{sfnp|Grimmett|Stirzaker|1992|p=395}}
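For instance, for a Poisson process of rate \lambda the renewal function is exactly m(t) = \lambda t, so with \mu = 1/\lambda

:m(t+h) - m(t) = \lambda h = \frac{h}{\mu} \text{ for every } t,

and the renewal theorem holds with equality for all t, not merely in the limit.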

Asymptotic properties

Renewal processes and renewal-reward processes have properties analogous to the strong law of large numbers, from which both of the limits below can be derived. If (X_t)_{t\geq0} is a renewal process and (Y_t)_{t\geq0} is a renewal-reward process, then:

: \lim_{t \to \infty} \frac{1}{t} X_t = \frac{1}{\operatorname{E}[S_1]} {{sfnp|Grimmett|Stirzaker|1992|p=394}}

: \lim_{t \to \infty} \frac{1}{t} Y_t = \frac{1}{\operatorname{E}[S_1]} \operatorname{E}[W_1]

almost surely.

'''Proof'''

First consider (X_t)_{t\geq0}. By definition we have:

:J_{X_t} \leq t \leq J_{X_t+1}

for all t \geq 0 and so

:

\frac{J_{X_t}}{X_t} \leq \frac{t}{X_t} \leq \frac{J_{X_t+1}}{X_t}

for all t \geq 0.

Now since 0< \operatorname{E}[S_i] < \infty we have:

:X_t \to \infty

as t \to \infty almost surely (with probability 1). Hence, writing n = X_t (so that n \to \infty as t \to \infty):

:\frac{J_{X_t}}{X_t} = \frac{J_n}{n} = \frac{1}{n}\sum_{i=1}^n S_i \to \operatorname{E}[S_1]

almost surely (using the strong law of large numbers); similarly:

:\frac{J_{X_t+1}}{X_t} = \frac{J_{X_t+1}}{X_t+1}\frac{X_t+1}{X_t} = \frac{J_{n+1}}{n+1}\frac{n+1}{n} \to \operatorname{E}[S_1]\cdot 1

almost surely.

Thus (since t/X_t is sandwiched between the two terms)

:

\frac{1}{t} X_t \to \frac{1}{\operatorname{E}[S_1]}

almost surely.{{sfnp|Grimmett|Stirzaker|1992|p=395}}

Next consider (Y_t)_{t\geq0}. We have

:\frac{1}{t}Y_t = \frac{X_t}{t} \frac{1}{X_t} Y_t \to \frac{1}{\operatorname{E}[S_1]}\cdot\operatorname{E}[W_1]

almost surely (using the first result and using the law of large numbers on Y_t).

Renewal processes additionally have a property analogous to the central limit theorem:{{sfnp|Grimmett|Stirzaker|1992|p=394}}

:\frac{X_t-t/\mu}{\sqrt{t\sigma^2/\mu^3}} \to \mathcal{N}(0,1)

where \mu = \operatorname{E}[S_1] and \sigma^2 = \operatorname{Var}(S_1) are the mean and variance of the holding times.
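This convergence can be observed in simulation. The sketch below (holding times uniform on [0, 2] again, an illustrative choice with \mu = 1 and \sigma^2 = 1/3) standardizes X_t and checks that its first two moments are close to those of a standard normal:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(seed=1)

mu, sigma2, t = 1.0, 1.0 / 3.0, 500.0    # uniform(0, 2): mu = 1, sigma^2 = 1/3
n_paths, n_max = 5_000, 1_200

z = np.empty(n_paths)
for k in range(n_paths):
    jumps = np.cumsum(rng.uniform(0.0, 2.0, size=n_max))
    x_t = np.searchsorted(jumps, t, side="right")        # X_t on this path
    z[k] = (x_t - t / mu) / np.sqrt(t * sigma2 / mu**3)  # standardized count

print(z.mean(), z.var())   # both close to the N(0, 1) values 0 and 1
</syntaxhighlight>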

{{anchor|The inspection paradox}}

Inspection paradox

File:Inspection paradox.reetep.png

{{See also|List of paradoxes#Mathematics}}

A curious feature of renewal processes is that if we wait some predetermined time t and then observe how large the renewal interval containing t is, we should expect it to be typically larger than a renewal interval of average size.

Mathematically the inspection paradox states: for any t > 0 the renewal interval containing t is stochastically larger than the first renewal interval. That is, for all x > 0 and for all t > 0:

: \operatorname{P}(S_{X_t+1} > x) \geq \operatorname{P}(S_1>x) = 1-F_S(x)

where F_S is the cumulative distribution function of the IID holding times S_i. A vivid example is the bus waiting time paradox: for a given random distribution of bus arrivals, the average rider at a bus stop observes more delays than the average operator of the buses.

The resolution of the paradox is that our sampled distribution at time t is size-biased (see sampling bias), in that the likelihood an interval is chosen is proportional to its size. However, a renewal interval of average size is not size-biased.

'''Proof'''

Observe that the last jump time before t is J_{X_t}, and that the renewal interval containing t is [J_{X_t}, J_{X_t+1}], of length S_{X_t+1}. Then

:

\begin{align}

\operatorname{P}(S_{X_t+1}>x) & {} = \int_0^\infty \operatorname{P}(S_{X_t+1}>x \mid J_{X_t} = s) f_{J_{X_t}}(s) \, ds \\[12pt]

& {} = \int_0^\infty \operatorname{P}(S_{X_t+1}>x | S_{X_t+1}>t-s) f_{J_{X_t}}(s)\, ds \\[12pt]

& {} = \int_0^\infty \frac{\operatorname{P}(S_{X_t+1}>x \, , \, S_{X_t+1}>t-s)}{\operatorname{P}(S_{X_t+1}>t-s)} f_{J_{X_t}}(s) \, ds \\[12pt]

& {} = \int_0^\infty \frac{ 1-F(\max \{ x,t-s \}) }{1-F(t-s)} f_{J_{X_t}}(s) \, ds \\[12pt]

& {} = \int_0^\infty \min \left\{\frac{ 1-F(x) }{1-F(t-s)},\frac{ 1-F(t-s) }{1-F(t-s)}\right\} f_{J_{X_t}}(s) \, ds \\[12pt]

& {} = \int_0^\infty \min \left\{\frac{ 1-F(x) }{1-F(t-s)},1\right\} f_{J_{X_t}}(s) \, ds \\[12pt]

& {} \geq \int_0^\infty (1-F(x)) f_{J_{X_t}}(s) \, ds = 1-F(x) = \operatorname{P}(S_1>x),\\[12pt]

\end{align}

since both \frac{ 1-F(x) }{1-F(t-s)} and 1 are greater than or equal to 1-F(x) for all values of s.
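The size bias is easy to observe numerically. With holding times uniform on [0, 2] (an illustrative choice), \operatorname{E}[S_1] = 1, while the mean length of the interval covering a fixed large time t converges to the length-biased value \operatorname{E}[S_1^2]/\operatorname{E}[S_1] = 4/3:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(seed=3)

def covering_interval_length(t, n_max=2_000):
    s = rng.uniform(0.0, 2.0, size=n_max)        # E[S] = 1, E[S^2] = 4/3
    jumps = np.cumsum(s)                         # n_max large enough that J_{n_max} > t
    n = np.searchsorted(jumps, t, side="right")  # X_t
    return s[n]                                  # S_{X_t + 1}, the interval covering t

samples = [covering_interval_length(500.0) for _ in range(10_000)]
print(np.mean(samples))   # close to E[S^2]/E[S] = 4/3, not E[S] = 1
</syntaxhighlight>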

Superposition

Unless the renewal process is a Poisson process, the superposition (sum) of two independent renewal processes is not a renewal process.{{sfnp|Grimmett|Stirzaker|1992|p=405}} However, such processes can be described within a larger class of processes called Markov renewal processes.{{cite journal | last1 = Çinlar | first1 = Erhan | author-link1 = Erhan Cinlar | year = 1969 | title = Markov Renewal Theory | journal = Advances in Applied Probability | volume = 1 | issue = 2 | pages = 123–187 | publisher = Applied Probability Trust | jstor = 1426216| doi = 10.2307/1426216 }} Nonetheless, the cumulative distribution function of the first inter-event time in the superposition process is given by (formula 4.1 of Lawrence){{cite journal | last1 = Lawrence | first1 = A. J. | year = 1973 | title = Dependency of Intervals Between Events in Superposition Processes | journal = Journal of the Royal Statistical Society. Series B (Methodological) | volume = 35 | issue = 2 | pages = 306–315 | jstor=2984914| doi = 10.1111/j.2517-6161.1973.tb00960.x }}

:R(t) = 1 - \sum_{k=1}^K \frac{\alpha_k}{\sum_{l=1}^K \alpha_l} (1-R_k(t)) \prod_{j=1,j\neq k}^K \alpha_j \int_t^\infty (1-R_j(u))\,\text{d}u

where R_k(t) and \alpha_k > 0 are the CDF of the inter-event times and the arrival rate of process k.{{cite thesis | url = http://hal.inria.fr/hal-00676735 | title = Analysis of TTL-based Cache Networks | first1= Nicaise | last1= Choungmo Fofack | first2 = Philippe | last2 = Nain | first3 = Giovanni | last3= Neglia | first4= Don | last4=Towsley | author-link4=Don Towsley (computer scientist) | journal = Proceedings of 6th International Conference on Performance Evaluation Methodologies and Tools | date = 6 March 2012 | access-date = Nov 15, 2012| type = report }}
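As a consistency check, if both processes are Poisson with rates \alpha_1 and \alpha_2, then R_k(t) = 1 - e^{-\alpha_k t} and \int_t^\infty (1-R_j(u))\,\text{d}u = e^{-\alpha_j t}/\alpha_j, so for K = 2 the formula collapses to

:R(t) = 1 - \frac{\alpha_1 e^{-(\alpha_1+\alpha_2)t} + \alpha_2 e^{-(\alpha_1+\alpha_2)t}}{\alpha_1+\alpha_2} = 1 - e^{-(\alpha_1+\alpha_2)t},

the exponential CDF of rate \alpha_1 + \alpha_2, as expected: the superposition of independent Poisson processes is again a Poisson process.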

Example application

Eric the entrepreneur has n machines, each having an operational lifetime uniformly distributed between zero and two years. Eric may let each machine run until it fails with replacement cost €2600; alternatively he may replace a machine at any time while it is still functional at a cost of €200.

What is his optimal replacement policy?

'''Solution'''

The lifetime of the n machines can be modeled as n independent concurrent renewal-reward processes, so it is sufficient to consider the case n=1. Denote this process by (Y_t)_{t \geq 0}. The successive lifetimes S of the replacement machines are independent and identically distributed, so the optimal policy is the same for all replacement machines in the process.

If Eric decides at the start of a machine's life to replace it at time 0 < t < 2, but the machine happens to fail before that time, then the lifetime S of the machine is uniformly distributed on [0, t] and thus has expectation 0.5t. So the overall expected lifetime of the machine is:

:

\begin{align}

\operatorname{E}[S] & = \operatorname{E}[S \mid \text{fails before } t] \cdot \operatorname{P}[\text{fails before } t] + \operatorname{E}[S \mid \text{does not fail before } t] \cdot \operatorname{P}[\text{does not fail before } t] \\[6pt]

& = 0.5t\left(\frac{t}{2}\right) + t\left(\frac{2-t}{2}\right) = t - \frac{t^2}{4}

\end{align}

and the expected cost W per machine is:

:

\begin{align}

\operatorname{E}[W] & = \operatorname{E}[W \mid \text{fails before } t] \cdot \operatorname{P}(\text{fails before } t) + \operatorname{E}[W \mid \text{does not fail before } t]\cdot \operatorname{P}(\text{does not fail before } t) \\[6pt]

& = 2600(\frac{t}{2}) + 200(\frac{2-t}{2}) = 1200t + 200.

\end{align}

So by the strong law of large numbers, his long-term average cost per unit time is:

:

\frac{1}{t} Y_t \simeq \frac{\operatorname{E}[W]}{\operatorname{E}[S]}

= \frac{ 4(1200t + 200) }{ 4t - t^2 }

then differentiating with respect to t:

:

\frac{\partial}{\partial t} \frac{ 4(1200t + 200) }{ 4t - t^2 } = 4\frac{ (4t - t^2)(1200) - (4 - 2t)(1200t + 200) }{ (4t - t^2)^2 },

this implies that the turning points satisfy:

:

\begin{align}

0 & = (4t - t^2)(1200) - (4 - 2t)(1200t + 200) = 4800t - 1200t^2 -4800t - 800 + 2400t^2 + 400t \\[6pt]

& = -800 + 400t + 1200t^2,

\end{align}

and thus

:

0 = 3t^2 + t - 2 = (3t -2)(t+1).

We take the only solution t in [0, 2]: t = 2/3. This is indeed a minimum (and not a maximum), since the cost per unit time tends to infinity as t tends to zero: the cost decreases as t increases up to 2/3, and increases thereafter. Replacing each machine after 2/3 of a year is therefore Eric's optimal policy.
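The optimum is easy to verify numerically; the sketch below evaluates the long-run cost rate derived above on a grid of candidate replacement times:

<syntaxhighlight lang="python">
import numpy as np

t = np.linspace(0.01, 1.99, 1_000)                  # candidate replacement times (years)
cost_rate = 4 * (1200 * t + 200) / (4 * t - t**2)   # long-run cost per year, E[W]/E[S]

i = np.argmin(cost_rate)
print(t[i], cost_rate[i])   # approx. t = 2/3 years at approx. 1800 euros per year
</syntaxhighlight>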


Notes

{{Reflist}}

References

  • {{cite book |title=Renewal Theory|last=Cox|first=David|author-link=Sir David Cox (statistician)|year=1970|publisher=Methuen & Co.|location=London|isbn=0-412-20570-X|pages=142|ref=cox}}
  • {{cite journal |title=Renewal Theory From the Point of View of the Theory of Probability |first=J. L. |last=Doob |journal=Transactions of the American Mathematical Society |volume=63 |issue=3 |year=1948 |pages=422–438 |jstor=1990567 |url=https://www.ams.org/journals/tran/1948-063-03/S0002-9947-1948-0025098-8/S0002-9947-1948-0025098-8.pdf |doi=10.2307/1990567 |doi-access=free}}

  • {{cite book |title=An introduction to probability theory and its applications |volume=2 |edition=second |year=1971 |last=Feller|first=William|author-link=William Feller |publisher=Wiley}}
  • {{cite book | title=Probability and Random Processes | first2=D. R. | last2=Stirzaker | first1=G. R. | last1=Grimmett | author-link=Geoffrey Grimmett | year=1992 |edition=second | publisher=Oxford University Press|isbn=0198572220 }}
  • {{cite journal |title=Renewal Theory and Its Ramifications |first=Walter L. |last=Smith |journal=Journal of the Royal Statistical Society, Series B |volume=20 |issue=2 |year=1958 |pages=243–302 |doi=10.1111/j.2517-6161.1958.tb00294.x |jstor=2983891}}

  • {{cite journal |title=Renewal theory with fat-tailed distributed sojourn times: Typical versus rare |last1=Wang |first1=Wanli |last2=Schulz |first2=Johannes H. P. |last3=Deng |first3=Weihua |last4=Barkai |first4=Eli |journal=Physical Review E |volume=98 |issue=4 |year=2018 |pages=042139 |doi=10.1103/PhysRevE.98.042139 |arxiv=1809.05856 |bibcode=2018PhRvE..98d2139W |s2cid=54727926}}

{{Stochastic processes}}

{{Authority control}}

{{DEFAULTSORT:Renewal theory}}

Category:Point processes