Talk:Compound Poisson distribution

{{WikiProject banner shell|class=Start|

{{WikiProject Mathematics|importance = low}}

{{WikiProject Statistics| importance = low }}

}}

I assume that <math>\operatorname{E}[Y] = \lambda \operatorname{E}[X]</math>.

What is <math>\operatorname{Var}[Y]</math> in terms of the distribution of <math>X</math>? Say, if <math>X</math> has a gamma distribution.

== Some properties ==

:<math>\operatorname{E}[Y]=\operatorname{E}[\operatorname{E}[Y\mid N]]=\lambda \operatorname{E}[X]</math>

:<math>\operatorname{Var}[Y]=\operatorname{Var}[\operatorname{E}[Y\mid N]]+\operatorname{E}[\operatorname{Var}[Y\mid N]]=\lambda \left\{\operatorname{E}^2[X]+\operatorname{Var}[X]\right\} = \lambda \left\{\operatorname{E}^2[X]+\operatorname{E}[X^2]-\operatorname{E}^2[X]\right\} = \lambda \operatorname{E}[X^2]</math>
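This answers the gamma question above. For instance (assuming the shape/scale parametrization <math>X \sim \operatorname{Gamma}(\alpha,\theta)</math>, so that <math>\operatorname{E}[X]=\alpha\theta</math> and <math>\operatorname{Var}[X]=\alpha\theta^2</math>):

:<math>\operatorname{E}[X^2]=\operatorname{Var}[X]+\operatorname{E}^2[X]=\alpha\theta^2+\alpha^2\theta^2=\alpha(\alpha+1)\theta^2, \qquad \operatorname{Var}[Y]=\lambda\,\alpha(\alpha+1)\theta^2.</math>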

The cumulant generating function:

:<math>K_Y(t)=\ln \operatorname{E}[e^{tY}]=\ln \operatorname{E}[\operatorname{E}[e^{tY}\mid N]]=\ln \operatorname{E}[e^{N K_X(t)}]=K_N(K_X(t))</math>
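In particular, when <math>N</math> is Poisson with mean <math>\lambda</math> we have <math>K_N(t)=\lambda(e^t-1)</math>, so the composition above gives

:<math>K_Y(t)=\lambda\left(e^{K_X(t)}-1\right)=\lambda\left(M_X(t)-1\right)=\lambda\sum_{j\ge 1}\frac{\operatorname{E}[X^j]}{j!}\,t^j,</math>

where <math>M_X</math> is the moment generating function of <math>X</math>; reading off coefficients, the <math>j</math>-th cumulant of <math>Y</math> is <math>\lambda\operatorname{E}[X^j]</math>, which for <math>\lambda=1</math> is exactly the observation in the reply below.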

:One could add to the above, that if N has a Poisson distribution with expected value 1, then the moments of X are the cumulants of Y. Michael Hardy 20:39, 23 Apr 2005 (UTC)

::The cumulant generating function treatment above is now in the article. Melcombe (talk) 15:26, 6 August 2008 (UTC)

== Why is <math>Y(0) \ne 0</math>? ==

I would have thought that the process should start at zero? Just thinking in terms of (shudder) actuarial science, a claims process would make very little sense if it started with a claim at time zero. What I'm proposing is to change the definition to the one given on the page for 'Compound Poisson Process'. — Preceding unsigned comment added by Fladnaese (talkcontribs) 18:54, 26 May 2011 (UTC)

:Fixed up for this point. JA(000)Davidson (talk) 08:40, 27 May 2011 (UTC)

I see that a citation is needed for the relationship between the cumulants of the compound Poisson distribution <math>Y</math> and the moments of the random variables <math>X_i</math>. Back in 1976, I proved this result, namely:

For <math>j > 0</math>, <math>\kappa_j = \lambda m_j</math>, where:

- <math>\kappa_j</math> are the cumulants of <math>Y</math>

- <math>m_j</math> are the moments of the <math>X_i</math>

- <math>\lambda</math> is the parameter of the Poisson distribution

I made use of the characteristic function in this proof. Is my proof of interest as a citation? If so, I can send the reference number and a PDF of the paper I wrote.
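Not a substitute for a published citation, of course, but the relation is easy to sanity-check numerically. A minimal numpy sketch (the choice <math>X \sim \operatorname{Exp}(1)</math>, with raw moments <math>m_j = j!</math>, is just an illustration):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
lam = 3.0        # Poisson parameter
n = 10**6        # number of simulated values of Y

# Draw N_i ~ Poisson(lam), then Y_i = X_1 + ... + X_{N_i} with X ~ Exp(1).
N = rng.poisson(lam, size=n)
X = rng.exponential(scale=1.0, size=N.sum())
groups = np.repeat(np.arange(n), N)              # which Y each X term belongs to
Y = np.bincount(groups, weights=X, minlength=n)  # group sums; Y_i = 0 when N_i = 0

# For Exp(1): m_1 = 1, m_2 = 2, m_3 = 6, so kappa_j should be lam * m_j.
k1 = Y.mean()                 # first cumulant = mean
k2 = Y.var()                  # second cumulant = variance
k3 = ((Y - k1) ** 3).mean()   # third cumulant = third central moment
print(k1, "vs", lam * 1)      # ~ 3
print(k2, "vs", lam * 2)      # ~ 6
print(k3, "vs", lam * 6)      # ~ 18
</syntaxhighlight>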

David LeCorney (talk) 12:03, 8 June 2012 (UTC)

== Notation ==

In the development of the properties section, the notation <math>\operatorname{E}_N(\cdot)</math> appears, indicating with respect to which distribution the expectation is to be calculated.

I suggest continuing to use this explicit notation throughout the derivation, for clarity.

Would this be right (my expertise is in construction):

:<math>\varphi_Y(t) = \operatorname{E}_Y\left(e^{itY}\right) = \operatorname{E}_N\left( \left(\operatorname{E}_X\left(e^{itX}\right)\right)^N \right) = \operatorname{E}_N\left((\varphi_X(t))^N\right),</math>

What should be clarified is what justifies the second equality, moving from <math>\operatorname{E}_Y(\cdot)</math> to <math>\operatorname{E}_N(\cdot)</math>; I believe it is the law of total expectation.
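Spelling that step out (conditioning on <math>N</math>, and using that the <math>X_k</math> are i.i.d. and independent of <math>N</math>):

:<math>\operatorname{E}\left[e^{itY}\right]=\operatorname{E}_N\left[\operatorname{E}\left[e^{itY}\mid N\right]\right], \qquad \operatorname{E}\left[e^{itY}\mid N=n\right]=\prod_{k=1}^{n}\operatorname{E}\left[e^{itX_k}\right]=(\varphi_X(t))^n.</math>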

Also, this sentence could be clarified: “and hence, using the probability-generating function of the Poisson distribution,”

Does it mean that you use the PGF to manipulate the previous equation to get the result? I don't see how; maybe it should be a bit more explicit.

I do get to the result by applying the definition of <math>\operatorname{E}(\cdot)</math> and then identifying terms.
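For completeness, here is the PGF step as I understand it: the probability generating function of <math>N \sim \operatorname{Poisson}(\lambda)</math> is <math>G_N(z)=\operatorname{E}\left[z^N\right]=e^{\lambda(z-1)}</math>, and the last expression above is exactly <math>G_N</math> evaluated at <math>z=\varphi_X(t)</math>, so

:<math>\varphi_Y(t)=G_N(\varphi_X(t))=\exp\left(\lambda\left(\varphi_X(t)-1\right)\right).</math>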

P.S. This could/should also be done on pages such as "law of total variance", etc.

Scharleb (talk) 22:41, 18 December 2020 (UTC)