Power law distribution


Using a parameter \gamma (the exponent), the power-law distribution can be defined as follows:

f(k)=P(X=k)=Ck^{-\gamma}\propto k^{-\gamma}


Here, C is a normalization constant.

One of the features of the power-law distribution is that, if you plot it with both axes on a logarithmic scale, the distribution appears as a straight line: taking the logarithm of both sides gives \log f(k) = \log C - \gamma \log k, which is linear in \log k with slope -\gamma.
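As a quick illustration, here is a minimal Python sketch that draws f(k) on log-log axes; the values \gamma = 2.5 and C = 1 are arbitrary choices for the plot, not taken from the text.

import numpy as np
import matplotlib.pyplot as plt

gamma, C = 2.5, 1.0          # illustrative values, not fixed by the text
k = np.logspace(0, 3, 100)   # k from 1 to 1000
f = C * k**(-gamma)          # f(k) = C k^(-gamma)

plt.loglog(k, f)             # both axes on a log scale: a straight line
plt.xlabel("k")
plt.ylabel("f(k)")
plt.title("Power law on log-log axes (slope = -gamma)")
plt.show()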

There are several subtle points related to the normalization constant, the mean value, and the variance.

First, let’s consider the normalization constant. Let m be the minimum value of the variable k of the distribution f(k), and let M be its maximum value. Then, for \gamma \neq 1,

\int_m^M f(k) dk = \frac{C}{\gamma-1}\Bigl(\frac{1}{m^{\gamma-1}}-\frac{1}{M^{\gamma-1}}\Bigr)


By the normalization condition, the value of the above integral must be equal to one. Here, we note that the value of M is important for the normalization: unless \gamma is greater than one, it is not possible to normalize the function f(k) when the value M goes to infinity.
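Indeed, taking the limit M \to \infty in the integral above, the M-dependent term vanishes only for \gamma > 1:

\lim_{M\to\infty}\int_m^M f(k) dk = \frac{C}{\gamma-1}\,\frac{1}{m^{\gamma-1}} \quad (\gamma > 1),

whereas for \gamma \le 1 the integral grows without bound as M increases.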

By using the previous equation, the normalization constant can be computed as follows:

C = (\gamma-1)\Bigl(\frac{1}{m^{\gamma-1}}-\frac{1}{M^{\gamma-1}}\Bigr)^{-1}
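As a sanity check, here is a small Python sketch verifying numerically that this C makes the distribution integrate to one; the values \gamma = 2.5, m = 1, M = 1000 are arbitrary illustrative choices.

from scipy.integrate import quad

gamma, m, M = 2.5, 1.0, 1000.0   # arbitrary illustrative values
# same formula as above, using 1/m^(gamma-1) = m^(1-gamma)
C = (gamma - 1) / (m**(1 - gamma) - M**(1 - gamma))

total, _ = quad(lambda k: C * k**(-gamma), m, M)
print(total)   # prints approximately 1.0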


Let’s compute the expectation value (mean value) and variance. The computation is very similar to the case of the normalization constant.

E[X]=\int_m^M kf(k) dk = \frac{C}{\gamma-2}\Bigl(\frac{1}{m^{\gamma-2}}-\frac{1}{M^{\gamma-2}}\Bigr)  

V[X] = \int_m^M k^2 f(k) dk - (E[X])^2 = \frac{C}{\gamma-3}\Bigl(\frac{1}{m^{\gamma-3}}-\frac{1}{M^{\gamma-3}}\Bigr) - (E[X])^2

From the two equations above, we should remark the following: when M goes to infinity, we also need \gamma > 2 to have a finite expectation value (mean value) and \gamma > 3 to have a finite variance.
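The closed-form expressions can again be checked numerically. The following sketch compares them against direct numerical integration; the values \gamma = 3.5, m = 1, M = 1000 are arbitrary choices with \gamma > 3, so that both moments are finite.

from scipy.integrate import quad

gamma, m, M = 3.5, 1.0, 1000.0   # arbitrary values with gamma > 3
C = (gamma - 1) / (m**(1 - gamma) - M**(1 - gamma))

# closed-form mean and variance from the formulas above
mean = C / (gamma - 2) * (m**(2 - gamma) - M**(2 - gamma))
second_moment = C / (gamma - 3) * (m**(3 - gamma) - M**(3 - gamma))
variance = second_moment - mean**2

mean_num, _ = quad(lambda k: k * C * k**(-gamma), m, M)
second_num, _ = quad(lambda k: k**2 * C * k**(-gamma), m, M)
print(mean, mean_num)                       # the two means agree
print(variance, second_num - mean_num**2)   # the two variances agree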

In summary, the behavior of the tail of the distribution is important both for the proper normalization of the distribution and for the existence of finite values of the expectation and the variance. As \gamma becomes progressively smaller (i.e., the tail becomes fatter), first the variance diverges, then the expectation value diverges, and finally even the normalization constant diverges, which means that the distribution itself is not well-defined. Concretely:

(1) when the exponent satisfies \gamma > 3, both the expectation and the variance are finite (no problem).

(2) when 3 > \gamma > 2, the expectation is finite but the variance diverges (see the simulation sketch after this list).

(3) when 2 > \gamma > 1, both the expectation and the variance diverge.

(4) when 1 > \gamma, the normalization constant diverges and the distribution itself is not well-defined.
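To illustrate case (2), here is a minimal simulation sketch with the arbitrary choices \gamma = 2.5 and m = 1; numpy's Pareto sampler with shape parameter a = \gamma - 1 draws from a density proportional to k^{-\gamma} on k \ge m. The sample mean settles near its finite limit, while the sample variance keeps growing with the sample size.

import numpy as np

rng = np.random.default_rng(0)
gamma, m = 2.5, 1.0              # arbitrary values with 2 < gamma < 3

for n in [10**3, 10**5, 10**7]:
    # classical Pareto sample with density ~ k^(-gamma) for k >= m
    samples = (rng.pareto(gamma - 1, size=n) + 1) * m
    print(n, samples.mean(), samples.var())
# The sample mean converges (here to (gamma-1)/(gamma-2)*m = 3),
# while the sample variance keeps increasing: it diverges as M -> infinity.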
