Random Variables and Distributions - Key Proof

We present a rigorous derivation of the transformation formula for PDFs, a fundamental result for computing distributions of functions of random variables.

Transformation of PDFs

Theorem

Let $X$ be a continuous random variable with PDF $f_X$, and let $Y = g(X)$ where $g$ is strictly monotonic (either strictly increasing or strictly decreasing) and differentiable. Then $Y$ has PDF:

$$f_Y(y) = f_X(g^{-1}(y)) \left|\frac{d}{dy} g^{-1}(y)\right|$$

on the range of $Y$, and $f_Y(y) = 0$ elsewhere.

Proof

We consider two cases based on whether $g$ is increasing or decreasing.

Case 1: $g$ strictly increasing

Since $g$ is strictly increasing and continuous, it has an inverse $g^{-1}$, and for any $y$:

$$P(Y \leq y) = P(g(X) \leq y) = P(X \leq g^{-1}(y))$$

The CDF of $Y$ is:

$$F_Y(y) = P(X \leq g^{-1}(y)) = F_X(g^{-1}(y))$$

Differentiating both sides with respect to $y$ using the chain rule:

$$f_Y(y) = \frac{d}{dy} F_Y(y) = \frac{d}{dy} F_X(g^{-1}(y))$$

$$= f_X(g^{-1}(y)) \cdot \frac{d}{dy} g^{-1}(y)$$

Since $g$ is increasing, $g^{-1}$ is also increasing, so $\frac{d}{dy} g^{-1}(y) > 0$. Therefore:

$$f_Y(y) = f_X(g^{-1}(y)) \left|\frac{d}{dy} g^{-1}(y)\right|$$

Case 2: $g$ strictly decreasing

For strictly decreasing $g$:

$$P(Y \leq y) = P(g(X) \leq y) = P(X \geq g^{-1}(y)) = 1 - F_X(g^{-1}(y))$$

The CDF of $Y$ is:

$$F_Y(y) = 1 - F_X(g^{-1}(y))$$

Differentiating:

$$f_Y(y) = -f_X(g^{-1}(y)) \cdot \frac{d}{dy} g^{-1}(y)$$

Since $g$ is decreasing, $g^{-1}$ is also decreasing, so $\frac{d}{dy} g^{-1}(y) < 0$, and the minus sign cancels the negative derivative. Therefore:

$$f_Y(y) = f_X(g^{-1}(y)) \left|\frac{d}{dy} g^{-1}(y)\right|$$

In both cases, we obtain the same formula. □
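As a quick numerical sanity check (illustrative, not part of the proof), take the strictly decreasing map $g(x) = e^{-x}$ applied to $X \sim \mathrm{Exp}(1)$. Here $g^{-1}(y) = -\ln y$ and $\left|\frac{d}{dy} g^{-1}(y)\right| = \frac{1}{y}$, and the formula predicts $Y = e^{-X} \sim \mathrm{Uniform}(0,1)$. A minimal sketch using only the standard library:

```python
import math

def f_X(x):
    # PDF of Exp(1): e^{-x} for x > 0
    return math.exp(-x) if x > 0 else 0.0

def f_Y(y):
    # Transformation formula for g(x) = e^{-x}, strictly decreasing on (0, inf):
    # g^{-1}(y) = -ln(y), |d/dy g^{-1}(y)| = 1/y, valid for y in (0, 1)
    x = -math.log(y)
    return f_X(x) * abs(-1.0 / y)

# f_X(-ln y) * (1/y) = y * (1/y) = 1, so Y is Uniform(0, 1)
for y in [0.1, 0.25, 0.5, 0.9]:
    assert abs(f_Y(y) - 1.0) < 1e-12
```

The cancellation $e^{-(-\ln y)} \cdot \frac{1}{y} = y \cdot \frac{1}{y} = 1$ makes the uniform density appear exactly.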

Alternative Form Using Change of Variables

An equivalent formulation uses $x = g^{-1}(y)$, so $y = g(x)$:

If we write $h(y) = g^{-1}(y)$, then by the inverse function theorem:

$$\frac{dh}{dy} = \frac{1}{g'(h(y))}$$

Therefore:

$$f_Y(y) = f_X(x) \left|\frac{dx}{dy}\right| = \frac{f_X(x)}{|g'(x)|} \bigg|_{x = g^{-1}(y)}$$
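The two forms should give identical densities. A small check (a sketch, standard library only) with $g(x) = x^3$ and $X \sim \mathcal{N}(0,1)$, comparing the inverse-derivative form against the $1/|g'(x)|$ form, where the inverse's derivative is approximated by a central difference:

```python
import math

def phi(x):
    # standard normal PDF
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def g(x):        return x ** 3
def g_inv(y):    return math.copysign(abs(y) ** (1.0 / 3.0), y)
def g_prime(x):  return 3 * x * x

def f_Y_inverse_form(y, h=1e-6):
    # f_X(g^{-1}(y)) * |d/dy g^{-1}(y)|, derivative via central difference
    dinv = (g_inv(y + h) - g_inv(y - h)) / (2 * h)
    return phi(g_inv(y)) * abs(dinv)

def f_Y_jacobian_form(y):
    # f_X(x) / |g'(x)| evaluated at x = g^{-1}(y)
    x = g_inv(y)
    return phi(x) / abs(g_prime(x))

for y in [0.5, 1.0, 2.0, 8.0]:
    assert abs(f_Y_inverse_form(y) - f_Y_jacobian_form(y)) < 1e-6
```

The agreement is up to finite-difference error; analytically the two expressions are the same function.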

Example

Linear Transformation: Let $Y = aX + b$ where $a \neq 0$.

Here $g(x) = ax + b$, so $g^{-1}(y) = \frac{y-b}{a}$ and $\frac{d}{dy} g^{-1}(y) = \frac{1}{a}$.

Therefore:

$$f_Y(y) = f_X\left(\frac{y-b}{a}\right) \cdot \frac{1}{|a|}$$

Verification for the Normal: If $X \sim \mathcal{N}(0, 1)$ and $Y = \sigma X + \mu$ (so $a = \sigma$, $b = \mu$), then:

$$f_Y(y) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(y-\mu)^2}{2\sigma^2}\right)$$

Indeed, $Y \sim \mathcal{N}(\mu, \sigma^2)$ as expected. ✓
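The linear case is easy to verify in code: apply the transformation formula with $a = \sigma$, $b = \mu$ to the standard normal density and compare against the $\mathcal{N}(\mu, \sigma^2)$ density written out directly (a sketch, standard library only):

```python
import math

def phi(x):
    # standard normal N(0, 1) PDF
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def normal_pdf(y, mu, sigma):
    # N(mu, sigma^2) PDF written out directly
    z = (y - mu) / sigma
    return math.exp(-z * z / 2) / (math.sqrt(2 * math.pi) * sigma)

def f_Y(y, mu, sigma):
    # Transformation formula for Y = sigma*X + mu:
    # f_Y(y) = f_X((y - b) / a) / |a| with a = sigma, b = mu
    return phi((y - mu) / sigma) / abs(sigma)

mu, sigma = 1.5, 2.0
for y in [-3.0, 0.0, 1.5, 4.0]:
    assert abs(f_Y(y, mu, sigma) - normal_pdf(y, mu, sigma)) < 1e-15
```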

Example

Square Transformation: Let $Y = X^2$ where $X$ has PDF $f_X$ supported on $(0, \infty)$.

Here $g(x) = x^2$ (strictly increasing on $(0,\infty)$), so $g^{-1}(y) = \sqrt{y}$ and:

$$\frac{d}{dy} g^{-1}(y) = \frac{d}{dy} \sqrt{y} = \frac{1}{2\sqrt{y}}$$

Therefore:

$$f_Y(y) = f_X(\sqrt{y}) \cdot \frac{1}{2\sqrt{y}}, \quad y > 0$$

Special Case: If $X \sim \mathcal{N}(0,1)$, then $Y = X^2 \sim \chi^2_1$ (chi-squared with 1 degree of freedom). Note that here $X$ is supported on all of $\mathbb{R}$ rather than $(0, \infty)$, so both branches $x = \pm\sqrt{y}$ contribute: $f_Y(y) = \frac{f_X(\sqrt{y}) + f_X(-\sqrt{y})}{2\sqrt{y}}$. By symmetry of the standard normal this equals $\frac{f_X(\sqrt{y})}{\sqrt{y}} = \frac{1}{\sqrt{2\pi y}} e^{-y/2}$, the $\chi^2_1$ density.
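This special case can be checked against the $\chi^2_1$ density $f(y) = \frac{y^{-1/2} e^{-y/2}}{2^{1/2}\,\Gamma(1/2)}$, using $\Gamma(1/2) = \sqrt{\pi}$ (a quick numerical check, standard library only):

```python
import math

def phi(x):
    # standard normal PDF
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def f_Y_transform(y):
    # Two-branch transformation for Y = X^2 with X ~ N(0, 1):
    # both x = +sqrt(y) and x = -sqrt(y) map to y
    s = math.sqrt(y)
    return (phi(s) + phi(-s)) / (2 * s)

def chi2_1_pdf(y):
    # chi-squared density with 1 degree of freedom
    return y ** -0.5 * math.exp(-y / 2) / (math.sqrt(2) * math.gamma(0.5))

for y in [0.1, 0.5, 1.0, 3.0]:
    assert abs(f_Y_transform(y) - chi2_1_pdf(y)) < 1e-12
```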

Remark

The transformation formula is fundamental in probability theory and statistics. It generalizes to multivariate settings via the Jacobian determinant and underpins many theoretical results. The absolute value ensures the PDF remains non-negative regardless of whether the transformation is increasing or decreasing.