The Three Classical Tests for Parameters

LRT

The likelihood ratio test (LRT) is a method for testing whether a constraint on the parameter is consistent with the data.
1 LRT for a Normal distribution with known $\sigma$: we test $H_{0}: \mu=\mu_{0}$ against $H_{1}: \mu \neq \mu_{0}$, with $\sigma=\sigma_{0}$ known.
$L(\mu)=\left(2 \pi \sigma_{0}^{2}\right)^{-n / 2} \exp \left(-\frac{1}{2 \sigma_{0}^{2}} \sum\left(x_{i}-\mu\right)^{2}\right)$
Under $H_{0}, L\left(\mu_{0}\right)=\left(2 \pi \sigma_{0}^{2}\right)^{-n / 2} \exp \left(-\frac{1}{2 \sigma_{0}^{2}} \sum\left(X_{i}-\mu_{0}\right)^{2}\right)$
For the denominator we must maximize $L(\mu)$ over $\mu$. We know $L(\mu)$ is maximized at $\hat{\mu}=\bar{X}$,
which gives $L(\hat{\mu})=\left(2 \pi \sigma_{0}^{2}\right)^{-n / 2} \exp \left(-\frac{1}{2 \sigma_{0}^{2}} \sum\left(X_{i}-\bar{X}\right)^{2}\right)$
$$\Lambda=\frac{L\left(\mu_{0}\right)}{L(\hat{\mu})}$$
Here $p-d=1$, so the statistic is asymptotically $\chi^{2}$ with 1 degree of freedom. Using the decomposition $\sum_{i}\left(X_{i}-\mu\right)^{2}=\sum_{i}\left(X_{i}-\bar{X}\right)^{2}+n(\bar{X}-\mu)^{2}$:
$$\begin{aligned} \Lambda &=\exp \left(-\frac{1}{2 \sigma_{0}^{2}}\left[\sum\left(X_{i}-\mu_{0}\right)^{2}-\sum\left(X_{i}-\bar{X}\right)^{2}\right]\right) \\ \Longrightarrow-2 \ln \Lambda &=\frac{1}{\sigma_{0}^{2}}\left[\sum\left(X_{i}-\mu_{0}\right)^{2}-\sum\left(X_{i}-\bar{X}\right)^{2}\right] \\ &=\frac{1}{\sigma_{0}^{2}} n\left(\bar{X}-\mu_{0}\right)^{2} \\ &=\left(\frac{\bar{X}-\mu_{0}}{\sigma_{0} / \sqrt{n}}\right)^{2} \sim \chi_{(d f=1)}^{2} \end{aligned}$$
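As a numeric sanity check, the closed form above can be evaluated directly; a minimal sketch in Python (the values of $n$, $\bar{X}$, $\mu_0$, $\sigma_0$ below are made-up illustration numbers):

```python
import math

def normal_lrt_stat(xbar, mu0, sigma0, n):
    """-2 ln Lambda for H0: mu = mu0 when sigma = sigma0 is known.

    By the sum-of-squares decomposition this collapses to
    n * (xbar - mu0)^2 / sigma0^2, the squared z statistic.
    """
    return n * (xbar - mu0) ** 2 / sigma0 ** 2

# Hypothetical numbers: n = 16 observations with sample mean 11.0,
# testing H0: mu = 10 with known sigma0 = 2.
stat = normal_lrt_stat(xbar=11.0, mu0=10, sigma0=2, n=16)
print(stat)         # 4.0
print(stat > 3.84)  # True -> reject H0 at the 5% level (chi^2_{1,0.95} = 3.84)
```

Because the statistic is exactly the squared z statistic, the LRT with known variance reproduces the usual two-sided z test.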
2 Example: LRT for an exponential distribution
$X_{i} \sim \operatorname{Exp}(\theta), E(X)=\theta$
Suppose $n = 100$ and $\bar{x} = 75$.
The hypothesis we wish to test is $H_{0}: \theta=60$ versus $H_{1}: \theta \neq 60$.
$L(\theta | \mathbf{x})=\frac{1}{\theta^{n}} \exp \left(-\frac{1}{\theta} \sum_{i=1}^{n} x_{i}\right)$. The MLE for $\theta$ is $\bar{x}$; plugging $\theta_0$ and $\bar{x}$ into the likelihood ratio, we get:
$\Lambda=\left(\frac{\bar{x}}{\theta_{0}}\right)^{n} \exp \left(n\left(1-\frac{\bar{x}}{\theta_{0}}\right)\right)$ [recall $\sum x_i = n\bar{x}$]
Plugging in the data, we get:
$-2 \log \Lambda=-2(100)\left(\log 75-\log 60+1-\frac{75}{60}\right)=5.37$
Since $5.37 > \chi_{1,0.95}^{2}=3.84$, we reject $H_0$.
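The arithmetic in this example is easy to reproduce; a short sketch using the numbers above ($n=100$, $\bar{x}=75$, $\theta_0=60$):

```python
import math

def exp_lrt_stat(xbar, theta0, n):
    """-2 log Lambda for H0: theta = theta0 under Exp(theta) with E[X] = theta.

    Expands Lambda = (xbar/theta0)^n * exp(n * (1 - xbar/theta0)).
    """
    return -2 * n * (math.log(xbar) - math.log(theta0) + 1 - xbar / theta0)

stat = exp_lrt_stat(xbar=75, theta0=60, n=100)
print(round(stat, 2))  # 5.37
print(stat > 3.84)     # True -> reject H0: theta = 60
```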
3 Constructing a confidence interval using the LRT:
We reject $H_{0}$ if $-2 \ln \Lambda>\chi_{1-\alpha,(d f=p-d)}^{2}$
Conversely, we will fail to reject if $-2 \ln \Lambda<\chi_{1-\alpha,(d f=p-d)}^{2}$
Hence the $95\%$ CI for $\theta$ is the set of solutions of $-2(100)\left(\log 75-\log \theta+1-\frac{75}{\theta}\right)<3.84$,
which is $(62.037, 91.841)$.
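The endpoints of this interval can be found numerically by solving $-2\ln\Lambda = 3.84$ on each side of $\bar{x}=75$; a sketch using plain bisection (the bracketing intervals $[40, 75]$ and $[75, 130]$ are assumptions chosen to contain the roots):

```python
import math

def neg2_log_lambda(theta, xbar=75, n=100):
    """-2 ln Lambda for the exponential example, as a function of theta."""
    return -2 * n * (math.log(xbar) - math.log(theta) + 1 - xbar / theta)

def bisect(f, lo, hi, tol=1e-8):
    """Root of f on [lo, hi]; assumes f(lo) and f(hi) have opposite signs."""
    flo = f(lo)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        fmid = f(mid)
        if (fmid > 0) == (flo > 0):
            lo, flo = mid, fmid
        else:
            hi = mid
    return (lo + hi) / 2

g = lambda theta: neg2_log_lambda(theta) - 3.84
lower = bisect(g, 40, 75)   # the statistic decreases to 0 at theta = xbar
upper = bisect(g, 75, 130)  # and increases again beyond it
print(round(lower, 3), round(upper, 3))  # roughly 62.037 91.841
```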

Wald test

1 The Wald statistic:
$$\frac{\hat{\theta}-\theta_{0}}{S E[\hat{\theta}]} \stackrel{D}{\rightarrow} N(0,1)$$
Wald proposed using the observed Fisher information to estimate $S E[\hat{\theta}]$: the second derivative of the negative log-likelihood, with $\theta$ replaced by $\hat{\theta}$.
2 example :
Suppose $X_{1}, X_{2}, \ldots X_{n} \stackrel{i i d}{\sim} \operatorname{Bern}(\theta)$
$l(\theta)=\sum X_{i} \log \theta+\left(n-\sum X_{i}\right) \log (1-\theta)$
$l^{\prime}(\theta)=S(\theta)=\frac{\sum X_{i}}{\theta}-\frac{n-\sum X_{i}}{1-\theta} \Longrightarrow \hat{\theta}=\bar{X}$
$l^{\prime \prime}(\theta)=-\frac{\sum X_{i}}{\theta^{2}}-\frac{n-\sum X_{i}}{(1-\theta)^{2}}$
Obs. Fisher Info $=-\left.l^{\prime \prime}(\theta)\right|_{\theta=\bar{X}}=\frac{n}{\bar{X}}+\frac{n}{1-\bar{X}}=\frac{n}{\bar{X}(1-\bar{X})}$
Wald Test Stat, $\frac{\bar{X}-\theta_{0}}{\sqrt{\frac{\bar{X}(1-\bar{X})}{n}}} \stackrel{D}{\longrightarrow} N(0,1)$
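A small sketch of this Wald test in Python (the data, 60 successes in 100 trials with $H_0: \theta = 0.5$, are hypothetical):

```python
import math

def wald_stat_bernoulli(xs, theta0):
    """Wald statistic for H0: theta = theta0 with Bernoulli data.

    The SE comes from the observed Fisher information evaluated at
    theta-hat = xbar, i.e. SE = sqrt(xbar * (1 - xbar) / n).
    """
    n = len(xs)
    xbar = sum(xs) / n
    return (xbar - theta0) / math.sqrt(xbar * (1 - xbar) / n)

# Hypothetical data: 60 successes in 100 Bernoulli trials, H0: theta = 0.5.
xs = [1] * 60 + [0] * 40
z = wald_stat_bernoulli(xs, theta0=0.5)
print(round(z, 2))    # 2.04
print(abs(z) > 1.96)  # True -> reject H0 at the 5% level
```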

Score test

1 The score statistic:
$$\frac{S\left(\theta_{0}\right)}{\sqrt{n I\left(\theta_{0}\right)}} \stackrel{D}{\rightarrow} N(0,1)$$
2 example:
Suppose $X_{1}, X_{2}, \ldots X_{n} \stackrel{i i d}{\sim} \operatorname{Bern}(\theta)$
$l(\theta)=\sum X_{i} \log \theta+\left(n-\sum X_{i}\right) \log (1-\theta)$
$l^{\prime}(\theta)=S(\theta)=\frac{\sum X_{i}}{\theta}-\frac{n-\sum X_{i}}{1-\theta}$
$S\left(\theta_{0}\right)=\frac{\sum X_{i}}{\theta_{0}}-\frac{n-\sum X_{i}}{1-\theta_{0}}=\frac{n\left(\bar{X}-\theta_{0}\right)}{\theta_{0}\left(1-\theta_{0}\right)}$
$l^{\prime \prime}(\theta)=-\frac{\sum X_{i}}{\theta^{2}}-\frac{n-\sum X_{i}}{(1-\theta)^{2}}$
Fisher Info $=-\left.E\left[l^{\prime \prime}(\theta)\right]\right|_{\theta=\theta_{0}}=\frac{n}{\theta_{0}}+\frac{n}{1-\theta_{0}}=\frac{n}{\theta_{0}\left(1-\theta_{0}\right)}$
Score Test Stat, $\frac{\bar{X}-\theta_{0}}{\sqrt{\frac{\theta_{0}(1-\theta_{0})}{n}}} \stackrel{D}{\longrightarrow} N(0,1)$
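A sketch of this score test (hypothetical data: 60 successes in 100 trials, $H_0: \theta = 0.5$); compared with the Wald statistic, only the denominator changes:

```python
import math

def score_stat_bernoulli(xs, theta0):
    """Score statistic for H0: theta = theta0 with Bernoulli data.

    Unlike the Wald test, the SE uses the Fisher information evaluated
    at theta0: SE = sqrt(theta0 * (1 - theta0) / n). Nothing beyond the
    sample mean needs to be fitted.
    """
    n = len(xs)
    xbar = sum(xs) / n
    return (xbar - theta0) / math.sqrt(theta0 * (1 - theta0) / n)

# Hypothetical data: 60 successes in 100 Bernoulli trials, H0: theta = 0.5.
xs = [1] * 60 + [0] * 40
z = score_stat_bernoulli(xs, theta0=0.5)
print(round(z, 2))  # 2.0
```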

Conclusion
Computationally, the Wald test is the easiest.
If $n$ is large enough, the three tests differ very little; if $n$ is small, the LRT is preferred.

Well-known distributions

Poisson: the LRT statistic is $2 n\left[\theta_{0}-\bar{x}+\bar{x} \log \left(\frac{\bar{x}}{\theta_{0}}\right)\right]$, compared against $\chi_{0.95, d f=1}^{2}$.
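A quick numeric check of this statistic (the values of $n$, $\bar{x}$, $\theta_0$ below are hypothetical):

```python
import math

def poisson_lrt_stat(xbar, theta0, n):
    """-2 log Lambda for H0: theta = theta0 under Poisson(theta)."""
    return 2 * n * (theta0 - xbar + xbar * math.log(xbar / theta0))

# Hypothetical data: n = 100 observations with sample mean 4.2, H0: theta = 4.
stat = poisson_lrt_stat(xbar=4.2, theta0=4.0, n=100)
print(round(stat, 2))  # 0.98
print(stat > 3.84)     # False -> fail to reject H0 at the 5% level
```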

GOF (goodness-of-fit: testing whether the data follow a specified distribution)

1. Example with a uniform sample:
Suppose we have 10000 random numbers generated from a Uniform[0,1] distribution. After dividing them into 10 equal-length bins we have the following counts.
Bin: 1 2 3 4 5 6 7 8 9 10
Count: 993 1044 1061 1021 1017 973 975 965 996 955

$\textbf{SOL:}$
(1) If the numbers are really from a Uniform[0,1] distribution, then the expected count for each cell is $10000 \times \frac{1}{10}=1000$
(2)test stat, $X^{2}=\left(\frac{(993-1000)^{2}}{1000}+\frac{(1044-1000)^{2}}{1000}+\ldots+\frac{(955-1000)^{2}}{1000}\right)=11.056$
(3) $p$-value $=1-\operatorname{pchisq}(11.056, \mathrm{df}=9)=0.27189$
We don’t have any evidence to reject the claim that these numbers are from a Uniform[0,1] distribution.
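The Pearson statistic above can be reproduced in a few lines (the 16.92 cutoff is the standard $\chi^2_{0.95, df=9}$ critical value):

```python
def chisq_gof_stat(observed, expected):
    """Pearson chi-square goodness-of-fit statistic: sum((O - E)^2 / E)."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

counts = [993, 1044, 1061, 1021, 1017, 973, 975, 965, 996, 955]
expected = [10000 / 10] * 10  # uniform: 1000 per bin
stat = chisq_gof_stat(counts, expected)
print(round(stat, 3))  # 11.056
# chi^2_{0.95, df=9} = 16.92, so the statistic is well below the cutoff.
print(stat < 16.92)    # True -> no evidence against uniformity
```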