FORMULAS FOR EXAM 3

Rules of the expected value and the variance:
(i) For constants a, b and c and the random variables X and Y,
E(aX + bY + c) = aE(X) + bE(Y) + c
Var(aX + bY + c) = a²Var(X) + b²Var(Y) + 2ab Cov(X, Y)
Var(X) = E(X²) − [E(X)]²
(ii) For constants a1 to an and the random variables X1 to Xn,
$$\mathrm{Var}\!\left(\sum_{i=1}^{n} a_i X_i\right) = \sum_{i=1}^{n} a_i^2\,\mathrm{Var}(X_i) + 2\sum_{i<j} a_i a_j\,\mathrm{Cov}(X_i, X_j)$$
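As a quick numerical check of the linear-combination rule above (an illustration only; the constants a, b, c and the bivariate normal population below are made up, not part of the sheet), the simulated variance of aX + bY + c should agree with a²Var(X) + b²Var(Y) + 2ab Cov(X, Y):

import numpy as np

rng = np.random.default_rng(0)
a, b, c = 2.0, -3.0, 5.0                       # arbitrary constants for the check

# Draw correlated (X, Y) pairs from a bivariate normal with known covariance.
cov = np.array([[4.0, 1.5],
                [1.5, 9.0]])                   # Var(X)=4, Var(Y)=9, Cov(X,Y)=1.5
xy = rng.multivariate_normal(mean=[1.0, 2.0], cov=cov, size=200_000)
x, y = xy[:, 0], xy[:, 1]

simulated = np.var(a * x + b * y + c)          # variance of the linear combination
formula = a**2 * cov[0, 0] + b**2 * cov[1, 1] + 2 * a * b * cov[0, 1]
print(simulated, formula)                      # the two values should be close (≈ 79)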

Random Sample: The random variables X1, X2, ..., Xn are said to form a random sample of size n if (i) the Xi's are independent random variables, and (ii) every Xi has the same probability distribution.

If X1, X2, ..., Xn form a random sample of size n with the mean μ and the variance σ², the sampling distribution of $\bar{x}$ has the mean μ and the variance σ²/n, the sampling distribution of $\sum_{i=1}^{n} x_i$ has the mean nμ and the variance nσ², and so on.

Central Limit Theorem: Let X1, X2, ..., Xn be a random sample from a distribution with mean μ and variance σ². Then if n is sufficiently large (n > 30), $\bar{x}$ has approximately a normal distribution with mean μ and variance σ²/n, and $\sum_{i=1}^{n} x_i$ has approximately a normal distribution with mean nμ and variance nσ². The larger the value of n, the better the approximation.

Point estimate of a parameter θ: a single number that can be regarded as the most plausible value of θ. A point estimator θ̂ is the statistic used to produce that point estimate.

Unbiased estimator: θ̂ = θ + error of estimation. θ̂ is an unbiased estimator of θ if E(θ̂) = θ for every possible value of θ. Otherwise, it is biased and Bias = E(θ̂) − θ.
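As an illustration of bias (a sketch with made-up normal data, not part of the sheet), the sample variance with divisor n − 1 is unbiased for σ², while the divisor-n version that appears in case (1) below is biased downward; averaging each estimator over many samples approximates its expected value:

import numpy as np

rng = np.random.default_rng(2)
mu, sigma2, n, reps = 0.0, 4.0, 10, 100_000      # hypothetical population and sample size

data = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))
s2_unbiased = data.var(axis=1, ddof=1)           # divisor n - 1
s2_divisor_n = data.var(axis=1, ddof=0)          # divisor n

print(s2_unbiased.mean())    # close to sigma2 = 4, so E(estimator) = sigma2 (unbiased)
print(s2_divisor_n.mean())   # close to (n-1)/n * sigma2 = 3.6, so biased downward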

Minimum Variance Unbiased Estimator (MVUE): Among all estimators of θ that are unbiased, choose the one that has minimum variance. The resulting θ̂ is the MVUE.

The Invariance Principle: Let $\hat{\theta}_1, \hat{\theta}_2, \ldots, \hat{\theta}_m$ be the MLE's of the parameters $\theta_1, \theta_2, \ldots, \theta_m$. Then the MLE of any function $h(\theta_1, \theta_2, \ldots, \theta_m)$ of these parameters is the function $h(\hat{\theta}_1, \hat{\theta}_2, \ldots, \hat{\theta}_m)$ of the MLE's.
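For instance (a standard illustration, not stated on the sheet): since the MLE of σ² for a normal sample is $\sum_{i=1}^{n}(x_i - \bar{x})^2 / n$, as in case (1) below, the invariance principle gives the MLE of σ as its square root:
$$\hat{\sigma} = \sqrt{\hat{\sigma}^2} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2}.$$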

(1) Let X1, ..., Xn be a random sample of normally distributed random variables with mean μ and standard deviation σ. $\bar{x}$ is the method of moments and the maximum likelihood estimator of μ, and $\sum_{i=1}^{n}(x_i - \bar{x})^2 / n = (n-1)s^2 / n$ is the method of moments and the maximum likelihood estimate of σ².
(2) Let X1, ..., Xn be a random sample of exponentially distributed random variables with parameter λ. $1/\bar{x}$ is the method of moments and the maximum likelihood estimator of λ.
(3) Let X1, ..., Xn be a random sample of binomially distributed random variables with parameter p. X/n is the method of moments and the maximum likelihood estimator of p.
(4) Let X1, ..., Xn be a random sample of Poisson distributed random variables with parameter λ. $\bar{x}$ is the method of moments and the maximum likelihood estimator of λ.
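As a reminder of where the estimator in case (2) comes from (the algebra below is a brief sketch of the standard derivation, not text from the sheet), the likelihood of an exponential sample with rate λ is maximized at $\hat{\lambda} = 1/\bar{x}$:
$$L(\lambda) = \prod_{i=1}^{n} \lambda e^{-\lambda x_i} = \lambda^{n} e^{-\lambda \sum x_i}, \qquad \ln L(\lambda) = n\ln\lambda - \lambda\sum_{i=1}^{n} x_i,$$
$$\frac{d}{d\lambda}\ln L(\lambda) = \frac{n}{\lambda} - \sum_{i=1}^{n} x_i = 0 \;\Longrightarrow\; \hat{\lambda} = \frac{n}{\sum_{i=1}^{n} x_i} = \frac{1}{\bar{x}}.$$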

The Method of Moments (MME) (one unknown parameter case)
Calculate E(X), then set it equal to $\bar{x}$. Solve this one equation in one unknown for the unknown parameter θ.

The Method of Maximum Likelihood (MLE) (one unknown parameter case)
The likelihood function is the joint pmf or pdf of X, viewed as a function of the unknown θ when the x's are observed. The maximum likelihood estimates are the θ values that maximize the likelihood function. First determine the likelihood function. Then take the natural logarithm of the likelihood function. After this, take the first derivative with respect to each unknown θ and equate it to zero. Solve this one equation in one unknown for the unknown parameter θ.

One-sided (one-tailed) tests:
Lower tailed (left-sided): H0: population characteristic ≥ claimed constant value; Ha: population characteristic < claimed constant value.
Upper tailed (right-sided): H0: population characteristic ≤ claimed constant value; Ha: population characteristic > claimed constant value.
Two-sided (two-tailed) test: H0: population characteristic = claimed constant value; Ha: population characteristic ≠ claimed constant value.

Significance level, α = P(Type I error) = P(reject H0 when it is true)
β = P(Type II error) = P(fail to reject H0 when it is false)
Power = 1 − β = 1 − P(Type II error) = P(reject H0 when it is false)
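The following sketch (hypothetical numbers throughout; nothing here comes from the sheet) computes α and the power of an upper-tailed z-test on a normal mean with σ known, using the fact that $\bar{x}$ is normal with standard deviation σ/√n:

import numpy as np
from scipy import stats

# Hypothetical setup: H0: mu <= 50 vs Ha: mu > 50, sigma known.
mu0, sigma, n, alpha = 50.0, 6.0, 36, 0.05

# Reject H0 when z = (xbar - mu0) / (sigma/sqrt(n)) >= z_alpha.
z_alpha = stats.norm.ppf(1 - alpha)
cutoff = mu0 + z_alpha * sigma / np.sqrt(n)       # rejection region in terms of xbar

# Power at a particular true mean mu_a > mu0: P(reject H0 | mu = mu_a).
mu_a = 52.0
power = 1 - stats.norm.cdf(cutoff, loc=mu_a, scale=sigma / np.sqrt(n))
beta = 1 - power                                  # P(Type II error) at mu_a

print(z_alpha, cutoff, power, beta)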

Hypothesis testing and Confidence Intervals for the Population mean, μ

μ₀ is the claimed constant, $\bar{x}$ is the sample mean, and $\sigma/\sqrt{n}$ and $s/\sqrt{n}$ are the population standard deviation of $\bar{x}$ and its estimator, respectively.

Characteristics | Test statistic | Confidence interval
σ is known, normal distribution | $z = \dfrac{\bar{x} - \mu_0}{\sigma/\sqrt{n}}$ | $\left( \bar{x} - z_{\alpha/2}\,\dfrac{\sigma}{\sqrt{n}},\ \bar{x} + z_{\alpha/2}\,\dfrac{\sigma}{\sqrt{n}} \right)$
σ is unknown, large sample (n > 40), unknown distribution | $z = \dfrac{\bar{x} - \mu_0}{s/\sqrt{n}}$ | $\left( \bar{x} - z_{\alpha/2}\,\dfrac{s}{\sqrt{n}},\ \bar{x} + z_{\alpha/2}\,\dfrac{s}{\sqrt{n}} \right)$
σ is unknown, normal population distribution with small sample (n ≤ 40), degrees of freedom v = n − 1 | $t = \dfrac{\bar{x} - \mu_0}{s/\sqrt{n}}$ | $\left( \bar{x} - t_{\alpha/2;\,n-1}\,\dfrac{s}{\sqrt{n}},\ \bar{x} + t_{\alpha/2;\,n-1}\,\dfrac{s}{\sqrt{n}} \right)$
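As a usage sketch of the third row of this table (made-up data; the claimed mean 50 and the 95% level are assumptions for illustration), the one-sample t statistic and the two-sided confidence interval can be computed directly:

import numpy as np
from scipy import stats

# Hypothetical small sample from a (roughly) normal population, sigma unknown.
x = np.array([48.2, 51.0, 49.5, 52.3, 50.1, 47.8, 50.9, 49.4])
mu0, alpha = 50.0, 0.05

n = x.size
xbar, s = x.mean(), x.std(ddof=1)

# Test statistic from the third row of the table (t with v = n - 1 df).
t_stat = (xbar - mu0) / (s / np.sqrt(n))

# Two-sided 100(1 - alpha)% confidence interval for mu.
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
ci = (xbar - t_crit * s / np.sqrt(n), xbar + t_crit * s / np.sqrt(n))

print(t_stat, ci)
# The same numbers can be cross-checked with stats.ttest_1samp(x, popmean=mu0).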

Sample size: $n = \left(\dfrac{z_{\alpha/2}\,\sigma}{B}\right)^{2} = \left(\dfrac{2 z_{\alpha/2}\,\sigma}{w}\right)^{2}$, where B is the bound on the error of estimation and the width of the confidence interval is w = 2B.
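A quick numeric application of this sample-size formula (illustrative values only: σ = 10, bound B = 2, 95% confidence; round the result up):

import math
from scipy import stats

sigma, B, alpha = 10.0, 2.0, 0.05          # assumed values for illustration
z = stats.norm.ppf(1 - alpha / 2)          # z_{alpha/2} ≈ 1.96
n = (z * sigma / B) ** 2
print(n, math.ceil(n))                     # ≈ 96.04, so use n = 97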

Decision: In each case, reject H0 if P-value ≤ α and fail to reject H0 (accept H0) if P-value > α.
If the computed test statistic is z*: lower tailed test, P-value = P(z ≤ z*); upper tailed test, P-value = P(z ≥ z*); two tailed test, P-value = 2P(z ≥ |z*|) = 2P(z ≤ −|z*|).
If the computed test statistic is t*: lower tailed test, P-value = P(t ≤ t*); upper tailed test, P-value = P(t ≥ t*); two tailed test, P-value = 2P(t ≥ |t*|) = 2P(t ≤ −|t*|).
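These P-value formulas map directly onto the normal and t cdf's; the sketch below (hypothetical z*, t*, and degrees of freedom) evaluates each case:

from scipy import stats

z_star, t_star, df = 2.10, -1.75, 14      # hypothetical computed statistics and t df

# z-based P-values
p_lower_z = stats.norm.cdf(z_star)                    # P(z <= z*)
p_upper_z = 1 - stats.norm.cdf(z_star)                # P(z >= z*)
p_two_z = 2 * (1 - stats.norm.cdf(abs(z_star)))       # 2 P(z >= |z*|)

# t-based P-values with v = n - 1 degrees of freedom
p_lower_t = stats.t.cdf(t_star, df=df)                # P(t <= t*)
p_upper_t = 1 - stats.t.cdf(t_star, df=df)            # P(t >= t*)
p_two_t = 2 * (1 - stats.t.cdf(abs(t_star), df=df))   # 2 P(t >= |t*|)

print(p_lower_z, p_upper_z, p_two_z)
print(p_lower_t, p_upper_t, p_two_t)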