
Generalized Minimum Bias Models

By Luyang Fu, Ph.D., and Cheng-sheng Peter Wu, FCAS, ASA, MAAA

Agenda

- History and Overview of Minimum Bias Method
- Generalized Minimum Bias Models
- Conclusions
- Mildenhall's Discussion and Our Responses
- Q&A

History on Minimum Bias

- A technique with a long history for actuaries:
  - Bailey and Simon (1960)
  - Bailey (1963)
  - Brown (1988)
  - Feldblum and Brosius (2002)
  - Covered in Exam 9
- Concepts:
  - Derive multivariate class plan parameters by minimizing a specified "bias" function
  - Use an "iterative" method to find the parameters

History on Minimum Bias

- Various bias functions have been proposed in the past for minimization
- Examples of multiplicative bias functions proposed in the past:

$$\text{Balanced Bias} = \sum_{i,j} w_{i,j}\,(r_{i,j} - x_i y_j)$$

$$\text{Squared Bias} = \sum_{i,j} w_{i,j}\,(r_{i,j} - x_i y_j)^2$$

$$\text{Chi-Squared Bias} = \sum_{i,j} \frac{w_{i,j}\,(r_{i,j} - x_i y_j)^2}{x_i y_j}$$
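To make the notation concrete, the following minimal numpy sketch evaluates the three bias functions for a candidate set of relativities. The 2x3 rating table and all values are hypothetical, chosen only for illustration.

```python
import numpy as np

# Hypothetical 2x3 rating table: rows are levels of factor i, columns of factor j.
w = np.array([[100.0, 80.0, 60.0],
              [ 50.0, 120.0, 90.0]])      # exposures w_ij
r = np.array([[1.10, 0.95, 1.30],
              [0.85, 1.05, 1.20]])        # observed relativities r_ij
x = np.array([1.00, 0.90])                # candidate relativities x_i
y = np.array([1.05, 0.95, 1.25])          # candidate relativities y_j

fitted = np.outer(x, y)                   # x_i * y_j for every cell
balanced    = (w * (r - fitted)).sum()
squared     = (w * (r - fitted) ** 2).sum()
chi_squared = (w * (r - fitted) ** 2 / fitted).sum()
print(balanced, squared, chi_squared)
```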

History on Minimum Bias

- Then, how do we determine the class plan parameters that minimize the bias function?
- One simple way is the commonly used "iterative" method for root finding:
  - Start with a random guess for the values of x_i and y_j
  - Calculate the next set of values for x_i and y_j using the root-finding formula for the bias function
  - Repeat the steps until the values converge
- Easy to understand and can be programmed in almost any tool

History on Minimum Bias

- For example, using the balanced bias function for the multiplicative model, set the bias to zero for each class:

$$\sum_{j} w_{i,j}\,(r_{i,j} - x_i y_j) = 0 \quad \text{for each } i$$

- Then:

$$\hat{x}_{i,t} = \frac{\sum_j w_{i,j}\, r_{i,j}}{\sum_j w_{i,j}\, \hat{y}_{j,t-1}}, \qquad \hat{y}_{j,t} = \frac{\sum_i w_{i,j}\, r_{i,j}}{\sum_i w_{i,j}\, \hat{x}_{i,t-1}}$$

History on Minimum Bias

- Past minimum bias models with the iterative method:

$$\hat{x}_{i,t} = \frac{\sum_j w_{i,j}\, r_{i,j}}{\sum_j w_{i,j}\, \hat{y}_{j,t-1}}$$

$$\hat{x}_{i,t} = \frac{\sum_j w_{i,j}^2\, r_{i,j}\, \hat{y}_{j,t-1}}{\sum_j w_{i,j}^2\, \hat{y}_{j,t-1}^2}$$

$$\hat{x}_{i,t} = \left[ \frac{\sum_j w_{i,j}\, r_{i,j}^2\, \hat{y}_{j,t-1}^{-1}}{\sum_j w_{i,j}\, \hat{y}_{j,t-1}} \right]^{1/2}$$

$$\hat{x}_{i,t} = \frac{1}{n} \sum_j r_{i,j}\, \hat{y}_{j,t-1}^{-1}$$

$$\hat{x}_{i,t} = \frac{\sum_j w_{i,j}\, r_{i,j}\, \hat{y}_{j,t-1}}{\sum_j w_{i,j}\, \hat{y}_{j,t-1}^2}$$

Issues with the Iterative Method

- Two questions regarding the "iterative" method:
  - How do we know that it will converge?
  - How fast/efficiently will it converge?
- Answers:
  - Numerical analysis and optimization textbooks
  - Mildenhall (1999)
- Efficiency is a less important issue given modern computing power

Other Issues with Minimum Bias

- What is the statistical meaning behind these models?
- More models to try?
- Which models to choose?

Summary on Minimum Bias

- A non-statistical approach
- Produces the best answers when the bias functions are minimized
- Uses an "iterative" root-finding method to determine the parameters
- Easy to understand and can be programmed in many tools

Minimum Bias and Statistical Models

- Brown (1988)
  - Showed that some minimum bias functions can be derived by maximizing the likelihood functions of corresponding distributions
  - Proposed several more minimum bias models
- Mildenhall (1999)
  - Proved that minimum bias models with linear bias functions are essentially the same as those from Generalized Linear Models (GLM)
  - Proposed two more minimum bias models

Minimum Bias and Statistical Models

- Past minimum bias models and their corresponding statistical models:

$$\hat{x}_{i,t} = \frac{\sum_j w_{i,j}\, r_{i,j}}{\sum_j w_{i,j}\, \hat{y}_{j,t-1}} \;\Rightarrow\; \text{Poisson}$$

$$\hat{x}_{i,t} = \frac{\sum_j w_{i,j}^2\, r_{i,j}\, \hat{y}_{j,t-1}}{\sum_j w_{i,j}^2\, \hat{y}_{j,t-1}^2} \;\Rightarrow\; \text{Normal}$$

$$\hat{x}_{i,t} = \left[ \frac{\sum_j w_{i,j}\, r_{i,j}^2\, \hat{y}_{j,t-1}^{-1}}{\sum_j w_{i,j}\, \hat{y}_{j,t-1}} \right]^{1/2} \;\Rightarrow\; \text{Chi-squared (Bailey and Simon)}$$

$$\hat{x}_{i,t} = \frac{1}{n} \sum_j r_{i,j}\, \hat{y}_{j,t-1}^{-1} \;\Rightarrow\; \text{Exponential}$$

$$\hat{x}_{i,t} = \frac{\sum_j w_{i,j}\, r_{i,j}\, \hat{y}_{j,t-1}}{\sum_j w_{i,j}\, \hat{y}_{j,t-1}^2} \;\Rightarrow\; \text{Least squares}$$

Statistical Models - GLM

- Advantages include:
  - Commercial software and built-in procedures are available
  - Statistical characteristics are well determined, such as confidence levels
  - Computational efficiency compared to the iterative procedure
- Issues include:
  - Requires more advanced statistical knowledge of GLM models
  - Lack of flexibility:
    - Relies on commercial software or built-in procedures
    - Assumes distributions from the exponential family
    - Limited distribution selections in popular statistical software
    - Difficult to program yourself

Motivations for Generalized Minimum Bias Models

- Can we unify all the past minimum bias models?
- Can we completely represent the wide range of GLM and statistical models using minimum bias models?
- Can we expand the model selection options beyond the currently used GLM and minimum bias models?
- Can we improve the efficiency of the iterative method?

Generalized Minimum Bias Models

- Starting with the basic multiplicative formula:

$$r_{i,j} = x_i\, y_j$$

- The alternative estimates of x and y:

$$\hat{x}_{i,j} = r_{i,j} / \hat{y}_j, \quad j = 1, 2, \ldots, n$$

$$\hat{y}_{j,i} = r_{i,j} / \hat{x}_i, \quad i = 1, 2, \ldots, m$$

- The next question is how to roll up $\hat{x}_{i,j}$ to $\hat{x}_i$, and $\hat{y}_{j,i}$ to $\hat{y}_j$

Possible Weighting Functions

- First and most obvious option: a straight average to roll up

$$\hat{x}_i = \frac{1}{n} \sum_j \hat{x}_{i,j} = \frac{1}{n} \sum_j \frac{r_{i,j}}{\hat{y}_j}$$

$$\hat{y}_j = \frac{1}{m} \sum_i \hat{y}_{j,i} = \frac{1}{m} \sum_i \frac{r_{i,j}}{\hat{x}_i}$$

- Using the straight average results in the Exponential model by Brown (1988)
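As a concrete reading of the straight-average roll-up, here is a small numpy sketch; the arrays are the same hypothetical values used in the earlier snippets.

```python
import numpy as np

# Hypothetical arrays as in the earlier sketch.
r = np.array([[1.10, 0.95, 1.30], [0.85, 1.05, 1.20]])   # observed relativities
x = np.array([1.00, 0.90])                                # current x_i estimates
y = np.array([1.05, 0.95, 1.25])                          # current y_j estimates

x_hat = (r / y).mean(axis=1)             # (1/n) * sum_j r[i, j] / y[j]
y_hat = (r / x[:, None]).mean(axis=0)    # (1/m) * sum_i r[i, j] / x[i]
```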

Possible Weighting Functions

- Another option is to use the relativity-adjusted exposure as the weighting function:

$$\hat{x}_i = \sum_j \left( \frac{w_{i,j}\, \hat{y}_j}{\sum_j w_{i,j}\, \hat{y}_j} \right) \hat{x}_{i,j} = \sum_j \left( \frac{w_{i,j}\, \hat{y}_j}{\sum_j w_{i,j}\, \hat{y}_j} \right) \frac{r_{i,j}}{\hat{y}_j} = \frac{\sum_j w_{i,j}\, r_{i,j}}{\sum_j w_{i,j}\, \hat{y}_j}$$

$$\hat{y}_j = \sum_i \left( \frac{w_{i,j}\, \hat{x}_i}{\sum_i w_{i,j}\, \hat{x}_i} \right) \hat{y}_{j,i} = \sum_i \left( \frac{w_{i,j}\, \hat{x}_i}{\sum_i w_{i,j}\, \hat{x}_i} \right) \frac{r_{i,j}}{\hat{x}_i} = \frac{\sum_i w_{i,j}\, r_{i,j}}{\sum_i w_{i,j}\, \hat{x}_i}$$

- This is the Bailey (1963) model, or the Poisson model by Brown (1988)

Possible Weighting Functions

- Another option: using the square of the relativity-adjusted exposure

$$\hat{x}_i = \sum_j \left( \frac{w_{i,j}^2\, \hat{y}_j^2}{\sum_j w_{i,j}^2\, \hat{y}_j^2} \right) \hat{x}_{i,j} = \frac{\sum_j w_{i,j}^2\, r_{i,j}\, \hat{y}_j}{\sum_j w_{i,j}^2\, \hat{y}_j^2}$$

$$\hat{y}_j = \sum_i \left( \frac{w_{i,j}^2\, \hat{x}_i^2}{\sum_i w_{i,j}^2\, \hat{x}_i^2} \right) \hat{y}_{j,i} = \frac{\sum_i w_{i,j}^2\, r_{i,j}\, \hat{x}_i}{\sum_i w_{i,j}^2\, \hat{x}_i^2}$$

- This is the Normal model by Brown (1988)

Possible Weighting Functions

- Another option: using the relativity-square-adjusted exposure

$$\hat{x}_i = \sum_j \left( \frac{w_{i,j}\, \hat{y}_j^2}{\sum_j w_{i,j}\, \hat{y}_j^2} \right) \hat{x}_{i,j} = \frac{\sum_j w_{i,j}\, r_{i,j}\, \hat{y}_j}{\sum_j w_{i,j}\, \hat{y}_j^2}$$

$$\hat{y}_j = \sum_i \left( \frac{w_{i,j}\, \hat{x}_i^2}{\sum_i w_{i,j}\, \hat{x}_i^2} \right) \hat{y}_{j,i} = \frac{\sum_i w_{i,j}\, r_{i,j}\, \hat{x}_i}{\sum_i w_{i,j}\, \hat{x}_i^2}$$

- This is the least-squares model by Brown (1988)

Generalized Minimum Bias Models

- So the key to generalization is to apply different "weighting functions" to roll up $\hat{x}_{i,j}$ to $\hat{x}_i$ and $\hat{y}_{j,i}$ to $\hat{y}_j$
- Propose a general weighting function with two factors, exposure and relativity: $W^p X^q$ and $W^p Y^q$
- Almost all minimum bias models published to date are special cases of GMBM(p, q)
- There are also more modeling options to choose from, since in theory there is no limitation on the (p, q) values to try in fitting data: comprehensive and flexible

2-parameter GMBM

- The 2-parameter GMBM with the exposure-and-relativity-adjusted weighting function is:

$$\hat{x}_i = \sum_j \left( \frac{w_{i,j}^p\, \hat{y}_j^q}{\sum_j w_{i,j}^p\, \hat{y}_j^q} \right) \hat{x}_{i,j} = \frac{\sum_j w_{i,j}^p\, r_{i,j}\, \hat{y}_j^{q-1}}{\sum_j w_{i,j}^p\, \hat{y}_j^q}$$

$$\hat{y}_j = \sum_i \left( \frac{w_{i,j}^p\, \hat{x}_i^q}{\sum_i w_{i,j}^p\, \hat{x}_i^q} \right) \hat{y}_{j,i} = \frac{\sum_i w_{i,j}^p\, r_{i,j}\, \hat{x}_i^{q-1}}{\sum_i w_{i,j}^p\, \hat{x}_i^q}$$

2-parameter GMBM vs. GLM

  p     q     GLM
  1    -1     Inverse Gaussian
  1     0     Gamma
  1     1     Poisson
  1     2     Normal
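As a cross-check of the p = 1, q = 1 row, a Poisson GLM with a log link and the exposures as weights should reproduce GMBM(1, 1). Below is a sketch using statsmodels (an assumed dependency; the 2x3 data are the hypothetical values used throughout).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

w = np.array([[100.0, 80.0, 60.0], [50.0, 120.0, 90.0]])
r = np.array([[1.10, 0.95, 1.30], [0.85, 1.05, 1.20]])

# Long format with dummy-coded class indicators (base levels dropped).
df = pd.DataFrame([(i, j, w[i, j], r[i, j])
                   for i in range(w.shape[0]) for j in range(w.shape[1])],
                  columns=["i", "j", "w", "r"])
X = sm.add_constant(pd.get_dummies(df[["i", "j"]].astype(str), drop_first=True).astype(float))

# Poisson GLM, log link by default; var_weights carry the exposures.
fit = sm.GLM(df["r"], X, family=sm.families.Poisson(), var_weights=df["w"]).fit()
print(np.exp(fit.params))   # multiplicative relativities; compare with gmbm(w, r, 1, 1)
```

The other rows correspond to swapping in the family with the matching variance function, which the next slide expresses in general form.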

2-parameter GMBM and GLM

- GMBM with p = 1 is the same as a GLM with the variance function

$$V(\mu) = \mu^{2-q}$$
- Additional special models: