Practical Applications of Statistics and Operational Research for Actuaries

Page 1 |

(1) A nontechnical introduction to Applied Statistics and Operations Research. (2) Practical applications and examples for financial reporting and asset management. (3) Overview of the potential use in actuarial applications of these m

Page 2 |

viewed as an introduction to selected topics in a number of disciplines with emphasis on fundamental principles and concepts. The educational goal should be to present a wide range of topics in mathematics which have, or potentially have, useful applications to practical actuarial problems, or which help the actuary to communicate more effectively with those in allied professions. . . . We wish to stress that our work was guided by the principle that topics included on the syllabus should have demonstrable applicability to actuarial problems. Moreover, our recommendations concerning the extent to which any topic is developed in the course of reading recognize our belief that the Society's educational role relating to the development of mathematical skills is to train generalists and not specialists." The point is that a practicing actuary may face a number of problems. The syllabus, particularly as it relates to Applied Statistics and Operations Research, does not necessarily make you an expert in these areas, but it gives you an awareness of the tools that are available. So if you encounter a problem which has to do with maximization or projections, etc., you will know that an area of expertise exists a

Page 3 |

and techniques to the various and specialized areas of actuarial practice. (5) To develop the actuary's sense of inquisitiveness so as to encour

Page 4 |

leads to cohere

Page 5 |

Fit = Overall mean + Effect of a factor(s) which exists at several levels

Example: Yijk = M + Li + Bj + Eijk

Claims incurred by insured auto K in a fixed period = Overall mean + Deviations for territories + Deviations for driver classes + Error

Chang, L., and Fairley, W. (1979), "Pricing Automobile Insurance Under a Multivariate Classification," Journal of Risk and Insurance, 46, 75-93.

As an example, suppose that the response function that you are interested in is claims incurred by insured auto "K". K is a label that we will put on a particular auto, and let's suppose that the factors for determining the response function may be territorial divisions or driving record divisions. The analysis of variance is a device, as in this example, for analyzing a response variable that is made up of a constant and other terms that can bump around at several different levels.

C. Regression

The second of those models is regression, which is probably a non-informative name. It goes back to Francis Galton, one of those remarkable Victorians who seemed to be able to dabble successfully in everything. The difference between regression and analysis of variance is really very small. The main point is that the explanatory variables no longer have to be confined to just a few levels. They can take on a continuum of values. An example is loss ratios for autos in various groups, which might be a constant, plus terms that have to do with the horsepower of the automobile, the size of the hometown and the driving record. And, as a matter of fact, there is a paper by Hilary Seal in the eighteenth Transactions of the International Congress of Actuaries addressing this topic.

Fit = Function of several variables where the value of the variables need not be confined to a few levels
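The additive decomposition above can be sketched in a few lines of code. The claim amounts below are hypothetical, and a real analysis would have many observations per cell; the point is only how the overall mean, factor effects and residuals fit together.

```python
# A minimal sketch of the additive ANOVA decomposition:
# Claims[i][j] = overall mean + territory effect i + driver class effect j + residual
# (hypothetical claim amounts; a real analysis would have many claims per cell)

claims = [
    [120.0, 150.0, 180.0],   # territory 1, driver classes A, B, C
    [100.0, 130.0, 160.0],   # territory 2
    [ 80.0, 110.0, 140.0],   # territory 3
]

n_terr = len(claims)
n_class = len(claims[0])

overall = sum(sum(row) for row in claims) / (n_terr * n_class)

# Effects are deviations of the row/column means from the overall mean
terr_effect = [sum(row) / n_class - overall for row in claims]
class_effect = [sum(claims[i][j] for i in range(n_terr)) / n_terr - overall
                for j in range(n_class)]

# Residual = observed - (overall + territory effect + class effect)
resid = [[claims[i][j] - (overall + terr_effect[i] + class_effect[j])
          for j in range(n_class)] for i in range(n_terr)]
```

With this particular data the factors happen to be exactly additive, so every residual is zero; with real claims the residuals carry the "error" term of the model.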

Page 6 |

Example: Loss Ratio of Auto Insurance for Groups = Constant + Terms for horsepower, mileage, size of home city, driving record, etc. + Error

Seal, H. L. (1964), "The Use of Multiple Regression in Risk Classification Based on Proportionate Losses," 18th TICA, 2, 659-669.

or transformed:

Yi = log(Loss Ratio i) = B0 + Σj BjXij + ei

Loss ratios do not fit well into the regression model, but that is all right since you can transform them. Transformation is one of the ways that you can expand the range
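As a sketch of such a transformed fit, here is an ordinary least-squares line through log loss ratios against a single explanatory variable. The figures are hypothetical, and Seal's analysis used several variables; a one-variable fit just keeps the arithmetic visible.

```python
import math

# Hypothetical loss ratios and a single explanatory variable (horsepower);
# a real analysis would use several variables, as in Seal's paper.
horsepower = [80.0, 100.0, 120.0, 140.0, 160.0]
loss_ratio = [0.50, 0.55, 0.61, 0.67, 0.74]

# Transform: y_i = log(loss ratio), then fit y = b0 + b1 * x by least squares
y = [math.log(r) for r in loss_ratio]
x = horsepower
n = len(x)
xbar = sum(x) / n
ybar = sum(y) / n
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
     sum((xi - xbar) ** 2 for xi in x)
b0 = ybar - b1 * xbar

# Undo the transform to get a fitted loss ratio, e.g. at 120 horsepower
fitted_120 = math.exp(b0 + b1 * 120.0)
```

Because the hypothetical loss ratios grow at a roughly constant percentage per unit of horsepower, the log transform makes the relationship nearly linear.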

Page 7 |

Example: Average paid claim in quarter T = Function of average paid claims in preceding quarters + Error

Cummins, J. D. and Powell, A. (1980), "The Performance of Alternative Models for Forecasting Automobile Insurance Paid Claim Costs," ASTIN Bulletin, 11, 91-106.

If you are interested in forecasting average paid claims, you will want to read a paper by Dave Cummins and Alyn Powell. In that paper they examine several time series models in which the fit is the time series type. They also examine some regression models where the response variable is the average paid cl
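A minimal version of such a time series fit is an autoregression of each quarter's average paid claim on the prior quarter. The figures below are hypothetical, and Cummins and Powell examined far richer models; this only illustrates the mechanics.

```python
# A minimal AR(1) sketch: forecast next quarter's average paid claim
# from the current quarter (hypothetical data with a steady upward drift).
paid = [100.0, 104.0, 108.2, 112.5, 117.0, 121.7]  # avg paid claims by quarter

# Estimate the lag-1 coefficient by least squares through the origin:
# paid[t] ~ phi * paid[t-1]
num = sum(paid[t] * paid[t - 1] for t in range(1, len(paid)))
den = sum(paid[t - 1] ** 2 for t in range(1, len(paid)))
phi = num / den

forecast = phi * paid[-1]   # one-quarter-ahead forecast
```

With this data the estimated phi is close to 1.04, i.e., roughly 4% growth per quarter.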

Page 8 |

the Council of Economic Advisors, we might use different models to try to control the economy than the models we could use if we were interested in simply short-term prediction to protect ourselves. The idea in this coherent process is first of all to tentatively identify a model. As you might guess, the tricks to doing that are partially graphic. You will want to plot your data in many different ways to try to get an idea as to what model might work. There are also analytic tools, some of which you already know: computing coefficients of correlation, computing coefficients of auto-correlation, and correlation of successive or lagged items of the same series. There are other tricks, both graphic and analytic, to help you tentatively identify a model. The next thing to do is to estimate that model. That is the black box of this process. That is what the computer does, and does well. Part of the growth in Applied Statistics in recent years occurred because we can now estim
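One of the identification tools mentioned above, the sample autocorrelation at lag k, can be computed directly. The data here are illustrative; in practice it is the pattern of cutoff or decay in these coefficients across lags that suggests a tentative model.

```python
# Sample autocorrelation at lag k -- one of the analytic tools used to
# tentatively identify a time series model (illustrative data).
def autocorr(series, k):
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[t] - mean) * (series[t - k] - mean) for t in range(k, n))
    return cov / var

# A series with a repeating up-down pattern, so successive values are related
data = [1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0, 2.0, 3.0, 4.0, 3.0, 2.0]
r1 = autocorr(data, 1)   # positive lag-1 autocorrelation
```

Plotting r1, r2, r3, ... against the lag gives the correlogram that, together with the raw data plots, guides model identification.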

Page 9 |

D. A. (1973) "Discussion of Time Series Analysis and Forecasting," TSA XXV Part 2, D210-D223.) We could use this same technique to estimate sales. Frank Reynolds, almost eleven years ago, had a discussion which shows the use of time series models in forecasting life insurance sales, but those models are a little bit more complicated than the ones I have shown you. They have a seasonal component. (Reynolds, F. G. (1973) "Discussion of Time Series Analysis and Forecasting," TSA XXV Part 1, 303-321.) I used to work for a Midwestern company and our sales did bunch up in the fall because that is when the crops were harvested. Such facts can be built into the time series models very easily. We can explore society and nature itself in the insurance world. For example, there is a paper by Peter Ellis that uses time series models to estimate monthly auto deaths in the United States. The purpose of Mr. Ellis' paper was to measure the effectiveness or possible noneffectiveness of the 55 m.p.h. speed limit. He constructed a time series model of monthly data and found out, as you probably already would have guessed, that the 55 m.p.h. speed limit was a roaring success in reducing auto deaths. (Ellis, P. M. (1977) "Motor Vehicle Mortality Reductions since the Energy Crisis," Journal of Risk and Insurance, 44.) Other examples are the use of regression analysis in the analysis of causes of mortality or transition probabilities. For example, the transition rate from one state to another, such as from having or not having cancer, may be influenced by a lot of explanatory variables. Regression may be the way that we can learn more about what influences the response rate. You will see another example of that in the case of radiation in a book published by the National Academy of Sciences, where the explanatory variables are the dose of radiation, the age of the person at the time of treatment and the sex of the person.
We have tried to build a case based on competition, educational coherence and staying in the intellectual mainstream, as to why actuaries need to understand up-to-date applied statistics tools. Karl Pearson, one of the founders of statistics, entitled his most famous book The Grammar of Science. Statistics is the grammar or structure of science. After you master these three tools, are you done? No, because new tools are coming and will continue to come. Anytime we have data to analyze, that is the business of statistics. Since, as actuaries, we analyze data, then for good or for ill, we are involved with statistics.

MR. EDWARD L. ROBBINS: I was asked to speak on the use of statistical tools for auditing with an emphasis on sampling and modeling because I am employed by an accounting firm and am familiar with the subject. I emphasize familiarity and not expertise for two reasons. The first reason is that I have not been at my company that long and, secondly, we have a division exclusively devoted to the statistical applications of the audit function. They are called Statistical Audit Specialists and they belong to the audit division of the firm. They are professionals like the other accountants and actuaries of the firm. They are required to undergo continuing education like the other professionals of the firm and they basically perform four functions: (1) gi

Page 10 |

Page 11 |

representative sample in order to estimate a number or an array of numbers. Put into statistical terminology, modeling is an attempt to get the expected value of something given an extremely stratified sample. What is a stratified sample? Leaving precise definitions aside, it is a sample where pains are taken to be sure that the sample is representative of each major category in the population. Typically, when an actuary does modeling, he makes an effort to select sample elements so that the sample is drawn proportionately from the major population categories. Efforts are then made to gross up the model into total portfolio numbers. If, for example, the model is to be used for projections or for GAAP reserves, it is important for the actuary to have a high comfort level with the model. By tying into portfolio totals, such as reconciling with statutory reserves, the actuary can achieve that comfort level.

I have discussed auditing and modeling as two prime applications of good statistical sampling techniques. Let's jump into the techniques themselves. There has been little quantifying of the statistical sampling processes employed by most actuaries. The sampling has mostly been designed by feel instead of by more formal techniques. Now this is not all bad. Research is expensive, and an actuary who is well grounded in some of the principles involved may have gut feelings as to the sample selection which may give pretty good results. It certainly saves a lot of time, and there are many concepts which defy quantification. Otherwise you might be doing too much research for the accuracy you get.

Certain rules of thumb of good sampling are the following: (1) Cost and time should be minimized while representativeness of the sample should be maximized. This is something like smoothness and fit; they tend to pull in opposite directions. (2) Once an element is being sampled, as many attributes of that element as possible should be tested.
In other words, once you have an application folder out, test as many relevant things in that application folder as would be useful to you. (3) Over-emphasize your sampling effort in categories or blocks where the degree of internal variation is likely to be high. For example, if you are doing an actuarial model of two blocks where one block of business has an issue year range of 20 years and the other has an issue year range of 5 years, then you want to sample the first of the two blocks more.

There are some basic mathematical formulas that bear out these precepts and that also give you an actual quantification of these three precepts. First, let me define stratified sampling; it is breaking your population into two or more components, each of which is more homogeneous than the population at large. A proportionate stratified sample is the selection from each component of a number of sample elements in the same proportion that the component bears to the total population. For example, you have a stratum A and a stratum B that make up the entire population. If the population consists 60% of stratum A and 40% of stratum B, then your sample should also be chosen in that proportion.

Generally, the degree of improvement going from a random sample of the population to a stratified sample can be very significant if two things occur: (1) the strata have stratum means which are significantly apart from each other and (2) the sampling is random within each stratum and proportionate to the size of the population strata. Expressed mathematically, the degree
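The 60/40 example above can be sketched directly. The population identifiers are made up; the point is only that the sample is drawn in the same proportions as the strata.

```python
import random

# Proportionate stratified sampling sketch: stratum A is 60% of the
# population and stratum B is 40%, so a sample of 50 is drawn 30/20.
random.seed(0)
population = {
    "A": [f"A{i}" for i in range(600)],   # hypothetical policy identifiers
    "B": [f"B{i}" for i in range(400)],
}
total = sum(len(v) for v in population.values())
sample_size = 50

sample = {}
for name, stratum in population.items():
    n_h = round(sample_size * len(stratum) / total)  # proportionate allocation
    sample[name] = random.sample(stratum, n_h)       # random within the stratum
```

Sampling randomly within each stratum, in proportion to stratum size, is exactly condition (2) of the improvement result quoted in the text.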

Page 12 |

of improvement in the variance of the estimate, moving from random sampling to stratified sampling, is:

(1/n) Σh (nh/n)(x̄h - x̄)²

where:
x̄h = sample mean in stratum h.
x̄ = overall sample mean.
n = number of elements in the entire sample.
E(x) = expected value of x.

Thus the improvement is proportionate to the variance of the stratum sample means about the total sample mean. This expression is the variance of your stratum means around your estimate of the population mean, multiplied by 1/n.

The variance-minimizing allocation of the sample to the strata is:

nh = n · NhSh / Σh NhSh

where:
n = total sample size.
nh = sample size of stratum h. Thus, Σh nh = n.
N = total population.
Nh = total size of stratum h. Thus, Σh Nh = N.
Sh = standard deviation of an element in stratum h.

What this shows is that your stratum sample size varies with the number of the population in your stratum, "Nh". It also varies with the standard deviation of an element in a particular stratum, "Sh". Your total sample size, "n", times that weighting factor, gives your stratum sample size, "nh".

You could go a step further and say, "Some things cost me more than others to sample." One example is a complicated reserve vs. a simple reserve in the audit selection situation. You want to minimize your total cost and you
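The allocation formula is easy to compute. The stratum sizes and standard deviations below are hypothetical; notice how the more variable stratum B draws most of the sample even though it is the smaller stratum.

```python
# Variance-minimizing allocation: n_h proportional to N_h * S_h
# (hypothetical stratum sizes and standard deviations).
N = {"A": 6000, "B": 4000}    # stratum population sizes
S = {"A": 10.0, "B": 40.0}    # std. deviation of an element in each stratum
n = 100                       # total sample size

weight = {h: N[h] * S[h] for h in N}
total_w = sum(weight.values())
n_h = {h: n * weight[h] / total_w for h in N}   # stratum sample sizes
```

Here the weights are 60,000 for A and 160,000 for B, so roughly 27 sample elements go to stratum A and 73 to stratum B.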

Page 13 |

also want to maximize your precision. Then how do you calculate your optimum stratum sample size? Suppose that your total cost is both an out-of-pocket cost and an implicit cost. Then:

Total Cost = a + Σh chnh + Var(est.)

where:
a = Fixed cost.
ch = Marginal cost of sampling stratum h.
Var(est.) = Variance of the estimator, the implicit cost.

If this is your cost function, a neat expression for your optimum stratum sample size falls out. You add one additional term to the formula above. Total cost is minimized when:

nh = n · (NhSh/√ch) / Σh (NhSh/√ch)

where ch = cost of sampling an element in stratum h. Thus, nh is proportionate to NhSh/√ch, and NhSh/√ch is the weighting factor. See Exhibit 3 for the proof. So now you can see that the larger the stratum size, the larger the variation within the stratum, and

Page 14 |

A. If total out-of-pocket cost is given: C = a + Σh chnh. Then:

n = (C - a) · Σh (NhSh/√ch) / Σh (NhSh·√ch)

B. If the variance of the estimator is given:

n
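These cost-adjusted formulas can be checked numerically. The stratum sizes, standard deviations and unit costs below are hypothetical; the check at the end confirms that the resulting allocation spends exactly the assumed variable budget C - a.

```python
import math

# Optimum allocation when sampling costs differ by stratum:
# n_h proportional to N_h * S_h / sqrt(c_h)  (hypothetical figures).
N = {"A": 6000, "B": 4000}   # stratum population sizes
S = {"A": 10.0, "B": 40.0}   # std. deviation of an element in each stratum
c = {"A": 1.0, "B": 4.0}     # stratum B elements cost 4x as much to sample

budget = 250.0               # variable sampling budget, C - a

w = {h: N[h] * S[h] / math.sqrt(c[h]) for h in N}

# Total sample size supported by the budget (formula A above):
n = budget * sum(w.values()) / sum(N[h] * S[h] * math.sqrt(c[h]) for h in N)

# Stratum sample sizes, proportional to the weights:
n_h = {h: n * w[h] / sum(w.values()) for h in N}

# Sanity check: the allocation spends the whole variable budget
spent = sum(c[h] * n_h[h] for h in N)
```

Relative to the equal-cost allocation, the expensive stratum B gets a somewhat smaller share of the sample, which is exactly the effect of the 1/√ch factor.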

Page 15 |

In the last practical example, let's take a typical modeling process, in which the actuary has gone through his in force and decided what central points are representative of plans, issue ages and issue years. Then he does a few more things to "true" this model up. Using the example of a moment ago, the actuary might take more elements from those blocks of business with greater issue year ranges. He will also reclassify the non-model plans into the model plans. In other words, a plan that has six policies in force might be classified into a similar plan that is in the model. In doing that, he is going to eventually recreate the population in force. His final step is to try to get as close as possible to the total in force, statutory reserve, and annualized premiums. Finally he will want to see that the model, when properly grossed up, reproduces all three of these things in a reasonable manner. What has he done in this process? He has made sure that every plan is represented. That is pretty good stratification. He has made sure that plans that encompass a wider category are better represented by more plan groups, and he has eliminated an early bias by reproducing the aggregate in force, reserve, and annualized premium figures.

I have a feeling that I have not told many of you anything new. I have been redundant, but I wanted to tie some of these tried and true traditional techniques to specific formulas, because a lot of actuarial feel is borne out by these formulas. In any case, whatever the application, certain items are really not going to be worth the trouble to precisely pin down, such as the total cost function or the implicit cost of a larger variance of your estimator. So, sometimes it is not worth the trouble to be too precise as long as you know you are going in the right direction. I will give you one additional reference, if you are interested. It is George C.
Campbell's "Problems With Sampling Procedures for Reserve Valuation," Journal of the American Statistical Association, Volume 43, 1948.

MR. ROBERT P. CLANCY: I will admit that it is pretty ironic that I am here speaking to you today about actuarial applications of Operations Research topics that are on the Part 3 syllabus. It is really all my wife's fault. A few years ago when Operations Research was placed on the syllabus, my wife was in charge of recruiting instructors for the actuarial classes sponsored by the Actuaries Club of Boston. At the time, she was having a very difficult time trying to find any instructors who were both qualified and interested in teaching. About that time I made a foolish slip of the tongue that ten years earlier I had taken an Operations Research course in college. The next thing I knew, I had been recruited and was struggling to come up with some course materials. Finally I had to learn it well enough to teach. Little did I realize that this was going to turn out to be one of the better moves of my life. Since that time, I have moved into the investment operation of my company; investment strategy provides a number of applications for Operations Research techniques. I firmly believe that such investment strategy applications of Operations Research techniques are going to become quite important as our industry becomes more involved in the development and risk management of interest sensitive products. At any rate, I hope that you will see something here today which piques your interest so that you will want to go back and get a recent Part 3 student to investigate it in greater detail.

Let me start with a simple definition of Operations Research. Operations Research is using quantitative techniques in the decision making process. I hope this sounds natural for actuaries. The topics currently on the Part 3 syllabus include decision analysis, linear programming, dynamic programming,

Page 16 |

project sche

Page 17 |

[Figure 2: decision tree for the aggressive vs. conservative product design decision. Branches show the early promotion results, the choice to "go for it" or cut losses, and outcomes including profit or loss from the product, a big promotion, no promotion for a long time, and design losses.]

Page 19 |

One final comment on decision analysis relates to a subject called utility theory. Now, utility theory is very useful for quantifying results that are not easily quantifiable. For example, consider an ambitious young actuary who has been assigned to develop a new product. The actuary is debating between an aggressive and a conservative product design. The actuary might construct a decision tree to choose whether to go with an aggressive product design, assuming they would monitor the early results of promoting the product. (See Figure 2.) If the early results come back looking bad for the aggressive product, then the company can decide at that point to cut their losses. Or they could decide to "go for it" by marketing the product extensively, and hope that it turns out to be a winner as opposed to a loser. If the early results are good, chances are the actuary would decide to "go for it" and hope that the product turns out to be a winner.

Note that the actuary has computed the outcomes, not so much in terms of profitability for the company, but in terms of career ramifications. In this case, the career ramifications are getting a big promotion, getting fired, getting no promotion for a very long time, or getting a nominal promotion. Now these outcomes may not seem very quantifiable, but they are quantifiable relative to one another. Utility theory achieves this by asking a series of questions such as: would you rather have a job in which you have a guarantee of getting a nominal promotion, or would you rather have a high risk job that gives you a 50% chance of getting a big promotion and a 50% chance of getting fired? By playing around with the probability of getting fired, utility theory can allow an individual to quantify his feeling about a nominal promotion relative to the other possible outcomes. Once these outcomes are quantified relative to one another, they can be entered into the decision tree and an optimal decision strategy can be derived.
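Once the outcomes have been placed on a common utility scale, evaluating a branch of the tree is just an expected-value calculation. The utilities and probabilities below are hypothetical, chosen only to illustrate one node of a Figure 2-style tree.

```python
# One node of a Figure 2-style decision tree, with hypothetical utilities:
# outcomes scaled so "fired" = 0 and "big promotion" = 1, the in-between
# outcomes placed by the 50/50-gamble questioning described in the text.
utility = {
    "big promotion": 1.0,
    "nominal promotion": 0.55,
    "no promotion for a long time": 0.35,
    "fired": 0.0,
}

# After bad early results: "go for it" vs. "cut losses" (hypothetical odds)
p_winner_given_bad = 0.25
go_for_it = (p_winner_given_bad * utility["big promotion"]
             + (1 - p_winner_given_bad) * utility["fired"])
cut_losses = utility["no promotion for a long time"]

best = "go for it" if go_for_it > cut_losses else "cut losses"
```

With these particular numbers the expected utility of pressing on is 0.25 against 0.35 for cutting losses, so the tree says to cut losses after bad early results; different elicited utilities or odds would flip the decision.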
In short, utility theory can be combined with decision analysis to determine optimal decisions even when the results of such decisions are not easily quantifiable. Some of the potential wide range of decisions for which this technique could be used include the use by doctors to help people decide whether or not to undergo certain medical procedures such as surgery. Now clearly, one's feelings about potential surgical complications are not easy to quantify. This technique presumably would have plenty of potential applications in the insurance business, where little emotion is involved.

Linear programming is another Operations Research topic with widespread applications, especially for determining investment strategies. Consider an analysis for a new interest sensitive product. Here we consider three possible investments: a 5 year bond, a 15 year bond or a 30 year mortgage, with three possible interest rate scenarios.

Page 20 |

SURPLUS POSITION (30 YEARS) PER $1 OF INITIAL INVESTMENT

              5 Year    15 Year    30 Year    81% 5 Yr. and
  Scenario     Bond       Bond     Mortgage   19% 15 Yr. Bonds
     1         5.85       5.85       5.85          5.85
     2        13.92       5.78       7.30         12.37
     3         -.33       1.41      -2.20          0.00

This represents a simple type of C-3 risk analysis. For each combination of investment and scenario we compute the surplus position. In this case, it happens to be at the end of 30 years. All results are computed per dollar of initial investment. Suppose that we would like to find a good combination of investments and we would like not to have to do a lot of trial and error analysis. In particular, we would like to investigate an initial investment strategy whereby we do not lose any money under any scenario. Since the surplus numbers are expressed per dollar of initial investment, an initial investment strategy which is a combination of the investments shown will produce a linear combination of those surplus results. For example, an initial investment strategy that places 30% of the initial investment in five year bonds, 30% in 15 year bonds and 40% in 30 year mortgages will produce a surplus at the end of 30 years under scenario number two of 30% of $13.92 plus 30% of $5.78 plus 40% of $7.30. This totals to $8.83 per dollar of initial investment. This problem can be formulated as a linear programming problem. Note that the only scenario in which there is any chance of losing money is the third scenario. Let:

X5 = Fraction invested in 5 Yr. Bond
X15 = Fraction invested in 15 Yr. Bond
X30 = Fraction invested in 30 Yr. Mortgage

Then we want:
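The linear-combination arithmetic is worth seeing in code, using the surplus figures from the table:

```python
# Surplus per $1 of initial investment, taken from the table above
surplus = {                      # scenario: (5-yr bond, 15-yr bond, 30-yr mortgage)
    1: (5.85, 5.85, 5.85),
    2: (13.92, 5.78, 7.30),
    3: (-0.33, 1.41, -2.20),
}

mix = (0.30, 0.30, 0.40)         # the 30%/30%/40% strategy from the text

def portfolio_surplus(mix, scenario):
    """Surplus of a mixed strategy = linear combination of the column results."""
    return sum(w * s for w, s in zip(mix, surplus[scenario]))

s2 = portfolio_surplus(mix, 2)   # scenario 2 surplus of the mixed strategy
```

Evaluating the mix under scenario 2 reproduces the $8.83 per dollar quoted in the text, and under scenario 1 every mix gives 5.85 since all three columns agree there.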

(1) X5 + X15 + X30 = 1

(2) Surplus under Scenario 3 ≥ 0, or: -.33·X5 + 1.41·X15 - 2.20·X30 ≥ 0

We can come up with some expressions to impose the realistic constraints and also the goals we are trying to achieve. For example, we want the sum of our investments to equal the whole, which gives rise to the first constraint. We also want the surplus under each scenario to be equal to or greater than zero. As I pointed out, we only have to worry about that in scenario number three. So the second expression constrains our surplus under scenario number three to be equal to or greater than zero. This begins to look like a linear programming problem.

Page 21 |

GENERAL LINEAR PROGRAMMING PROBLEM

1. Find X1, X2, . . ., XN such that:

   C1·X1 + C2·X2 + . . . + CN·XN is minimized

   subject to a set of linear constraints, such as:

   a1,1·X1 + a1,2·X2 + . . . + a1,N·XN ≥ b1
   a2,1·X1 + a2,2·X2 + . . . + a2,N·XN ≥ b2
   . . .
   am,1·X1 + am,2·X2 + . . . + am,N·XN ≥ bm
   Xi ≥ 0, i = 1, 2, . . ., N

2. In equivalent notation: Min C·X subject to A·X ≥ b, X ≥ 0.

We have N decision variables here, X1, X2 to XN, such that we want some linear combination of those decision variables to be minimized subject to a number of linear constraints. The linear expression that we are trying to minimize is called the objective function. The linear expressions with the inequalities are referred to as the constraints. For those of you who are more comfortable thinking in terms of matrix notation, a shorthand statement of the same problem is given as number 2 above. Linear programming techniques allow us to solve these types of problems.

The investment strategy problem that we are trying to solve begins to look like a simple linear programming problem with three decision variables. In the paper "The Matching of Assets and Liabilities," in the 1980 Transactions, Jim Tilley presented the framework for solving these types of problems by formulating the objective function and the constraints as a linear programming problem. He also provided the software for solving the problem. Using the techniques from that paper, we find that there is an investment strategy that meets our goals. An initial investment strategy that places 81% of our initial investment in five year bonds and 19% in 15 year bonds will not lose money under any of the three scenarios and, in particular, under scenario number three. Now, this problem may seem fairly simple and you could probably do it without some knowledge of Operations Research techniques.
However, if the number of scenarios were more realistic and if the number of possible investments were much greater, the problem would not be a trivial one to do by hand, and you would definitely want some software capabilities to handle it. For the record, if these scenarios were such that every investment strategy entailed a loss under at least one scenario, then we could still use a linear programming approach where we would alter the objective function so as to

Page 22 |

use maxi-min criteria. All this means is that we could select the investment strategy which minimized our loss assuming that the worst possible scenario actually happened.
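As a rough check on the three-investment problem, the whole feasible region is small enough to search by brute force; a real analysis would use LP software in the style of Tilley's paper. The objective below is an assumption on my part: maximize the scenario-2 surplus subject to no loss under scenario 3, which happens to reproduce the 81%/19% mix quoted in the text.

```python
# Brute-force stand-in for an LP solver over the surplus table above.
# Assumed objective: maximize scenario-2 surplus subject to scenario-3
# surplus >= 0 and the mix summing to 1 (the published formulation may differ).
best, best_mix = None, None
for i in range(101):              # X5  = i / 100
    for j in range(101 - i):      # X15 = j / 100
        x5, x15 = i / 100, j / 100
        x30 = (100 - i - j) / 100          # remainder goes to the mortgage
        surplus3 = -0.33 * x5 + 1.41 * x15 - 2.20 * x30
        if surplus3 < -1e-9:               # would lose money under scenario 3
            continue
        surplus2 = 13.92 * x5 + 5.78 * x15 + 7.30 * x30
        if best is None or surplus2 > best:
            best, best_mix = surplus2, (x5, x15, x30)
```

On a 1% grid the search lands on (0.81, 0.19, 0.00), matching the strategy in the text; the binding constraint is exactly the scenario-3 no-loss condition.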

Page 23 |

different. Hopefully the AIM triangle can agree on some graph that they feel represents a reasonable compromise and suits their objectives. That graph would represent some compromise of design and investment strategy features.

Another use of simulation arises with pension plan sponsors' asset allocation decisions. Simulation can be used to show the pension plan sponsor how effectively a synthetic put option strategy can be applied to portfolios of stocks and bonds. At least one large eastern company is marketing a synthetic put option strategy now on stock portfolios for pension funds. Simulation can also be used to help a pension plan sponsor decide how to allocate money between stocks, bonds, real estate, etc. The simulation can project a random scenario separately for each of several different types of investments while still allowing for appropriate levels of correlation among the returns. At least one consulting firm has marketed this service for pension funds in recent years.

I close my remarks by touching briefly on one more Operations Research topic. You may already be familiar with project scheduling. Any of you that have already worked on a big project involving many operating areas of the company have probably already seen a PERT chart. (See Figure 4.) In this PERT chart, the arrows represent activities that need to be performed and also the order in which certain activities need to be performed. Projects such as Whole Life policy enhancement programs, introduction of new products and EDP design and implementation schedules are ideal PERT chart applications. Also, by using suggested techniques, one can determine how to efficiently speed up a project by scheduling overtime, extra personnel, etc. Well, I hope that I have mentioned something which aroused your curiosity.
If I have mentioned some Operations Research application that you now want to ask your students to investigate in greater detail, then I have accomplished everything that I had hoped for.

MR. HOLLAND: We would like to take a few minutes now for questions and answers, or for you to talk about practical applications of any of these topics in your practice.

MR. JOHN THOMPSON: I was interested in Mr. Robbins' mention of the application of sampling and modeling techniques in determining the aggregate deferred premium. It has been some time since I was involved in this kind of valuation problem, but, say thirty years ago, our approach was to classify annualized premiums with respect to anniversary month and mode of premium payment. That gives you directly the aggregate deferred premiums. Why should sampling techniques be necessary or desirable?

MR. ROBBINS: It really depends on the error found. If an error of principle has been found in the calculation of the deferred premiums, it can be any type of error. I think you are saying that as long as policies are classified as to anniversary and frequency and you have the annualized premium, you can get deferred premiums directly if the totals that you are working with are correct. But if they are not correct, you have to go back into the degree of incorrectness and the source of the incorrectness and determine the cause of the problem and make an estimate of the error.

Page 24 |

[Figure 3: distributions of ending surplus (in percent) for two synthetic option strategies supporting a single premium deferred annuity with an 11.00% initial guarantee rate.]

Page 25 |

  Year    Liability Flow
   1      $1.38 Million
   2      $2.26 Million

Then we want:

Min X1 + X2
Subject to: 1.12·X1 + .13·X2 ≥ 1,380,000
            1.13·X2 ≥ 2,260,000

Then: X2 = $2,000,000 and X1 = $1,000,000.

We would like to be able to pay off a liability of $1.38 million at the end of one year and $2.26 million at the end of two years. We would like to find the cheapest portfolio which will generate at least $1.38 million of cash at the end of one year and $2.26 million of cash at the end of two years. Well, that gives rise to the linear programming problem shown above. In solving this problem, we find that a $2 million investment in two year bonds and a $1 million investment in one year bonds does the trick. Now this is an extremely simple problem and you may have been able to do it in your heads. But if we were trying to match a large number of liability flows and if we were looking at a large number of possible investments, then the problem is no longer trivial. Again, you would want a linear programming software package to handle it. Even for a more complicated problem the format would be exactly the same. We would want to find the portfolio that has the least cost which will generate cash flows of at least the specified amounts at each specified point in time.

Another topic of growing interest in this era of interest sensitive products is the use of simulation techniques. Many problems, including actuarial ones, do not have readily available analytical solutions. Simulation models can help provide a solution in such circumstances. In addition, rapid advances in computer science in recent years have further enhanced the usefulness of simulation models. At the New York meeting last spring, we heard time and time again that simulation was an important part of the product pricing, product design and investment strategy process for developing interest sensitive products.
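Returning to the matching problem above: because the second constraint involves only X2, the optimum can be computed by working backward from the last liability. The 1.13 and 1.12 maturity values and the .13 coupon per dollar are the cash-flow figures given in the constraints.

```python
# The two-period cash-flow matching example, solved by backward substitution:
# buy just enough two-year bonds to cover year 2, then let their year-1
# coupons reduce the one-year bonds needed for year 1.
year1_liability = 1_380_000
year2_liability = 2_260_000

x2 = year2_liability / 1.13                  # two-year bond pays 1.13 at maturity
x1 = (year1_liability - 0.13 * x2) / 1.12    # X2 pays a .13 coupon in year 1;
                                             # one-year bond pays 1.12 at maturity
```

This reproduces the solution in the text: $2 million of two-year bonds and $1 million of one-year bonds, for a minimum total outlay of $3 million.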
Simulation provides important input to the AIM triangle, where AIM stands for Actuarial, Investment and Marketing. In order to do a simulation for an interest sensitive product, we will want an interest rate model. One will probably want to use some time series or regression analysis similar to those described here to estimate the form and parameters of the model. The results of a set of simulations can be summarized in chart or graph form. (See Figure 3.)

Shown in Figure 3 are the summarized results for two synthetic option strategies supporting a single premium deferred annuity product. I do not want to get into synthetic option strategies, but let me say that if we were to change the investment strategy, or change the interest guarantee, or change some of the product design features, we would end up with graphs that look



MR. HOLLAND: I think a possible situation would be if a plan was misclassified. It was listed as an annual frequency and it actually could be a monthly premium. You want to find the error and straighten it out.

MR. BARRY SAVAGE: Again, a question for Mr. Robbins. It seems the best possible sample is a 100% sample. Ultimately, by setting up our systems to run on modern electronic systems, will all our data be such that we can always sample 100%? Moreover, the cost of extending the sample is essentially negligible. Why, th


that the reserves are good and sufficient or for a consulting actuary who comes in to look over the work that has been done. It may not be economically feasible for the individual, given the time constraints, to do a 100% sample of all the reserve calculations. A chief actuary in a large company may have hundreds of people involved in putting the reserves together. He needs to be sure, consistent with his professional responsibility, that the reserves have not only been calculated correctly, but that they are the correct reserves. I think that you can discover a lot of things by sampling, or by using various other techniques such as regression analysis to see how a reserve would be expected to progress for a certain block. Then, as senior actuary, you can look at the results of the sample or of the regression and say, "This is really unusual," or "This is more of a deviation than I would have expected. So let's go in and audit this particular block very carefully." There are a lot of statistical tools that will help you even if all of the work is done on a 100% basis.

MR. HICKMAN: Let me make a historical comment. Bob gave you a little of the history of matching and immunization ideas and one of his illustr
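The audit-by-sampling idea described above can be sketched in a few lines: draw a random sample of reserve cells, compare each against the progression expected for its block, and flag anything that deviates beyond a tolerance. All figures, the deliberately misstated cell, and the 5% tolerance here are invented for illustration.

```python
import random

random.seed(7)
# Invented data: expected reserve by duration for one block, and "actual"
# valuation-system figures with small noise plus one deliberately bad cell.
expected = {d: 1_000 * 1.05 ** d for d in range(200)}
actual = {d: v * random.uniform(0.99, 1.01) for d, v in expected.items()}
actual[63] *= 1.25                      # the misstatement the audit should catch

sample = random.sample(sorted(actual), k=80)   # a 40% audit sample
flagged = [d for d in sample
           if abs(actual[d] / expected[d] - 1.0) > 0.05]
print(flagged)                          # durations worth a detailed audit
```

Note the trade-off the discussion raises: a sample this size only catches the error if the bad cell happens to be drawn, which is exactly why the senior actuary would follow up flagged deviations with a careful audit of the whole block.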


MR. HOLLAND: I agree. I think that we want to talk about where on the syllabus the practical aspects belong. The syllabus, to some extent, is devoted to the theory in the early exams, particularly the Associateship exams, and to practice-related issues in the Fellowship exams. We want to bring our actuaries along in their development. It may be that the applications will appear in a later exam than the theory.

MR. DAVE WILLIAMS: I think we are all familiar with the widely used econometric indicators. Considering the leading indicators, which are supposed to foreshadow changes in the economy, I am surprised that there has apparently been no statistical study to determine how valid they are, how relatively important they are, or the precision with which they foretell what is going to take place in the economy in the next 6 to 12 months. Are you aware of any OR or statistical studies which show just how valid the leading economic indicators are?

MR. HICKMAN: Yes, there have been such studies. In the United States, the Department of Commerce regularly publishes an index of leading indicators. Now, what is a leading indicator? What you are looking for, in time series terms, are current values at time t that will help you forecast those at time t + 1. Victor Zarnowitz from the University of Chicago, while at the Commerce Department, was largely responsible for the selection and development of that set of leading indicators. There have been many studies of their effectiveness. Levels and trends are fairly easy to forecast, but the timing of turning points is very difficult. There have been evaluations of the index of leading indicators. The Department of Commerce has a continuous project not only to monitor how well that index does, but also to look for other elements that might become one of the indic
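The "leading" idea Mr. Hickman describes can be made concrete with a lagged correlation: the indicator at time t is checked for association with the target series at t + 1. Both series below are invented numbers, purely to show the computation, not real economic data.

```python
# Invented series: an indicator observed at time t, and the economic
# quantity it is supposed to foreshadow one period later.
indicator = [2.0, 2.6, 1.8, 3.1, 2.2, 2.9, 1.6, 2.8, 2.4, 3.0]
target    = [5.0, 5.2, 6.1, 4.9, 6.8, 5.4, 6.5, 4.6, 6.2, 5.7]

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Pair indicator at t with target at t + 1:
lag1 = pearson(indicator[:-1], target[1:])
print(round(lag1, 2))
```

A strong lag-1 correlation is the easy part; as the answer notes, it is the timing of turning points that such an indicator rarely pins down.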


certain level. One other application that I left out entirely is dynamic programming. That has a very wide range of applications also, but the examples tend to get fairly complicated. I left it out because of the complicated examples, not because it was an inappropriate topic for discussion.
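Although the speaker left dynamic programming out, the core idea can be shown on a deliberately small toy problem: a multi-stage decision solved one stage at a time, reusing the best results for smaller budgets. The setting (choosing among projects under a budget) and all numbers are invented for illustration.

```python
# Minimal dynamic-programming illustration: a 0/1 knapsack-style choice
# among projects, each with a cost and a payoff, under a fixed budget.
def knapsack(costs, payoffs, budget):
    # best[b] = best total payoff achievable with budget b
    best = [0] * (budget + 1)
    for c, p in zip(costs, payoffs):
        # One "stage" per project; iterate budgets downward so each
        # project is taken at most once.
        for b in range(budget, c - 1, -1):
            best[b] = max(best[b], best[b - c] + p)
    return best[budget]

print(knapsack(costs=[3, 4, 5], payoffs=[4, 5, 7], budget=8))  # -> 11
```

The table `best` is the hallmark of the method: each stage's decision is reduced to a lookup of previously solved subproblems rather than an enumeration of all combinations.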

[Exhibits, pages 31-34: charts from the sampling discussion; the images are not recoverable from the text. Surviving fragments include the labels "mean," "Standard error o[f the mean]," and "N =," a source citation beginning "Samp," and the citations "Ibid, pp. 7..." and "Ibid, p 76."]
