

Short-Term Actuarial Math / Old Exam C Forum

#11




Thank you anyhow for answering my questions, which solely reflect my own level of incapacity.
__________________
German ______________ Prelims: VEE: 
#12




1) I understand that 1/40 = 2.5% and 1/20 = 5%.

2) These are the type of questions that compare two possible distributions and the log-likelihoods of some given samples under each. We assume that H0 is the distribution with fewer fitted parameters, against H1, a different distribution, in this case with one parameter fitted. Then we go to the chi-square table and see where twice the difference of the log-likelihoods falls, with degrees of freedom equal to the difference in the number of parameters. In this case I find in Excel that the "exact" probability is 2.624%, which in the tables falls somewhere between 2.5% and 5%. What is the meaning of 2.624%?

3) Again, the answers ask whether we reject or not at two significance levels: "Reject at the 0.050 sig. level, but not at the 0.025 sig. level," etc. My question is: what is the meaning of "reject at the 0.050 sig. level" and "do not reject at the 0.025 sig. level"? Is it that, given the data and with H0 being that the non-fitted distribution is true vs. H1 that the fitted distribution is true, it may be a true representation 2.5% of the time or less, but not 5% of the time or more? YES / NO. Or: if the non-fitted distribution is a good model (H0 true), we may be rejecting this model in 5% or more of the runs but not in 2.5% or fewer? YES / NO. If what I said was wrong, what would you change in the questions?

4) I assume there is a population from which the samples are extracted. We then use techniques like the method of moments or MLE to model the population, or just past history in similar situations with larger samples. With different samples or runs, we get different models. In this case one model is not based on the sample and gives two parameters out of the blue, probably from a larger population (our H0). The other model has one parameter given and one fitted from the sample (H1). OK, we compare the log-likelihoods of the samples under these models, and use the likelihood-ratio method to compare double the difference with a chi-square table.

We know and verified that the fitted distribution will have a better log-likelihood than the non-fitted one. The point is whether this is significant enough to justify fitting one parameter. So far, so good. What I am trying to understand is the meaning of this significance. When I say "significant at the 2.624% level," what does that mean if I run the sampling 100 times?
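The mechanics described in 4) can be sketched in Python. The log-likelihood numbers below are made up purely for illustration (they are not from any actual exam problem), chosen so that the p-value lands near the 2.624% figure being discussed:

```python
import math

def chi2_sf_1dof(x):
    # Survival function of a chi-square with 1 degree of freedom:
    # P(X > x) = P(|Z| > sqrt(x)) = 2 * (1 - Phi(sqrt(x))) = erfc(sqrt(x / 2))
    return math.erfc(math.sqrt(x / 2.0))

# Hypothetical log-likelihoods (illustrative numbers only):
loglik_H0 = -412.70   # both parameters fixed in advance, nothing fitted
loglik_H1 = -410.23   # one parameter fitted by maximum likelihood

# Likelihood-ratio statistic: double the difference in log-likelihoods,
# compared against chi-square with (difference in fitted parameters) = 1 dof.
T = 2.0 * (loglik_H1 - loglik_H0)
p_value = chi2_sf_1dof(T)             # roughly 0.026 with these numbers

reject_at_5pct   = T > 3.841          # chi-square(1) critical value at 5%
reject_at_2_5pct = T > 5.024          # chi-square(1) critical value at 2.5%
```

With these made-up numbers the statistic falls between the 5% and 2.5% critical values, which is exactly the "reject at 0.050 but not at 0.025" pattern in the answer choices.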
#13




I am sorry for all the noise due to my lack of understanding. I guess a possible explanation, based on Colymbosathon ecplecticos's answer, would be that we may not know when we are wrong in the assumption, but that on average, if H0 is true and the non-fitted distribution represents the population, data this extreme would come up in fewer than 5% of the runs, but in more than 2.5% of them. Or something like that. Some people may be happy with that significance level in order not to go into credibility calculations.
#14




Quote:
We have one statement, H0, which is our default assumption. Then we obtain a body of data and we ask, "Do we still believe this?" Suppose our base assumption was, "Gaucho is an axe murderer." We then go and obtain some data: 57% of people in our survey (N = 100) drink orange juice with breakfast. Now, under H0 (which means assuming that H0 is true), the likelihood that we observed that extreme an event is, say, 60% (because this has NOTHING to do with H0). With this data, we fail to reject H0. Very different from accepting H0. Or do you disagree, Mr. Axe Murderer?
__________________
"What do you mean I don't have the prerequisites for this class? I've failed it twice before!" "I think that probably clarifies things pretty good by itself." "I understand health care now especially very well." 
#15




Quote:
If the underlying is truly continuous (see below), then significant at the 2.624% level means (because of the Law of Large Numbers) that in the limit as sample sizes tend to infinity, about 2,624 out of 100,000 random samples from the population will have a statistic as unusual as this or more unusual. If the underlying is not continuous, we need to be more careful. For example, the probability of HHHH on a fair coin is 1/16 = 6.25%, so we might reject at 10% and fail to reject at 5%, but neither attainable level is exactly 5/100 or 10/100. PM me if you want references to the literature.
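The coin-flip point about discrete p-values can be checked with a quick simulation; everything here (the 100,000 trials, the seed) is just an illustrative choice:

```python
import random

random.seed(0)

N = 100_000
# Simulate N experiments of four fair-coin flips and count how often we see
# the most extreme one-sided outcome, HHHH.
hhhh = sum(
    all(random.random() < 0.5 for _ in range(4))
    for _ in range(N)
)
frequency = hhhh / N   # near the exact value 1/16 = 0.0625

# Since 0.05 < 0.0625 < 0.10, we reject at the 10% level but fail to reject
# at the 5% level, even though no attainable p-value equals exactly 5% or 10%.
```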
#16




Thank you special swimmer! But I still don't get even the example of the orange juice...
You mention: "We then go and obtain some data: 57% of people in our survey (N = 100) drink orange juice with breakfast. Now, under H0 (which means assuming that H0 is true) the likelihood that we observed that extreme an event is, say, 60% (because this has NOTHING to do with H0). With this data, we fail to reject H0. Very different than accepting H0."

If I understand it, what you are saying is that the condition of H0 being true does not play a role in people drinking OJ at breakfast. The figure of 60%, I assume, is the significance level of the OJ drinking; you called it likelihood, and some other people may call it probability. Is this correct? So you took a sample of 100, found 57 drinking OJ, and you can attest that this figure is correct 60% of the time by whatever test you devise. Even here I find it difficult to understand this significance level, and why it's an extreme event, as no criterion was established a priori. So, to me the example is as clear as mud; sorry for my shortcomings, which I know are many.

I find hypothesis testing easier to understand when comparing two means, but not when comparing whole distributions. You compare the difference to the standard deviation of the difference of means, and then, based on a z-test or t-test, you establish a significance level and you "don't reject" that they are the same unless they are way off. And then you can go on to other areas of hypothesis testing, but that's off subject here. But let's say that you would accept the figure of 57% as correct if the likelihood is less than 30%, and you find 60%; that's outside the range. Is this what you are saying? I am failing to follow the example.
#17




Regarding your second comment, Mr. Fossilized Swimmer (lol), and thank you for posting it, you mention:

"If the underlying is truly continuous (see below), then significant at the 2.624% level means (because of the Law of Large Numbers) that in the limit as sample sizes tend to infinity, about 2,624 out of 100,000 random samples from the population will have a statistic as unusual as this or more unusual."

What is the meaning of "a statistic as unusual as this or more unusual"? What's usual? I am lost again. Which statistic? What is it if I find one? (It reminds me of the elephant described by blind people; I don't know why.) I have two models for the population. One gives me two parameters out of the blue, H0. The other fits a parameter to a sample, H1. Why is one unusual? I have been wondering if this situation might be explained better by showing why the chi-square is used for the test, perhaps related to the underlying normal distributions. But my knowledge there is very limited. Thank you again.
#18




Keep in mind that with a "large population," we'll never know the "true" parameter values of whatever you're looking at; H0 is simply a guess we think might work.
Since we can never know those true values, we really can never accept that H0 is true (how can we?). If our guess (i.e., H0) happens to be "sufficiently close," the best we'll ever be able to conclude is that there is not enough evidence to reject our guess as being right. 
#19




After all my ranting and a good night's sleep, first I apologize for my stupidity, and then: I think I found my answer. What I believe is that if I run the experiment/sampling 10,000 times, then, when H0 is true, a difference between the two log-likelihoods as large as the one observed will show up in about 2.6% of the runs, on average. So it happens in fewer than 5% of the runs, but in at least 2.5% of them.

I was missing what was different, and of course it had to be what started it all. Unless you have a different explanation, disregard all my postings. I am leaving them because they leave a trail of my train of thought and may help clarify somebody else's doubts. Yes, I could've had a V8. Thank you again to all who answered. The replies were all very insightful, including the last one, about never being able to know what is really true, so we should never accept it.
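That "run it 10,000 times" reading can be checked directly: simulate under H0 many times and see how often the likelihood-ratio statistic crosses the 5% chi-square cutoff. The exponential model, sample size, and seed below are arbitrary choices for illustration, not anything from the exam problem:

```python
import math
import random

random.seed(42)

def lr_stat(xs, lam0):
    """Likelihood-ratio statistic for an exponential sample:
    H0 fixes the rate at lam0; H1 fits the rate by maximum likelihood."""
    n, s = len(xs), sum(xs)
    lam_hat = n / s                               # MLE of the exponential rate
    ll_hat = n * math.log(lam_hat) - lam_hat * s  # log-likelihood at the MLE
    ll_0 = n * math.log(lam0) - lam0 * s          # log-likelihood under H0
    return 2.0 * (ll_hat - ll_0)

lam0, n, runs = 1.0, 200, 2000
rejections = sum(
    lr_stat([random.expovariate(lam0) for _ in range(n)], lam0) > 3.841
    for _ in range(runs)       # 3.841 = 5% critical value for chi-square(1)
)
rate = rejections / runs       # hovers near 0.05, because H0 really is true
```

The fitted model always wins on log-likelihood, yet the statistic only clears the 5% cutoff in roughly 5% of the runs: that is what the significance level measures.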
Last edited by gauchodelpaso; 03-05-2019 at 09:32 AM.
#20




It might also help to remember that your testing on the parameters involves not the underlying distribution so much as the distribution of values of the parameter based on samples (of the same size) from the population. (Which is what your last post does allude to with "experiment/sampling 10k times".)
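That distinction, between the distribution of the data and the distribution of a parameter estimate across repeated samples of the same size, can be made concrete with a small simulation (the exponential population, sample size, and seed are arbitrary illustrative choices):

```python
import random
import statistics

random.seed(1)

true_mean, n, reps = 500.0, 100, 3000
# Draw many samples of size n from the same exponential population and
# record the sample mean (the natural estimate of the mean) each time.
estimates = [
    statistics.fmean(random.expovariate(1 / true_mean) for _ in range(n))
    for _ in range(reps)
]

center = statistics.fmean(estimates)   # near the true mean, 500
spread = statistics.stdev(estimates)   # near true_mean / sqrt(n) = 50
```

The individual losses are heavily skewed, but the estimates cluster tightly around the true value; it is this second distribution that the hypothesis test works with.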


