I was running some regressions at work just now and I realized my overdependence on computers had made me forget how to calculate certain statistics manually. Modern regression software calculates various statistics automatically in less than a second, and I hardly think about what happens inside that virtual black box.

But just now, I was following up on a technical economic debate which revolved around some statistics. The report stated its t-stats but not the corresponding probabilities. I was curious about the probabilities, so I had to translate the t-stats into probabilities manually by reading the t-distribution table. I struggled at first. I found myself embarrassed at my inability to read the table after 6 years' worth of education in economics, and another 3 or 4 years in econometrics. But I managed. I guess it is like riding a bicycle. Once you have learned it, you know it. It may take some stimulus to remember if you have not been riding for a while, but you really can do it.
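The table lookup described above can also be done in a few lines of code. A minimal sketch, using the standard normal as a large-sample approximation to the t-distribution (the exact t p-value requires the incomplete beta function, which Python's standard library does not provide; the function name here is my own):

```python
from statistics import NormalDist

def approx_two_sided_p(t_stat):
    """Approximate two-sided p-value for a t-statistic, using the
    standard normal distribution. Reasonable when degrees of freedom
    are large; for small samples the true t p-value is somewhat larger."""
    return 2 * (1 - NormalDist().cdf(abs(t_stat)))

print(round(approx_two_sided_p(1.96), 3))  # roughly 0.05
```

With a statistics package installed, `scipy.stats.t.sf` would give the exact t-distribution tail instead of this normal approximation.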

One thought came to my mind after I was done with that.

I know there is a criticism about whether the critical values—the 10%, the 5%, etc.—mean anything. Indeed, the critical values are rules arbitrarily made up out of convenience. It is entirely possible that even if the calculated value breaks a particular critical value, a hypothesis can still be true despite rejection. It is all a matter of probability, and probability does not work as discretely as the typical critical-value rejection rule suggests. If there is a 99% probability that a hypothesis is untrue, that 1% can still pan out to be true, however unlikely. (Let us not get into the Type I and Type II error debate.)

Too many people like a yes-or-no answer. The rejection rule gives them that, rightly or wrongly.

But I am thinking, why, throughout the economics and econometrics world, are the critical values always the same numbers? It is either 1%, 5% or 10% (I have seen 25% but… ehem). Why not 4.7%, or 7.1%?

I think I found an answer to that after looking at the t-stats table for the first time in at least 2 years.

Powerful and cheap computers only became widely available in the last decade of the 20th century. Because of this, many students in the olden days relied on tables for their rejection rules. Tables being tables on pieces of paper, space was at a premium. So publishers of tables could only print the sexy numbers, and obviously not too many numbers over the natural number space, never mind the real numbers. Either you used the tables, or you calculated the critical values yourself, which is a pain.
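The "calculate it yourself" route is far less painful today. A sketch of recovering the familiar tabled numbers by inverting the normal CDF (again the large-sample stand-in for the t-distribution, and the function name is illustrative):

```python
from statistics import NormalDist

def approx_two_sided_critical(alpha):
    """Critical value for a two-sided test at significance level alpha,
    using the standard normal as a large-sample approximation to the t."""
    return NormalDist().inv_cdf(1 - alpha / 2)

# The three conventional levels the post talks about:
for alpha in (0.10, 0.05, 0.01):
    print(alpha, round(approx_two_sided_critical(alpha), 3))
```

Running this reproduces the well-worn numbers 1.645, 1.96 and 2.576, which is exactly what the bottom row of a t-table (infinite degrees of freedom) shows.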

So, that convention sticks after a while. From early econometricians to students of econometrics, the same tables get used over and over again. It becomes a tradition.

Maybe?

2 Responses to “[2619] Why are critical values always at 1%, 5% and 10%?”

  1. on 28 Oct 2012 at 01:46 Will Smith

    It’s convenient to have standardization.

    If one paper rejected some hypothesis with a 3% confidence interval, and another similar paper with similar data accepted a hypothesis at the 7% interval, where would we be?

    At least if one rejected at 5% and the other accepted at 5%, we could go and work out whether it was the methodology or the data that differed.

  2. on 28 Oct 2012 at 22:43 Hafiz Noor Shams

    True, but if the paper publishes the statistics, it really doesn’t matter whether the paper rejects or does not reject the hypothesis. A critical reader will be able to make that judgment by looking at the statistics and comparing them to the reader’s preferred critical values.
