A null hypothesis is a hypothesis (within the context of frequentist statistical hypothesis testing) that might be falsified using a test of observed data. Such a test works by formulating a null hypothesis, collecting data, and calculating how probable those data would be if the null hypothesis were true. If the data appear very improbable (usually defined as data that would be observed less than 5% of the time under the null hypothesis), the experimenter concludes that the null hypothesis is false. If the data look reasonable under the null hypothesis, no conclusion is drawn. In that case, the null hypothesis could be true, or it could still be false; the data give insufficient evidence either way.
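The decision rule described above can be sketched in a few lines. This is a minimal illustration, not a complete testing procedure: the function name `decide` and the threshold parameter `alpha` are our own choices, and computing the p-value itself is left to an actual statistical test.

```python
# A minimal sketch of the decision rule: given a p-value (the probability
# of observing data at least this extreme, assuming the null hypothesis
# is true), reject the null only if it falls below the conventional 5%
# threshold. Otherwise, no conclusion is drawn.

def decide(p_value: float, alpha: float = 0.05) -> str:
    """Return the conclusion of a significance test at level alpha."""
    if p_value < alpha:
        return "reject the null hypothesis"
    return "no conclusion (insufficient evidence)"

print(decide(0.03))  # improbable data -> reject
print(decide(0.40))  # plausible data  -> no conclusion
```

Note the asymmetry the text describes: falling below the threshold lets us reject the null, but failing to fall below it never lets us *accept* the null.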
The null hypothesis typically proposes a general or default position, such as that there is no relationship between two quantities, or that there is no difference between a treatment and the control.
The term was originally coined by English geneticist and statistician Ronald Fisher. In some versions of statistical hypothesis testing (such as that developed by Jerzy Neyman and Egon Pearson), the null hypothesis is tested against an alternative hypothesis.
This alternative may or may not be the logical negation of the null hypothesis. The use of alternative hypotheses was not part of Ronald Fisher's formulation of statistical hypothesis testing, though alternative hypotheses are standard in practice today.
For instance, one might want to test the claim that a certain drug reduces the chance of having a heart attack. One would choose the null hypothesis "this drug does not reduce the chances of having a heart attack" (or perhaps "this drug has no effect on the chances of having a heart attack").
One should then collect data by observing people both taking the drug and not taking the drug in some sort of controlled experiment. If the data are very unlikely under the null hypothesis one would reject the null hypothesis, and conclude that its negation is true.
That is, one would conclude that the drug does reduce the chances of having a heart attack. Here "unlikely data" would mean data where the percentage of people taking the drug who had heart attacks was significantly (according to statistical standards) less than the percentage of people not taking the drug who had heart attacks.
Of course, one should use a known statistical test to decide how unlikely the data are and hence whether or not to reject the null hypothesis.
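One such known test for the drug scenario is a two-proportion z-test, which compares the heart-attack rate in the two groups. The sketch below uses invented counts (30 of 1000 people on the drug, 60 of 1000 in the control group) purely for illustration; they are not from any real trial.

```python
import math

def norm_cdf(z: float) -> float:
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def two_proportion_p_value(events_a: int, n_a: int,
                           events_b: int, n_b: int) -> float:
    """One-sided p-value for H0: rate in group A is not below group B.

    Uses the pooled-proportion z statistic; the counts are hypothetical.
    """
    p_a = events_a / n_a
    p_b = events_b / n_b
    pooled = (events_a + events_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Small (very negative) z means group A's rate is much lower than B's.
    return norm_cdf(z)

# Drug group: 30 heart attacks in 1000; control: 60 in 1000 (made up).
p_value = two_proportion_p_value(30, 1000, 60, 1000)
print(f"one-sided p-value: {p_value:.4f}")
```

With these illustrative counts the p-value falls well below 5%, so under the conventional threshold one would reject the null hypothesis that the drug does not reduce heart attacks.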
One must take care in choosing a null hypothesis, as different choices lead to different answers. The following example demonstrates this. You are asked to decide whether a coin is fair (i.e. that on average it will come up heads 50% of the time). You flip it 5 times and it comes up heads all 5 times. Do you conclude it is not a fair coin?

You might say your alternative hypothesis is "this coin is biased towards heads". The null hypothesis would then be "this coin is not biased towards heads", which is to say it is at least as likely to come up tails as heads. Under this null hypothesis, the data are indeed unlikely: all heads should happen about 3% of the time. You would reject the null hypothesis and conclude the coin is biased.

However, you could instead choose the alternative hypothesis "this coin is biased", with the null hypothesis "this coin is fair". Then the data are not so unlikely; similar data should happen about 6% of the time, since 3% of the time you get all heads and 3% of the time you get all tails. You would then not reject the null hypothesis, and so would make no conclusion. In this case, the second null hypothesis is the correct choice: you were originally asked to decide whether the coin is fair, not whether it is biased towards heads. You would want more data to make any such conclusion (and really you should have wanted more data to begin with).

This example illustrates one of the hazards of hypothesis testing: if one tests a given set of data against a large number of null hypotheses, all of which are true, one is nonetheless likely to reject some of them, drawing false conclusions. However, if one follows the scientific method and formulates the null hypothesis before collecting data, one makes only a small number of Type I errors (i.e. one rejects a true null hypothesis only a small percentage of the time). Of course, even when used carefully and correctly, any statistical test gives some incorrect conclusions.
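The "about 3%" and "about 6%" figures in the coin example come from simple binomial arithmetic, which the following sketch reproduces exactly using fractions:

```python
from fractions import Fraction

# Probability of 5 heads in 5 flips of a fair coin: (1/2)^5 = 1/32.
p_all_heads = Fraction(1, 2) ** 5

# Two-sided version for the null "this coin is fair": data as extreme as
# 5-of-a-kind means all heads OR all tails, so double the probability.
p_all_same = 2 * p_all_heads

print(float(p_all_heads))  # 0.03125  (about 3%)
print(float(p_all_same))   # 0.0625   (about 6%)
```

This makes the example's point concrete: the same five flips sit below the 5% threshold against one null hypothesis (one-sided, 3.1%) but above it against the other (two-sided, 6.25%).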