So, here's the deal:

In any study's conclusion, you would either accept the null hypothesis and reject the alternative hypothesis, or you would accept the alternative hypothesis and reject the null hypothesis.

Therefore, you have a 50% chance of your study being right or wrong. The end. You're welcome for this explanation of all of statistics.

Just kidding.

So, let's say you reject the null hypothesis and accept the alternative hypothesis.

If you are wrong, you have accepted the alternative hypothesis when it is false. This is called either a **false positive** or a **Type I** error. The probability of making this error is called α. Your study's p-value gets compared against α: if p < α, you reject the null hypothesis, so α is the cap on your false-positive risk. Two letters for two closely related values, because it makes statistics more funner. (Grammar joke!)
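To see α act as a false-positive cap, here's a minimal simulation sketch. Everything in it (the z-test helper, the sample size of 30, the seed) is my own illustrative choice, not from the text: generate data where the null hypothesis really is true, test it over and over, and count how often p < α anyway.

```python
import random
import statistics

random.seed(0)

def z_test_p_value(sample, mu0=0.0, sigma=1.0):
    # Two-sided z-test for the mean, assuming sigma is known (my simplification).
    n = len(sample)
    z = (statistics.fmean(sample) - mu0) * n ** 0.5 / sigma
    return 2 * (1 - statistics.NormalDist().cdf(abs(z)))

# The null hypothesis is TRUE here: every sample really has mean 0.
trials = 2000
alpha = 0.05
false_positives = sum(
    z_test_p_value([random.gauss(0, 1) for _ in range(30)]) < alpha
    for _ in range(trials)
)
print(false_positives / trials)  # hovers around alpha = 0.05
```

About 5% of these perfectly null studies still "find" an effect, which is exactly what a Type I error rate of α = 0.05 means.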

Or, you could accept the null hypothesis and reject the alternative hypothesis.

If you are wrong, you have rejected the alternative hypothesis when it is true. This is called a **false negative** or **Type II** error. The probability that you have done this is β. To have a good study, you want β ≤ 0.2, or at most a 20% chance that a true alternative hypothesis is rejected.
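The flip side can be simulated the same way (with a hypothetical z-test helper and a made-up effect size of 0.3): make the alternative hypothesis true, and count how often the test still fails to reject the null. That fraction is β.

```python
import random
import statistics

random.seed(1)

def rejects_null(sample, mu0=0.0, sigma=1.0, alpha=0.05):
    # Two-sided z-test for the mean, sigma assumed known (my simplification).
    n = len(sample)
    z = (statistics.fmean(sample) - mu0) * n ** 0.5 / sigma
    p = 2 * (1 - statistics.NormalDist().cdf(abs(z)))
    return p < alpha

# The alternative hypothesis is TRUE here: the real mean is 0.3, not 0.
trials = 2000
misses = sum(
    not rejects_null([random.gauss(0.3, 1) for _ in range(30)])
    for _ in range(trials)
)
beta = misses / trials
print(beta)  # roughly 0.6 -- far above the 0.2 target
```

With only 30 data points per study, β lands near 0.6: most studies of this real effect would miss it, which is exactly the kind of underpowered design the β ≤ 0.2 rule is meant to prevent.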

So to review:

*These values represent the probability of errors occurring:*

__α (p-value cutoff)__: probability that a false alternative hypothesis is accepted (Type I error)

__β__: probability that a true alternative hypothesis is rejected (Type II error)

But what if you reject the null hypothesis and you are right? A false null hypothesis has been rejected. The probability of this happening is called the **power**, and it is 1-β. Instead of talking about the β of a study, people usually talk about the power, since it provides the same information. The power of a study should be ≥ 0.8, or at least an 80% chance of rejecting a false null hypothesis, as this also guarantees an acceptable β.
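Power grows with sample size, so the planning step can be sketched analytically. This uses the standard normal-approximation power formula for a two-sided z-test; the effect size δ = 0.3 and σ = 1 are invented for illustration, and the negligible opposite-tail rejection probability is ignored.

```python
from statistics import NormalDist

nd = NormalDist()

def power(n, delta=0.3, sigma=1.0, alpha=0.05):
    # Normal-approximation power of a two-sided z-test; the tiny
    # opposite-tail rejection probability is ignored.
    z_crit = nd.inv_cdf(1 - alpha / 2)
    return nd.cdf(delta * n ** 0.5 / sigma - z_crit)

print(round(power(30), 2))  # about 0.38, so beta is about 0.62: underpowered
n_needed = next(n for n in range(1, 1000) if power(n) >= 0.8)
print(n_needed)             # 88 participants clear the 0.8 bar
```

This is why sample-size calculations happen before a study starts: for this hypothetical effect size, 30 participants give you a coin flip's worth of power, while 88 get you over the 0.8 line.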

So there must be one more value to talk about: you accept the null hypothesis, and you are correct. The true null hypothesis has been accepted. The probability of this is 1-α. In studies, no one cares about this value, so it doesn't have a name that shows up in the conclusions of papers. People simply report the p-value, since comparing it to α tells you everything 1-α would. Womp, womp.

So in conclusion:

*These values represent the probability of errors occurring:*

__α (p-value cutoff)__: probability that a false alternative hypothesis is accepted (Type I error)

__β__: probability that a true alternative hypothesis is rejected (Type II error)

*These values represent the probability of errors not occurring:*

__power__: probability that a false null hypothesis is rejected (1-β)

_____: probability that a true null hypothesis is accepted (1-α)

Studies usually use only the p-value and power to describe all four of these probabilities: comparing the p-value to α also tells you about 1-α, and the power pins down β. A good p-value is < 0.05, and a good power is ≥ 0.8.
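To close the loop, one simulation sketch can estimate all four probabilities at once. The setup is hypothetical (standard normal data, an invented effect size of 0.3, n = 88 per study): run a z-test under a true null to estimate α and 1-α, then under a true alternative to estimate power and β.

```python
import random
from statistics import NormalDist, fmean

random.seed(2)
nd = NormalDist()

def p_value(sample, sigma=1.0):
    # Two-sided z-test against a null mean of 0, sigma assumed known.
    z = fmean(sample) * len(sample) ** 0.5 / sigma
    return 2 * (1 - nd.cdf(abs(z)))

def rejection_rate(true_mean, n=88, trials=2000, alpha=0.05):
    # Fraction of simulated studies that reject the null hypothesis.
    return fmean(
        p_value([random.gauss(true_mean, 1) for _ in range(n)]) < alpha
        for _ in range(trials)
    )

type1 = rejection_rate(0.0)   # null true: rejections are Type I errors
power = rejection_rate(0.3)   # null false: rejections are correct
print(f"alpha ~ {type1:.3f}, 1-alpha ~ {1 - type1:.3f}")
print(f"power ~ {power:.3f}, beta ~ {1 - power:.3f}")
```

Two numbers in, four numbers out: α and power are the ones papers report, and 1-α and β come along for free as their complements.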