Critical Value Approach in Hypothesis Testing

by Nathan Sebhastian

Posted on Jun 05, 2023

Reading time: 5 minutes


The critical value is the cut-off point that determines whether you reject or fail to reject the null hypothesis, based on where your sample's test statistic falls in its distribution.

The critical value approach provides a standardized method for hypothesis testing, enabling you to make informed decisions based on the evidence obtained from sample data.

After calculating the test statistic using the sample data, you compare it to the critical value(s) corresponding to the chosen significance level ( α ).

The critical value(s) represent the boundary beyond which you reject the null hypothesis. You will have rejection regions and a non-rejection region, as follows:

Two-sided test

A two-sided hypothesis test has 2 rejection regions, so you need 2 critical values, one on each side. Because there are 2 rejection regions, you must split the significance level in half.

Each rejection region has a probability of α / 2, making the total probability for both areas equal to the significance level.

Critical regions in a two-sided test

In this test, the null hypothesis H0 gets rejected when the test statistic is too small or too large.

Left-tailed test

The left-tailed test has 1 rejection region, and the null hypothesis only gets rejected when the test statistic is too small.

Critical regions in a left-tailed test

Right-tailed test

The right-tailed test is similar to the left-tailed test, but the null hypothesis only gets rejected when the test statistic is too large.

Critical regions in a right-tailed test

Now that you understand the definition of critical values, let’s look at how to use critical values to construct a confidence interval.

Using Critical Values to Construct Confidence Intervals

Confidence intervals use the same critical values as the test you're running.

If you're running a z-test with a 95% confidence level, then:

  • For a two-sided test, the critical values are -1.96 and 1.96
  • For a one-tailed test, the critical value is approximately -1.645 (left) or 1.645 (right)

To calculate the upper and lower bounds of the confidence interval, you need to calculate the sample mean and then add or subtract the margin of error from it.

To get the margin of error, multiply the critical value by the standard error: Margin of Error = Critical Value × Standard Error, where the standard error of the mean is the standard deviation divided by the square root of the sample size (σ / √n).

Let’s see an example. Suppose you are estimating the population mean with a 95% confidence level.

You have a sample mean of 50, a sample size of 100, and a standard deviation of 10. Using a z-table, the critical value for a 95% confidence level is approximately 1.96.

Calculate the standard error: SE = σ / √n = 10 / √100 = 1.

Determine the margin of error: ME = 1.96 × 1 = 1.96.

Compute the lower bound and upper bound: lower bound = 50 − 1.96 = 48.04, upper bound = 50 + 1.96 = 51.96.

The 95% confidence interval is (48.04, 51.96). This means that we are 95% confident that the true population mean falls within this interval.
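As a quick arithmetic check, here is a minimal Python sketch of the same interval. The use of scipy.stats is my assumption (any z-table gives the same critical value); the numbers mirror the example above.

```python
from math import sqrt
from scipy import stats

sample_mean = 50      # values taken from the example above
sample_sd = 10
n = 100
confidence = 0.95

# Two-tailed z critical value for a 95% confidence level
z_crit = stats.norm.ppf(1 - (1 - confidence) / 2)   # ≈ 1.96

standard_error = sample_sd / sqrt(n)                 # 10 / sqrt(100) = 1
margin_of_error = z_crit * standard_error            # ≈ 1.96

lower = sample_mean - margin_of_error                # ≈ 48.04
upper = sample_mean + margin_of_error                # ≈ 51.96
print(f"95% CI: ({lower:.2f}, {upper:.2f})")
```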

Finding the Critical Value

The formula to find critical values depends on the specific distribution associated with the hypothesis test or confidence interval you’re using.

Here are the formulas for some commonly used distributions.

Standard Normal Distribution (Z-distribution):

The critical value for a given significance level ( α ) in the standard normal distribution is found using the inverse cumulative distribution function (CDF) or a standard normal table.

z(α) represents the z-score corresponding to the desired significance level α .

Student’s t-Distribution (t-distribution):

The critical value for a given significance level (α) and degrees of freedom (df) in the t-distribution is found using the inverse cumulative distribution function (CDF) or a t-distribution table.

t(α, df) represents the t-score corresponding to the desired significance level α and degrees of freedom df .

Chi-Square Distribution (χ²-distribution):

The critical value for a given significance level (α) and degrees of freedom (df) in the chi-square distribution is found using the inverse cumulative distribution function (CDF) or a chi-square distribution table.

where χ²(α, df) represents the chi-square value corresponding to the desired significance level α and degrees of freedom df .

F-Distribution:

The critical value for a given significance level (α), degrees of freedom for the numerator (df₁), and degrees of freedom for the denominator (df₂) in the F-distribution is found using the inverse cumulative distribution function (CDF) or an F-distribution table.

F(α, df₁, df₂) represents the F-value corresponding to the desired significance level α , df₁ , and df₂ .

As you can see, the specific formula to find critical values depends on the distribution and the parameters associated with the problem at hand.

Usually, you don’t calculate the critical values manually as you can use statistical tables or statistical software to determine the critical values.

I will update this tutorial with statistical tables that you can use later.
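In the meantime, here is a hedged sketch of how a statistics package can produce these critical values. I'm assuming Python with scipy.stats (the tutorial itself doesn't prescribe a tool); the `ppf` method is the inverse CDF (quantile function), and the degrees of freedom are illustrative.

```python
from scipy import stats

alpha = 0.05

# Right-tailed critical values at alpha = 0.05
z_crit    = stats.norm.ppf(1 - alpha)              # Z: ≈ 1.645
t_crit    = stats.t.ppf(1 - alpha, df=10)          # t(10): ≈ 1.812
chi2_crit = stats.chi2.ppf(1 - alpha, df=10)       # chi-square(10): ≈ 18.307
f_crit    = stats.f.ppf(1 - alpha, dfn=3, dfd=20)  # F(3, 20): ≈ 3.098

print(z_crit, t_crit, chi2_crit, f_crit)
```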

The critical value acts as a threshold at which you make a decision based on the observed test statistic and its relation to the significance level.

It provides a predetermined point of reference to objectively evaluate the strength of the evidence against the null hypothesis and guide the acceptance or rejection of the hypothesis.

If the test statistic falls in the critical region (beyond the critical value), it means the observed data provide strong evidence against the null hypothesis.

In this case, you reject the null hypothesis in favor of the alternative hypothesis, indicating that there is sufficient evidence to support the claim or relationship stated in the alternative hypothesis.

On the other hand, if the test statistic falls in the non-critical region (within the critical value), it means the observed data do not provide enough evidence to reject the null hypothesis.

In this case, you fail to reject the null hypothesis, indicating that there is insufficient evidence to support the alternative hypothesis.


What is a critical value?

A critical value is a point on the distribution of the test statistic under the null hypothesis that defines a set of values that call for rejecting the null hypothesis. This set is called the critical or rejection region. Usually, one-sided tests have one critical value and two-sided tests have two critical values. The critical values are determined so that the probability that the test statistic has a value in the rejection region of the test when the null hypothesis is true equals the significance level (denoted as α or alpha).


Critical values on the standard normal distribution for α = 0.05

Figure A shows that results of a one-tailed Z-test are significant if the value of the test statistic is equal to or greater than 1.64, the critical value in this case. The shaded area represents the probability of a type I error (α = 5% in this example) of the area under the curve. Figure B shows that results of a two-tailed Z-test are significant if the absolute value of the test statistic is equal to or greater than 1.96, the critical value in this case. The two shaded areas sum to 5% (α) of the area under the curve.

Examples of calculating critical values

In hypothesis testing, there are two ways to determine whether there is enough evidence from the sample to reject H 0 or to fail to reject H 0 . The most common way is to compare the p-value with a pre-specified value of α, where α is the probability of rejecting H 0 when H 0 is true. However, an equivalent approach is to compare the calculated value of the test statistic based on your data with the critical value. The following are examples of how to calculate the critical value for a 1-sample t-test and a One-Way ANOVA.

Calculating a critical value for a 1-sample t-test

  • Select Calc > Probability Distributions > t .
  • Select Inverse cumulative probability .
  • In Degrees of freedom , enter 9 (the number of observations minus one).
  • In Input constant , enter 0.95 (one minus one-half alpha).

This gives you an inverse cumulative probability, which equals the critical value, of 1.83311. If the absolute value of the t-statistic is greater than this critical value, then you can reject the null hypothesis, H0, at the 0.10 level of significance.
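If you prefer code to menus, the same inverse cumulative probability can be reproduced in Python; this is a sketch assuming scipy.stats, not part of the Minitab procedure described above.

```python
from scipy import stats

df = 9                          # number of observations minus one
t_crit = stats.t.ppf(0.95, df)  # one minus one-half alpha (alpha = 0.10)
print(round(t_crit, 5))         # 1.83311
```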

Calculating a critical value for an analysis of variance (ANOVA)

  • Choose Calc > Probability Distributions > F .
  • In Numerator degrees of freedom , enter 2 (the number of factor levels minus one).
  • In Denominator degrees of freedom , enter 9 (the degrees of freedom for error).
  • In Input constant , enter 0.95 (one minus alpha).

This gives you an inverse cumulative probability (critical value) of 4.25649. If the F-statistic is greater than this critical value, then you can reject the null hypothesis, H0, at the 0.05 level of significance.
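The corresponding F critical value can be checked the same way (again a scipy-based sketch rather than the Minitab steps themselves).

```python
from scipy import stats

dfn = 2                               # numerator df: factor levels minus one
dfd = 9                               # denominator df: error degrees of freedom
f_crit = stats.f.ppf(0.95, dfn, dfd)  # one minus alpha (alpha = 0.05)
print(round(f_crit, 5))               # 4.25649
```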


Critical Value Calculator


Welcome to the critical value calculator! Here you can quickly determine the critical value(s) for two-tailed tests, as well as for one-tailed tests. It works for most common distributions in statistical testing: the standard normal distribution N(0,1) (that is when you have a Z-score), t-Student, chi-square, and F-distribution .

What is a critical value? And what is the critical value formula? Scroll down – we provide you with the critical value definition and explain how to calculate critical values in order to use them to construct rejection regions (also known as critical regions).

The critical value calculator is your go-to tool for swiftly determining critical values in statistical tests, be it one-tailed or two-tailed. To effectively use the calculator, follow these steps:

In the first field, input the distribution of your test statistic under the null hypothesis: is it a standard normal N(0,1), t-Student, chi-squared, or Snedecor's F? If you are not sure, check the sections below devoted to those distributions, and try to locate the test you need to perform.

In the field What type of test? choose the alternative hypothesis : two-tailed, right-tailed, or left-tailed.

If needed, specify the degrees of freedom of the test statistic's distribution. If you need more clarification, check the description of the test you are performing. You can learn more about the meaning of this quantity in statistics from the degrees of freedom calculator .

Set the significance level, α. By default, we pre-set it to the most common value, 0.05, but you can adjust it to your needs.

The critical value calculator will display your critical value(s) and the rejection region(s).

Click the advanced mode if you need to increase the precision with which the critical values are computed.

For example, let's envision a scenario where you are conducting a one-tailed hypothesis test using a t-Student distribution with 15 degrees of freedom. You have opted for a right-tailed test and set a significance level (α) of 0.05. The results indicate that the critical value is 1.7531, and the critical region is (1.7531, ∞). This implies that if your test statistic exceeds 1.7531, you will reject the null hypothesis at the 0.05 significance level.
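You can verify that scenario outside the calculator with a one-line check; the snippet below assumes Python with scipy.stats.

```python
from scipy import stats

# Right-tailed t test, 15 degrees of freedom, alpha = 0.05
t_crit = stats.t.ppf(1 - 0.05, df=15)
print(round(t_crit, 4))   # 1.7531 -> rejection region (1.7531, ∞)
```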

👩‍🏫 Want to learn more about critical values? Keep reading!

In hypothesis testing, critical values are one of the two approaches which allow you to decide whether to retain or reject the null hypothesis. The other approach is to calculate the p-value (for example, using the p-value calculator ).

The critical value approach consists of checking if the value of the test statistic generated by your sample belongs to the so-called rejection region, or critical region, which is the region where the test statistic is highly unlikely to lie. A critical value is a cut-off value (or two cut-off values in the case of a two-tailed test) that constitutes the boundary of the rejection region(s). In other words, critical values divide the scale of your test statistic into the rejection region and the non-rejection region.

Once you have found the rejection region, check if the value of the test statistic generated by your sample belongs to it :

  • If so, it means that you can reject the null hypothesis and accept the alternative hypothesis; and
  • If not, then there is not enough evidence to reject H 0 .

But how to calculate critical values? First of all, you need to set a significance level, α, which quantifies the probability of rejecting the null hypothesis when it is actually correct. The choice of α is arbitrary; in practice, we most often use a value of 0.05 or 0.01. Critical values also depend on the alternative hypothesis you choose for your test, as elucidated in the next section.

To determine critical values, you need to know the distribution of your test statistic under the assumption that the null hypothesis holds. Critical values are then points with the property that the probability of your test statistic assuming values at least as extreme as those critical values is equal to the significance level α. Wow, quite a definition, isn't it? Don't worry, we'll explain what it all means.

First, let us point out it is the alternative hypothesis that determines what "extreme" means. In particular, if the test is one-sided, then there will be just one critical value; if it is two-sided, then there will be two of them: one to the left and the other to the right of the median value of the distribution.

Critical values can be conveniently depicted as the points with the property that the area under the density curve of the test statistic from those points to the tails is equal to α:

Left-tailed test: the area under the density curve from the critical value to the left is equal to α;

Right-tailed test: the area under the density curve from the critical value to the right is equal to α; and

Two-tailed test: the area under the density curve from the left critical value to the left is equal to α/2, and the area under the curve from the right critical value to the right is equal to α/2 as well; thus, the total area equals α.

Critical values for symmetric distribution

As you can see, finding the critical values for a two-tailed test with significance level α boils down to finding both one-tailed critical values with a significance level of α/2.

The formulae for the critical values involve the quantile function, Q, which is the inverse of the cumulative distribution function (cdf) for the test statistic distribution (calculated under the assumption that H0 holds!): Q = cdf⁻¹.

Once we have agreed upon the value of α, the critical value formulae are the following:

  • Left-tailed test: Q(α)
  • Right-tailed test: Q(1 − α)
  • Two-tailed test: Q(α/2) and Q(1 − α/2)

In the case of a distribution symmetric about 0, the critical values for the two-tailed test are symmetric as well: Q(1 − α/2) = −Q(α/2).

Unfortunately, the probability distributions that are the most widespread in hypothesis testing have somewhat complicated cdf formulae, so finding critical values by hand is impractical: you would need specialized software or statistical tables. In these cases, the best option is, of course, our critical value calculator! 😁

Use the Z (standard normal) option if your test statistic follows (at least approximately) the standard normal distribution N(0,1) .

In the formulae below, u denotes the quantile function of the standard normal distribution N(0,1):

Left-tailed Z critical value: u(α)

Right-tailed Z critical value: u(1 − α)

Two-tailed Z critical value: ±u(1 − α/2)
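As a hedged illustration, `scipy.stats.norm.ppf` can play the role of the quantile function u here (this is my tooling assumption, not part of the calculator itself):

```python
from scipy import stats

alpha = 0.05
left  = stats.norm.ppf(alpha)           # ≈ -1.645 (left-tailed)
right = stats.norm.ppf(1 - alpha)       # ≈  1.645 (right-tailed)
two   = stats.norm.ppf(1 - alpha / 2)   # ≈  1.960 (two-tailed, used as ±1.960)
print(left, right, two)
```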

Check out Z-test calculator to learn more about the most common Z-test used on the population mean. There are also Z-tests for the difference between two population means and, in particular, for the difference between two proportions.

Use the t-Student option if your test statistic follows the t-Student distribution . This distribution is similar to N(0,1) , but its tails are fatter – the exact shape depends on the number of degrees of freedom . If this number is large (>30), which generically happens for large samples, then the t-Student distribution is practically indistinguishable from N(0,1). Check our t-statistic calculator to compute the related test statistic.

t-Student distribution densities

In the formulae below, Q_{t,d} is the quantile function of the t-Student distribution with d degrees of freedom:

Left-tailed t critical value: Q_{t,d}(α)

Right-tailed t critical value: Q_{t,d}(1 − α)

Two-tailed t critical values: ±Q_{t,d}(1 − α/2)

Visit the t-test calculator to learn more about various t-tests: the one for a population mean with an unknown population standard deviation , those for the difference between the means of two populations (with either equal or unequal population standard deviations), as well as about the t-test for paired samples .

Use the χ² (chi-square) option when performing a test in which the test statistic follows the χ²-distribution .

You need to determine the number of degrees of freedom of the χ²-distribution of your test statistic – below, we list them for the most commonly used χ²-tests.

Here we give the formulae for chi-square critical values; Q_{χ²,d} is the quantile function of the χ²-distribution with d degrees of freedom:

Left-tailed χ² critical value: Q_{χ²,d}(α)

Right-tailed χ² critical value: Q_{χ²,d}(1 − α)

Two-tailed χ² critical values: Q_{χ²,d}(α/2) and Q_{χ²,d}(1 − α/2)
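The same pattern works for χ² critical values in code, for example with `scipy.stats.chi2.ppf` (a sketch under that assumption; the degrees of freedom are illustrative):

```python
from scipy import stats

alpha, df = 0.05, 10
left      = stats.chi2.ppf(alpha, df)          # ≈  3.940
right     = stats.chi2.ppf(1 - alpha, df)      # ≈ 18.307
two_lower = stats.chi2.ppf(alpha / 2, df)      # ≈  3.247
two_upper = stats.chi2.ppf(1 - alpha / 2, df)  # ≈ 20.483
print(left, right, two_lower, two_upper)
```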

Several different tests lead to a χ²-score:

Goodness-of-fit test : does the empirical distribution agree with the expected distribution?

This test is right-tailed. Its test statistic follows the χ²-distribution with k − 1 degrees of freedom, where k is the number of classes into which the sample is divided.

Independence test : is there a statistically significant relationship between two variables?

This test is also right-tailed, and its test statistic is computed from the contingency table. There are (r − 1)(c − 1) degrees of freedom, where r is the number of rows and c is the number of columns in the contingency table.

Test for the variance of normally distributed data : does this variance have some pre-determined value?

This test can be one- or two-tailed! Its test statistic has the χ²-distribution with n − 1 degrees of freedom, where n is the sample size.

Finally, choose F (Fisher-Snedecor) if your test statistic follows the F-distribution . This distribution has a pair of degrees of freedom .

Let us see how those degrees of freedom arise. Assume that you have two independent random variables, X and Y, that follow χ²-distributions with d₁ and d₂ degrees of freedom, respectively. If you now consider the ratio (X/d₁) : (Y/d₂), it turns out that it follows the F-distribution with (d₁, d₂) degrees of freedom. That's the reason why we call d₁ and d₂ the numerator and denominator degrees of freedom, respectively.

In the formulae below, Q_{F,d₁,d₂} stands for the quantile function of the F-distribution with (d₁, d₂) degrees of freedom:

Left-tailed F critical value: Q_{F,d₁,d₂}(α)

Right-tailed F critical value: Q_{F,d₁,d₂}(1 − α)

Two-tailed F critical values: Q_{F,d₁,d₂}(α/2) and Q_{F,d₁,d₂}(1 − α/2)

Here we list the most important tests that produce F-scores: each of them is right-tailed .

ANOVA: tests the equality of means in three or more groups that come from normally distributed populations with equal variances. There are (k − 1, n − k) degrees of freedom, where k is the number of groups and n is the total sample size (across every group).

Overall significance in regression analysis. The test statistic has (k − 1, n − k) degrees of freedom, where n is the sample size and k is the number of variables (including the intercept).

Compare two nested regression models. The test statistic follows the F-distribution with (k₂ − k₁, n − k₂) degrees of freedom, where k₁ and k₂ are the number of variables in the smaller and bigger models, respectively, and n is the sample size.

The equality of variances in two normally distributed populations. There are (n − 1, m − 1) degrees of freedom, where n and m are the respective sample sizes.

I'm Anna, the mastermind behind the critical value calculator and a PhD in mathematics from Jagiellonian University .

The idea for creating the tool originated from my experiences in teaching and research. Recognizing the need for a tool that simplifies the critical value determination process across various statistical distributions, I built a user-friendly calculator accessible to both students and professionals. After publishing the tool, I soon found myself using the calculator in my research and as a teaching aid.

Trust in this calculator is paramount to me. Each tool undergoes a rigorous review process , with peer-reviewed insights from experts and meticulous proofreading by native speakers. This commitment to accuracy and reliability ensures that users can be confident in the content. Please check the Editorial Policies page for more details on our standards.

What is a Z critical value?

A Z critical value is the value that defines the critical region in hypothesis testing when the test statistic follows the standard normal distribution . If the value of the test statistic falls into the critical region, you should reject the null hypothesis and accept the alternative hypothesis.

How do I calculate Z critical value?

To find a Z critical value for a given significance level α:

Check if you perform a one- or two-tailed test .

For a one-tailed test:

Left-tailed: the critical value is the α-th quantile of the standard normal distribution N(0,1).

Right-tailed: the critical value is the (1 − α)-th quantile.

Two-tailed test: the critical values equal ± the (1 − α/2)-th quantile of N(0,1).

No quantile tables? Use CDF tables! (The quantile function is the inverse of the CDF.)

Verify your answer with an online critical value calculator.

Is a t critical value the same as Z critical value?

In theory, no. In practice, very often, yes. The t-Student distribution is similar to the standard normal distribution, but it is not the same. However, if the number of degrees of freedom (which is, roughly speaking, the size of your sample) is large enough (>30), then the two distributions are practically indistinguishable, and so the t critical value has practically the same value as the Z critical value.

What is the Z critical value for 95% confidence?

The Z critical value for a 95% confidence interval is:

  • 1.96 for a two-tailed test;
  • 1.64 for a right-tailed test; and
  • -1.64 for a left-tailed test.
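These rounded values are easy to double-check; the snippet below is a minimal sketch assuming scipy.stats.

```python
from scipy import stats

print(round(stats.norm.ppf(0.975), 2))  #  1.96  (two-tailed)
print(round(stats.norm.ppf(0.95), 2))   #  1.64  (right-tailed)
print(round(stats.norm.ppf(0.05), 2))   # -1.64  (left-tailed)
```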


The Critical Value and the p-Value Approach to Hypothesis Testing


P-Value vs. Critical Value: A Friendly Guide for Beginners

In the world of statistics, you may have come across the terms p-value and critical value . These concepts are essential in hypothesis testing, a process that helps you make informed decisions based on data. As you embark on your journey to understand the significance and applications of these values, don’t worry; you’re not alone. Many professionals and students alike grapple with these concepts, but once you get the hang of what they mean, they become powerful tools at your fingertips.

The main difference between p-value and critical value is that the p-value quantifies the strength of evidence against a null hypothesis, while the critical value sets a threshold on the test statistic for assessing significance. Simply put, if your p-value is below the significance level α (equivalently, if your test statistic is beyond the critical value), you reject the null hypothesis.

As you read on, you can expect to dive deeper into the definitions, applications, and interpretations of these often misunderstood statistical concepts. The remainder of the article will guide you through how p-values and critical values work in real-world scenarios, tips on interpreting their results, and potential pitfalls to avoid. By the end, you’ll have a clear understanding of their role in hypothesis testing, helping you become a more effective researcher or analyst.



Understanding P-Value and Critical Value

When you dive into the world of statistics, it’s essential to grasp the concepts of P-value and critical value . These two values play a crucial role in hypothesis testing, helping you make informed decisions based on data. In this section, we will focus on the concept of hypothesis testing and how P-value and critical value relate to it.


Concept of Hypothesis Testing

Hypothesis testing is a statistical technique used to analyze data and draw conclusions. You start by creating a null hypothesis (H0) and an alternative hypothesis (H1). The null hypothesis represents the idea that there is no significant effect or relationship between the variables being tested, while the alternative hypothesis claims that there is a significant effect or relationship.

To conduct a hypothesis test, follow these steps:

  • Formulate your null and alternative hypotheses.
  • Choose an appropriate statistical test and significance level (α).
  • Collect and analyze your data.
  • Calculate the test statistic and P-value.
  • Compare the P-value to the significance level (α), or equivalently the test statistic to the critical value.

Now, let’s discuss how P-value and critical value come into play during hypothesis testing.

The P-value is the probability of observing a test statistic as extreme as (or more extreme than) the one calculated if the null hypothesis were true. In simpler terms, it's the likelihood of getting your observed results by chance alone. The lower the P-value, the more evidence you have against the null hypothesis.

Here’s what you need to know about P-values:

  • A low P-value (typically ≤ 0.05) indicates that the observed results would be unlikely if the null hypothesis were true, providing evidence against it.
  • A high P-value (typically > 0.05) suggests that the observed results are consistent with the null hypothesis.

Critical Value

The critical value is a threshold that defines whether the test statistic is extreme enough to reject the null hypothesis. It depends on the chosen significance level (α) and the specific statistical test being used. If the test statistic exceeds the critical value, you reject the null hypothesis in favor of the alternative.

To summarize:

  • If the P-value ≤ α, reject the null hypothesis.
  • If the P-value > α, fail to reject the null hypothesis (do not conclude that the alternative is true).

In conclusion, understanding P-value and critical value is crucial for hypothesis testing. They help you determine the significance of your findings and make data-driven decisions. By grasping these concepts, you’ll be well-equipped to analyze data and draw meaningful conclusions in a variety of contexts.

P-Value Essentials

Calculating and interpreting p-values is essential to understanding statistical significance in research. In this section, we’ll cover the basics of p-values and how they relate to critical values.

Calculating P-Values

A p-value represents the probability of obtaining a result at least as extreme as the observed data, assuming the null hypothesis is correct. To calculate a p-value, follow these steps:

  • Define your null and alternative hypotheses.
  • Determine the test statistic and its distribution.
  • Calculate the observed test statistic based on your sample data.
  • Find the probability of obtaining a test statistic at least as extreme as the observed value.

Let’s dive deeper into these steps:

  • Step 1: Formulate the null hypothesis (H₀) and alternative hypothesis (H₁). The null hypothesis typically states that there is no effect or relationship between variables, while the alternative hypothesis suggests otherwise.
  • Step 2: Determine your test statistic and its distribution. The choice of test statistic depends on your data and hypotheses. Some common choices include the t-statistic, z-statistic, or chi-square statistic.
  • Step 3: Using your sample data, compute the test statistic. This value quantifies the difference between your sample data and the null hypothesis.
  • Step 4: Find the probability of obtaining a test statistic at least as extreme as the observed value, under the assumption that the null hypothesis is true. This probability is the p-value .
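To make the four steps concrete, here is a hedged Python sketch of a one-sample t-test; the sample values and hypothesized mean are invented for illustration, and scipy.stats is assumed.

```python
import numpy as np
from scipy import stats

# Step 1: H0: mu = 3.0 versus H1: mu != 3.0 (illustrative hypotheses)
mu0 = 3.0
sample = np.array([3.2, 3.5, 2.9, 3.8, 3.4, 3.1, 3.6, 3.3])

# Steps 2-3: t-statistic, which follows a t-distribution with n - 1 df under H0
n = len(sample)
t_stat = (sample.mean() - mu0) / (sample.std(ddof=1) / np.sqrt(n))

# Step 4: two-sided p-value = probability of a result at least this extreme
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 1)
print(t_stat, p_value)
```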

Interpreting P-Values

Once you’ve calculated the p-value, it’s time to interpret your results. The interpretation depends on the pre-specified significance level (α) you’ve chosen. Here’s a simplified guideline:

  • If p-value ≤ α , you can reject the null hypothesis.
  • If p-value > α , you cannot reject the null hypothesis.

Keep in mind that:

  • A lower p-value indicates stronger evidence against the null hypothesis.
  • A higher p-value implies weaker evidence against the null hypothesis.

Remember that statistical significance (p-value ≤ α) does not guarantee practical or scientific significance. It’s essential not to take the p-value as the sole metric for decision-making, but rather as a tool to help gauge your research outcomes.

In summary, p-values are crucial in understanding and interpreting statistical research results. By calculating and appropriately interpreting p-values, you can deepen your knowledge of your data and make informed decisions based on statistical evidence.

Critical Value Essentials

In this section, we’ll discuss two important aspects of critical values: Significance Level and Rejection Region . Knowing these concepts helps you better understand hypothesis testing and make informed decisions about the statistical significance of your results.

Significance Level

The significance level , often denoted as α or alpha, is an essential part of hypothesis testing. You can think of it as the threshold for deciding whether your results are statistically significant or not. In general, a common significance level is 0.05 or 5% , which means that there is a 5% chance of rejecting a true null hypothesis.

To help you understand better, here are a few key points:

  • The lower the significance level, the more stringent the test.
  • Higher α-levels may increase the risk of Type I errors (incorrectly rejecting the null hypothesis).
  • Lower α-levels may increase the risk of Type II errors (failing to reject a false null hypothesis).

Rejection Region

The rejection region is the range of values that, if your test statistic falls within, leads to the rejection of the null hypothesis. This area depends on the critical value and the significance level. The critical value is a specific point that separates the rejection region from the rest of the distribution. Test statistics that fall in the rejection region provide evidence that the null hypothesis might not be true and should be rejected.

Here are essential points to consider when using the rejection region:

  • Z-score : The z-score is a measure of how many standard deviations away from the mean a given value is. If your test statistic lies in the rejection region, it means that the z-score is significant.
  • Rejection regions are tailored for both one-tailed and two-tailed tests.
  • In a one-tailed test, the rejection region is either on the left or right side of the distribution.
  • In a two-tailed test, there are two rejection regions, one on each side of the distribution.

By understanding and considering the significance level and rejection region, you can more effectively interpret your statistical results and avoid making false assumptions or claims. Remember that critical values are crucial in determining whether to reject or accept the null hypothesis.

Statistical Tests and Decision Making

When you’re comparing the means of two samples, a t-test is often used. This test helps you determine whether there is a significant difference between the means. Here’s how you can conduct a t-test:

  • Calculate the t-statistic for your samples
  • Determine the degrees of freedom
  • Compare the t-statistic to a critical value from a t-distribution table

If the t-statistic is greater than the critical value, you can reject the null hypothesis and conclude that there is a significant difference between the sample means. Some key points about t-test:

  • Test statistic : In a t-test, the t-statistic is the key value that you calculate
  • Sample : For a t-test, you’ll need two independent samples to compare
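A minimal Python sketch of those t-test steps follows; the two samples are invented for illustration, and I'm assuming scipy.stats for both the test and the critical value.

```python
import numpy as np
from scipy import stats

a = np.array([5.1, 4.9, 5.4, 5.0, 5.2, 4.8])   # sample 1 (illustrative)
b = np.array([4.6, 4.7, 4.9, 4.5, 4.8, 4.4])   # sample 2 (illustrative)

# t-statistic (pooled, equal-variance form) and degrees of freedom
t_stat, p_value = stats.ttest_ind(a, b)
df = len(a) + len(b) - 2

# Compare |t| to the two-tailed critical value at alpha = 0.05
t_crit = stats.t.ppf(0.975, df)
print(t_stat, t_crit, abs(t_stat) > t_crit)
```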

The Analysis of Variance (ANOVA) is another statistical test, often used when you want to compare the means of three or more treatment groups. With this method, you analyze the differences between group means and make decisions on whether the total variation in the dataset can be accounted for by the variance within the groups or the variance between the groups. Here are the main steps in conducting an ANOVA test:

  • Calculate the F statistic
  • Determine the degrees of freedom for between-groups and within-groups
  • Compare the F statistic to a critical value from an F-distribution table

When the F statistic is larger than the critical value, you can reject the null hypothesis and conclude that there is a significant difference among the treatment groups. Keep these points in mind for ANOVA tests:

  • Treatment Groups : ANOVA tests require three or more groups to compare
  • Observations : You need multiple observations within each treatment group
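A corresponding sketch for the ANOVA steps (three illustrative treatment groups; `scipy.stats.f_oneway` computes the F statistic, which is again a tooling assumption):

```python
from scipy import stats

g1 = [23, 25, 21, 22, 24]        # treatment group 1 (illustrative data)
g2 = [28, 27, 30, 26, 29]        # treatment group 2
g3 = [22, 20, 23, 21, 24]        # treatment group 3

f_stat, p_value = stats.f_oneway(g1, g2, g3)

# Critical value: F with (k - 1, n - k) degrees of freedom at alpha = 0.05
k, n = 3, 15
f_crit = stats.f.ppf(0.95, k - 1, n - k)
print(f_stat, f_crit, f_stat > f_crit)
```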

Confidence Intervals

Confidence intervals (CIs) are a way to estimate values within a certain range, with a specified level of confidence. They help to indicate the reliability of an estimated parameter, like the mean or difference between sample means. Here’s what you need to know about calculating confidence intervals:

  • Determine the point estimate (e.g., sample mean or difference in means)
  • Calculate the standard error
  • Multiply the standard error by the appropriate critical value

The result gives you a range within which the true population parameter is likely to fall, with a certain level of confidence (e.g., 95%). Remember these insights when working with confidence intervals:

  • Confidence Level : The confidence level is the probability that the true population parameter falls within the calculated interval
  • Critical Value : Based on the specified confidence level, you’ll determine a critical value from a table (e.g., t-distribution)

Remember, using appropriate statistical tests, test statistics, and critical values will help you make informed decisions in your data analysis.

Comparing P-Values and Critical Values


Differences and Similarities

When analyzing data, you may come across two important concepts – p-values and critical values . While they both help determine the significance of a data set, they have some differences and similarities.

  • P-values are probabilities, ranging from 0 to 1, indicating how likely it is a particular result could be observed if the null hypothesis is true. Lower p-values suggest the null hypothesis should be rejected, meaning the observed data is not due to chance alone.
  • On the other hand, critical values are preset thresholds that decide whether the null hypothesis should be rejected or not. Results that surpass the critical value support adopting the alternative hypothesis.

The main similarity between p-values and critical values is their role in hypothesis testing. Both are used to determine if observed data provides enough evidence to reject the null hypothesis in favor of the alternative hypothesis.

Applications in Geospatial Data Analysis

In the field of geospatial data analysis, p-values and critical values play essential roles in making data-driven decisions. Researchers like Hartmann, Krois, and Waske from the Department of Earth Sciences at Freie Universitaet Berlin often use these concepts in their e-Learning project SOGA.

To better understand the applications, let’s look at three main aspects:

  • Spatial autocorrelation : With geospatial data, points might be related not only by their values but also by their locations. P-values can help assess spatial autocorrelation and recognize underlying spatial patterns.
  • Geostatistical analysis : Techniques like kriging or semivariogram estimation depend on critical values and p-values to decide the suitability of a model. By finding the best fit model, geospatial data can be better represented, ensuring accurate and precise predictions.
  • Comparing geospatial data groups : When comparing two subsets of data (e.g., mineral concentrations, soil types), p-values can be used in permutation tests or t-tests to verify if the observed differences are significant or due to chance.

In summary, when working with geospatial data analysis, p-values and critical values are crucial tools that enable you to make informed decisions about your data and its implications. By understanding the differences and similarities between the two concepts, you can apply them effectively in your geospatial data analysis journey.

Standard Distributions and Scores

In this section, we will discuss the Standard Normal Distribution and its associated scores, namely Z-Score and T-Statistic . These concepts are crucial in understanding the differences between p-values and critical values.

Standard Normal Distribution

The Standard Normal Distribution is a probability distribution that has a mean of 0 and a standard deviation of 1. This distribution is crucial for hypothesis testing, as it helps you make inferences about your data based on standard deviations from the mean. Some characteristics of this distribution include:

  • 68% of the data falls within ±1 standard deviation from the mean
  • 95% of the data falls within ±2 standard deviations from the mean
  • 99.7% of the data falls within ±3 standard deviations from the mean

The Z-Score is a measure of how many standard deviations away a data point is from the mean of the distribution. It is used to compare data points across different distributions with different means and standard deviations. To calculate the Z-Score, use the formula z = (x − μ) / σ, where x is the data point, μ is the mean, and σ is the standard deviation.

Key features of the Z-Score include:

  • Positive Z-Scores indicate values above the mean
  • Negative Z-Scores indicate values below the mean
  • A Z-Score of 0 is equal to the mean

T-Statistic

The T-Statistic , which follows the Student's t-distribution , is another way to assess how far away a data point is from the mean. It comes in handy when:

  • You have a small sample size (generally less than 30)
  • Population variance is not known
  • Population is assumed to be normally distributed

The T-Statistic shares similarities with the Z-Score but adjusts for sample size, making it more appropriate for smaller samples. The formula for calculating the T-Statistic is t = (x̄ − μ) / (s / √n), where x̄ is the sample mean, μ is the hypothesized population mean, s is the sample standard deviation, and n is the sample size.

In conclusion, understanding the Standard Normal Distribution , Z-Score , and T-Statistic will help you better differentiate between p-values and critical values, ultimately aiding in accurate statistical analysis and hypothesis testing.


Frequently Asked Questions

What is the relationship between p-value and critical value?

The p-value represents the probability of observing a test statistic at least as extreme as the one obtained, assuming the null hypothesis is true, while the critical value is a predetermined threshold on the test statistic for declaring significance. If the p-value is less than the significance level α (equivalently, if the test statistic is beyond the critical value), you reject the null hypothesis.

How do you interpret p-value in comparison to critical value?

When the p-value is smaller than the significance level α, there is strong evidence against the null hypothesis, which means you reject it. In contrast, if the p-value is larger than α, you fail to reject the null hypothesis and cannot conclude a significant effect. The critical value expresses the same cutoff, but on the scale of the test statistic rather than of probability.

What does it mean when the p-value is greater than the significance level?

If the p-value is greater than the significance level α, it indicates that the observed data are consistent with the null hypothesis, and you do not have enough evidence to reject it. In other words, the finding is not statistically significant.

How are critical values used to determine significance?

Critical values are used as a threshold to determine if a test statistic is considered significant. When the test statistic is more extreme than the critical value, you reject the null hypothesis, indicating that the observed effect is unlikely due to chance alone.

Why is it important to know both p-value and critical value in hypothesis testing?

Knowing both p-value and critical value helps you to:

  • Understand the strength of evidence against the null hypothesis
  • Decide whether to reject or fail to reject the null hypothesis
  • Assess the statistical significance of your findings
  • Avoid misinterpretations and false conclusions

How do you calculate critical values and compare them to p-values?

To calculate critical values, you:

  • Choose a significance level (α)
  • Determine the appropriate test statistic distribution
  • Find the value that corresponds to α in the distribution

Then, you compare the calculated test statistic with the critical value (or, equivalently, the p-value with α) to determine if the result is statistically significant. If the test statistic is more extreme than the critical value, you reject the null hypothesis.




Summary of the 3 Approaches to Hypothesis Testing

Steps to Conduct a Hypothesis Test Using $p$-Values:

Identify the null hypothesis and the alternative hypothesis (and decide which is the claim).

Ensure any necessary assumptions are met for the test to be conducted.

Find the test statistic.

Find the p-value associated with the test statistic as it relates to the alternative hypothesis.

Compare the p-value with the significance level, $\alpha$. If $p \lt \alpha$, conclude that the null hypothesis should be rejected based on what we saw. If not, conclude that we fail to reject the null hypothesis as a result of what we saw.

Make an inference.

Steps to Conduct a Hypothesis Test Using Critical Values:

Find the critical values associated with the significance level, $\alpha$, and the alternative hypothesis to establish the rejection region in the distribution.

If the test statistic falls in the rejection region, conclude that the null hypothesis should be rejected based on what we saw. If not, conclude that we fail to reject the null hypothesis as a result of what we saw.

Steps to Conduct a Hypothesis Test Using a Confidence Interval:

Construct a confidence interval with a confidence level of $(1-\alpha)$

If the hypothesized population parameter falls outside of the confidence interval, conclude that the null hypothesis should be rejected based on what we saw. If it falls within the confidence interval, conclude that we fail to reject the null hypothesis as a result of what we saw.



S.3.2 Hypothesis Testing (P-Value Approach)

The P -value approach involves determining "likely" or "unlikely" by determining the probability — assuming the null hypothesis was true — of observing a more extreme test statistic in the direction of the alternative hypothesis than the one observed. If the P -value is small, say less than (or equal to) \(\alpha\), then it is "unlikely." And, if the P -value is large, say more than \(\alpha\), then it is "likely."

If the P -value is less than (or equal to) \(\alpha\), then the null hypothesis is rejected in favor of the alternative hypothesis. And, if the P -value is greater than \(\alpha\), then the null hypothesis is not rejected.

Specifically, the four steps involved in using the P -value approach to conducting any hypothesis test are:

  • Specify the null and alternative hypotheses.
  • Using the sample data and assuming the null hypothesis is true, calculate the value of the test statistic. Again, to conduct the hypothesis test for the population mean μ , we use the t -statistic \(t^*=\frac{\bar{x}-\mu}{s/\sqrt{n}}\) which follows a t -distribution with n - 1 degrees of freedom.
  • Using the known distribution of the test statistic, calculate the P -value : "If the null hypothesis is true, what is the probability that we'd observe a more extreme test statistic in the direction of the alternative hypothesis than we did?" (Note how this question is equivalent to the question answered in criminal trials: "If the defendant is innocent, what is the chance that we'd observe such extreme criminal evidence?")
  • Set the significance level, \(\alpha\), the probability of making a Type I error to be small — 0.01, 0.05, or 0.10. Compare the P -value to \(\alpha\). If the P -value is less than (or equal to) \(\alpha\), reject the null hypothesis in favor of the alternative hypothesis. If the P -value is greater than \(\alpha\), do not reject the null hypothesis.

Example S.3.2.1

Mean GPA Section

In our example concerning the mean grade point average, suppose that our random sample of n = 15 students majoring in mathematics yields a test statistic t * equaling 2.5. Since n = 15, our test statistic t * has n - 1 = 14 degrees of freedom. Also, suppose we set our significance level α at 0.05 so that we have only a 5% chance of making a Type I error.

Right Tailed

The P-value for conducting the right-tailed test \(H_0: \mu = 3\) versus \(H_A: \mu > 3\) is the probability that we would observe a test statistic greater than t* = 2.5 if the population mean \(\mu\) really were 3. Recall that probability equals the area under the probability curve. The P-value is therefore the area under a \(t_{n-1} = t_{14}\) curve and to the right of the test statistic t* = 2.5. It can be shown using statistical software that the P-value is 0.0127. The graph depicts this visually.

t-distribution graph showing the right tail beyond a t value of 2.5

The P-value, 0.0127, tells us it is "unlikely" that we would observe such an extreme test statistic t* in the direction of \(H_A\) if the null hypothesis were true. Therefore, our initial assumption that the null hypothesis is true must be incorrect. That is, since the P-value, 0.0127, is less than \(\alpha\) = 0.05, we reject the null hypothesis \(H_0: \mu = 3\) in favor of the alternative hypothesis \(H_A: \mu > 3\).

Note that we would not reject \(H_0: \mu = 3\) in favor of \(H_A: \mu > 3\) if we lowered our willingness to make a Type I error to \(\alpha\) = 0.01 instead, as the P-value, 0.0127, is then greater than \(\alpha\) = 0.01.

Left Tailed

In our example concerning the mean grade point average, suppose that our random sample of n = 15 students majoring in mathematics yields a test statistic t* equaling -2.5 instead. The P-value for conducting the left-tailed test \(H_0: \mu = 3\) versus \(H_A: \mu < 3\) is the probability that we would observe a test statistic less than t* = -2.5 if the population mean \(\mu\) really were 3. The P-value is therefore the area under a \(t_{n-1} = t_{14}\) curve and to the left of the test statistic t* = -2.5. It can be shown using statistical software that the P-value is 0.0127. The graph depicts this visually.

t-distribution graph showing the left tail below a t value of -2.5

The P-value, 0.0127, tells us it is "unlikely" that we would observe such an extreme test statistic t* in the direction of \(H_A\) if the null hypothesis were true. Therefore, our initial assumption that the null hypothesis is true must be incorrect. That is, since the P-value, 0.0127, is less than \(\alpha\) = 0.05, we reject the null hypothesis \(H_0: \mu = 3\) in favor of the alternative hypothesis \(H_A: \mu < 3\).

Note that we would not reject \(H_0: \mu = 3\) in favor of \(H_A: \mu < 3\) if we lowered our willingness to make a Type I error to \(\alpha\) = 0.01 instead, as the P-value, 0.0127, is then greater than \(\alpha\) = 0.01.

Two Tailed

In our example concerning the mean grade point average, suppose again that our random sample of n = 15 students majoring in mathematics yields a test statistic t* equaling -2.5 instead. The P-value for conducting the two-tailed test \(H_0: \mu = 3\) versus \(H_A: \mu \neq 3\) is the probability that we would observe a test statistic less than -2.5 or greater than 2.5 if the population mean \(\mu\) really were 3. That is, the two-tailed test requires taking into account the possibility that the test statistic could fall into either tail (hence the name "two-tailed" test). The P-value is, therefore, the area under a \(t_{n-1} = t_{14}\) curve to the left of -2.5 and to the right of 2.5. It can be shown using statistical software that the P-value is 0.0127 + 0.0127, or 0.0254. The graph depicts this visually.

t-distribution graph of two tailed probability for t values of -2.5 and 2.5

Note that the P-value for a two-tailed test is always two times the P-value for either of the one-tailed tests. The P-value, 0.0254, tells us it is "unlikely" that we would observe such an extreme test statistic t* in the direction of \(H_A\) if the null hypothesis were true. Therefore, our initial assumption that the null hypothesis is true must be incorrect. That is, since the P-value, 0.0254, is less than \(\alpha\) = 0.05, we reject the null hypothesis \(H_0: \mu = 3\) in favor of the alternative hypothesis \(H_A: \mu \neq 3\).

Note that we would not reject \(H_0: \mu = 3\) in favor of \(H_A: \mu \neq 3\) if we lowered our willingness to make a Type I error to \(\alpha\) = 0.01 instead, as the P-value, 0.0254, is then greater than \(\alpha\) = 0.01.
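The three P-values quoted above (0.0127 for each one-tailed test and 0.0254 for the two-tailed test) can be reproduced with a short check; the snippet assumes scipy.stats as the statistical software.

```python
from scipy import stats

t_star, df = 2.5, 14

p_right = stats.t.sf(t_star, df)           # ≈ 0.0127 (right-tailed)
p_left  = stats.t.cdf(-t_star, df)         # ≈ 0.0127 (left-tailed)
p_two   = 2 * stats.t.sf(abs(t_star), df)  # ≈ 0.0254 (two-tailed)
print(p_right, p_left, p_two)
```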

Now that we have reviewed the critical value and P -value approach procedures for each of the three possible hypotheses, let's look at three new examples — one of a right-tailed test, one of a left-tailed test, and one of a two-tailed test.

The good news is that, whenever possible, we will take advantage of the test statistics and P -values reported in statistical software, such as Minitab, to conduct our hypothesis tests in this course.


8.2: Hypothesis Testing of Single Proportion


Learning Objectives

  • To learn how to apply the five-step critical value test procedure for tests of hypotheses concerning a population proportion.
  • To learn how to apply the five-step \(p\)-value test procedure for tests of hypotheses concerning a population proportion.

Both the critical value approach and the \(p\)-value approach can be applied to test hypotheses about a population proportion \(p\). The null hypothesis will have the form \(H_0 : p = p_0\) for some specific number \(p_0\) between \(0\) and \(1\). The alternative hypothesis will be one of the three inequalities

  • \(p <p_0\),
  • \(p>p_0\), or
  • \(p≠p_0\)

for the same number \(p_0\) that appears in the null hypothesis.

The information in Section 6.3 gives the following formula for the test statistic and its distribution. In the formula \(p_0\) is the numerical value of \(p\) that appears in the two hypotheses, \(q_0=1−p_0\), \(\hat{p}\) is the sample proportion, and \(n\) is the sample size. Remember that the condition that the sample be large is not that \(n\) be at least 30 but that the interval

\[ \left[ \hat{p} −3 \sqrt{ \dfrac{\hat{p} (1−\hat{p} )}{n}} , \hat{p} + 3 \sqrt{ \dfrac{\hat{p} (1−\hat{p} )}{n}} \right] \nonumber \]

lie wholly within the interval \([0,1]\).
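As an illustration only (Python, with a helper name of my own choosing rather than anything from the text), the large-sample condition can be checked directly from \(\hat{p}\) and \(n\):

```python
import math

def large_sample_ok(p_hat, n):
    """Return True if [p_hat - 3*SE, p_hat + 3*SE] lies wholly within [0, 1],
    where SE = sqrt(p_hat * (1 - p_hat) / n)."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - 3 * se >= 0 and p_hat + 3 * se <= 1

print(large_sample_ok(270 / 500, 500))  # True: the test in Example 1 below is valid
```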

Standardized Test Statistic for Large Sample Hypothesis Tests Concerning a Single Population Proportion

\[ Z = \dfrac{\hat{p} - p_0}{\sqrt{\dfrac{p_0 q_0}{n}}} \label{eq2} \]

The test statistic has the standard normal distribution.
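The formula above translates directly into code. A small sketch (Python; the function name is mine, not the text's):

```python
import math

def z_statistic(p_hat, p0, n):
    """Standardized test statistic for a large-sample test of a single proportion."""
    q0 = 1 - p0
    return (p_hat - p0) / math.sqrt(p0 * q0 / n)

print(round(z_statistic(0.54, 0.50, 500), 3))  # 1.789, as computed in Example 1 below
```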

The distribution of the standardized test statistic and the corresponding rejection region for each form of the alternative hypothesis (left-tailed, right-tailed, or two-tailed), is shown in Figure \(\PageIndex{1}\).

Figure \(\PageIndex{1}\): Distribution of the standardized test statistic and the rejection regions for left-tailed, right-tailed, and two-tailed alternatives.

Example \(\PageIndex{1}\)

A soft drink maker claims that a majority of adults prefer its leading beverage over that of its main competitor. To test this claim, \(500\) randomly selected people were given the two beverages in random order to taste. Among them, \(270\) preferred the soft drink maker’s brand, \(211\) preferred the competitor’s brand, and \(19\) could not make up their minds. Determine whether there is sufficient evidence, at the \(5\%\) level of significance, to support the soft drink maker’s claim against the default that the population is evenly split in its preference.

We will use the critical value approach to perform the test. The same test will be performed using the \(p\)-value approach in Example \(\PageIndex{3}\).

We must check that the sample is sufficiently large to validly perform the test. Since \(\hat{p} =270/500=0.54\),

\[\begin{align} & \left[ \hat{p} −3\sqrt{ \dfrac{\hat{p} (1−\hat{p} )}{n}} ,\hat{p} +3\sqrt{ \dfrac{\hat{p} (1−\hat{p} )}{n}} \right] \\ &=[0.54−(3)(0.02),0.54+(3)(0.02)] \\ &=[0.48, 0.60] ⊂[0,1] \end{align} \nonumber \]

so the sample is sufficiently large.

  • Step 1. The relevant test is

\[H_0 : p = 0.50  \nonumber \]

\[vs. \nonumber \]

\[H_a : p > 0.50\, @ \,\alpha =0.05 \nonumber \]

where \(p\) denotes the proportion of all adults who prefer the company’s beverage over that of its competitor’s beverage.

  • Step 2. The test statistic (Equation \ref{eq2}) is

\[Z=\dfrac{\hat{p} −p_0}{\sqrt{ \dfrac{p_0q_0}{n}}} \nonumber \]

and has the standard normal distribution.

  • Step 3. The value of the test statistic is

\[ \begin{align} Z &=\dfrac{\hat{p} −p_0}{\sqrt{ \dfrac{p_0q_0}{n}}} \\[6pt] &= \dfrac{0.54−0.50}{\sqrt{\dfrac{(0.50)(0.50)}{500}}} \\[6pt] &=1.789 \end{align} \nonumber \]

  • Step 4. Since the symbol in \(H_a\) is “\(>\)” this is a right-tailed test, so there is a single critical value, \(z_{α}=z_{0.05}\). Reading from the last line in Figure 7.1.6 its value is \(1.645\). The rejection region is \([1.645,∞)\).
  • Step 5. As shown in Figure \(\PageIndex{2}\) the test statistic falls in the rejection region. The decision is to reject \(H_0\). In the context of the problem our conclusion is:

The data provide sufficient evidence, at the \(5\%\) level of significance, to conclude that a majority of adults prefer the company’s beverage to that of its competitor.

Figure \(\PageIndex{2}\): The test statistic falls in the rejection region.
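The decision in Example \(\PageIndex{1}\) can also be reproduced numerically. This is a sketch assuming Python with SciPy, not the software or table lookup used in the text:

```python
from scipy import stats
import math

p_hat, p0, n, alpha = 270 / 500, 0.50, 500, 0.05

z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)  # ~1.789
z_crit = stats.norm.ppf(1 - alpha)               # right-tailed critical value, ~1.645

print(z >= z_crit)  # True: the statistic falls in [1.645, inf), so reject H0
```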

Example \(\PageIndex{2}\)

Globally the long-term proportion of newborns who are male is \(51.46\%\). A researcher believes that the proportion of boys at birth changes under severe economic conditions. To test this belief randomly selected birth records of \(5,000\) babies born during a period of economic recession were examined. It was found in the sample that \(52.55\%\) of the newborns were boys. Determine whether there is sufficient evidence, at the \(10\%\) level of significance, to support the researcher’s belief.

We will use the critical value approach to perform the test. The same test will be performed using the \(p\)-value approach in Example \(\PageIndex{4}\).

The sample is sufficiently large to validly perform the test since

\[\sqrt{ \dfrac{\hat{p} (1−\hat{p} )}{n}} =\sqrt{ \dfrac{(0.5255)(0.4745)}{5000}} ≈0.01 \nonumber \]

\[\begin{align} & \left[ \hat{p} −3\sqrt{ \dfrac{\hat{p} (1−\hat{p} )}{n}} ,\hat{p} +3\sqrt{ \dfrac{\hat{p} (1−\hat{p} )}{n}} \right] \\ &=[0.5255−0.03,0.5255+0.03] \\ &=[0.4955,0.5555] ⊂[0,1] \end{align} \nonumber \]

  • Step 1. Let \(p\) be the true proportion of boys among all newborns during the recession period. The burden of proof is to show that severe economic conditions change it from the historic long-term value of \(0.5146\) rather than to show that it stays the same, so the hypothesis test is

\[H_0 : p = 0.5146  \nonumber \]

\[H_a : p \neq 0.5146\, @ \,\alpha =0.10 \nonumber \]

  • Step 2. The test statistic (Equation \ref{eq2}) is

\[Z=\dfrac{\hat{p} −p_0}{\sqrt{ \dfrac{p_0q_0}{n}}} \nonumber \]

and has the standard normal distribution.

  • Step 3. The value of the test statistic is

\[ \begin{align} Z &=\dfrac{\hat{p} −p_0}{\sqrt{ \dfrac{p_0q_0}{n}}} \\[6pt] &= \dfrac{0.5255−0.5146}{\sqrt{\dfrac{(0.5146)(0.4854)}{5000}}} \\[6pt] &=1.542 \end{align} \nonumber \]

  • Step 4. Since the symbol in \(H_a\) is “\(\neq\)” this is a two-tailed test, so there are a pair of critical values, \(\pm z_{\alpha /2}=\pm z_{0.05}=\pm 1.645\). The rejection region is \((-\infty ,-1.645]\cup [1.645,\infty )\).
  • Step 5. As shown in Figure \(\PageIndex{3}\) the test statistic does not fall in the rejection region. The decision is not to reject \(H_0\). In the context of the problem our conclusion is:

The data do not provide sufficient evidence, at the \(10\%\) level of significance, to conclude that the proportion of newborns who are male differs from the historic proportion in times of economic recession.

Figure \(\PageIndex{3}\): The test statistic does not fall in the rejection region.
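A similar sketch for Example \(\PageIndex{2}\), again assuming Python with SciPy:

```python
from scipy import stats
import math

p_hat, p0, n, alpha = 0.5255, 0.5146, 5000, 0.10

z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)  # ~1.542
z_crit = stats.norm.ppf(1 - alpha / 2)           # two-tailed critical value, ~1.645

print(abs(z) >= z_crit)  # False: the statistic is not in the rejection region, so do not reject H0
```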

Example \(\PageIndex{3}\)

Perform the test of Example \(\PageIndex{1}\) using the \(p\)-value approach.

We already know that the sample size is sufficiently large to validly perform the test.

  • Steps 1–3 of the five-step procedure described in Section 8.3 have already been done in Example \(\PageIndex{1}\), so we will not repeat them here, but only say that we know that the test is right-tailed and that the value of the test statistic is \(Z = 1.789\).
  • Step 4. Since the test is right-tailed, the \(p\)-value is the area under the standard normal curve cut off by the observed test statistic, \(Z = 1.789\), as illustrated in Figure \(\PageIndex{4}\). By Figure 7.1.5 that area, and therefore the \(p\)-value, is \(1−0.9633=0.0367\).
  • Step 5. Since the \(p\)-value is less than \(α=0.05\) the decision is to reject \(H_0\).

Figure \(\PageIndex{4}\): Area under the standard normal curve to the right of \(Z = 1.789\).
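The right-tailed \(p\)-value can be computed from the standard normal survival function; a minimal sketch in Python with SciPy (an assumption, since the text reads the area from a table):

```python
from scipy import stats

z = 1.789
p_value = stats.norm.sf(z)  # area to the right of z under the standard normal curve
print(round(p_value, 4))    # ~0.037 (the table gives 0.0367); less than alpha = 0.05, so reject H0
```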

Example \(\PageIndex{4}\)

Perform the test of Example \(\PageIndex{2}\) using the \(p\)-value approach.

  • Steps 1–3 of the five-step procedure described in Section 8.3 have already been done in Example \(\PageIndex{2}\). They tell us that the test is two-tailed and that the value of the test statistic is \(Z = 1.542\).
  • Step 4. Since the test is two-tailed, the \(p\)-value is double the area under the standard normal curve cut off by the observed test statistic, \(Z = 1.542\). By Figure 7.1.5 that area is \(1-0.9382=0.0618\), as illustrated in Figure \(\PageIndex{5}\), hence the \(p\)-value is \(2\times 0.0618=0.1236\).
  • Step 5. Since the \(p\)-value is greater than \(\alpha =0.10\) the decision is not to reject \(H_0\).

Figure \(\PageIndex{5}\): Areas under the standard normal curve cut off by \(Z = 1.542\) in both tails.
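The corresponding sketch for the two-tailed \(p\)-value in Example \(\PageIndex{4}\) (Python with SciPy; the small difference from the table value is rounding):

```python
from scipy import stats

z = 1.542
p_value = 2 * stats.norm.sf(z)  # double the upper-tail area for a two-tailed test
print(round(p_value, 4))        # ~0.1231 (the table gives 0.1236); greater than alpha = 0.10, so do not reject H0
```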

Key Takeaway

  • There is one formula for the test statistic in testing hypotheses about a population proportion. The test statistic follows the standard normal distribution.
  • Either five-step procedure, critical value or \(p\)-value approach, can be used.
