
How to use the Z Table (With Examples)

A z-table is a table that tells you what percentage of values fall below a certain z-score in a standard normal distribution.

A z-score simply tells you how many standard deviations away an individual data value falls from the mean. It is calculated as:

z-score = (x – μ) / σ

  • x:  individual data value
  • μ:  population mean
  • σ:  population standard deviation
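For readers who prefer code, the formula maps one-to-one onto a tiny function; a minimal Python sketch (the function name is ours, and the numbers come from the first example below):

```python
def z_score(x, mu, sigma):
    """How many standard deviations x falls from the mean."""
    return (x - mu) / sigma

# Exam score of 84 with mu = 82, sigma = 8 (Example 1 below)
print(z_score(84, 82, 8))  # 0.25
```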

This tutorial shows several examples of how to use the z table.

The scores on a certain college entrance exam are normally distributed with mean  μ = 82 and standard deviation σ = 8. Approximately what percentage of students score less than 84 on the exam?

Step 1: Find the z-score.

First, we will find the z-score associated with an exam score of 84:

z-score = (x – μ) /  σ = (84 – 82) / 8 = 2 / 8 =  0.25

Step 2: Use the z-table to find the percentage that corresponds to the z-score.

Next, we will look up the value 0.25 in the z-table:

Approximately  59.87%  of students score less than 84 on this exam.

The heights of plants in a certain garden are normally distributed with a mean of μ = 26.5 inches and a standard deviation of σ = 2.5 inches. Approximately what percentage of plants are greater than 26 inches tall?

First, we will find the z-score associated with a height of 26 inches.

z-score = (x – μ) /  σ = (26 – 26.5) / 2.5 = -0.5 / 2.5 = -0.2

Next, we will look up the value -0.2 in the z-table:

We see that 42.07% of values fall below a z-score of -0.2. However, in this example we want to know what percentage of values are greater  than -0.2, which we can find by using the formula 100% – 42.07% = 57.93%.

Thus, approximately 57.93% of the plants in this garden are greater than 26 inches tall.

The weight of a certain species of dolphin is normally distributed with a mean of μ = 400 pounds and a standard deviation of σ = 25 pounds. Approximately what percentage of dolphins weigh between 410 and 425 pounds?

Step 1: Find the z-scores.

First, we will find the z-scores associated with 410 pounds and 425 pounds:

z-score of 410 = (x – μ) /  σ = (410 – 400) / 25 = 10 / 25 =  0.4

z-score of 425 = (x – μ) /  σ = (425 – 400) / 25 = 25 / 25 =  1

Step 2: Use the z-table to find the percentages that correspond to each z-score.

First, we will look up the value 0.4 in the z-table:

Then, we will look up the value 1 in the z-table:

Lastly, we will subtract the smaller value from the larger value: 0.8413 – 0.6554 = 0.1859.

Thus, approximately  18.59%  of dolphins weigh between 410 and 425 pounds.
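All three table lookups above can also be checked in software with the standard normal CDF; a sketch, assuming SciPy is installed:

```python
from scipy.stats import norm

# Example 1: percentage scoring less than 84 (z = 0.25)
print(norm.cdf(0.25))                 # ~0.5987 -> 59.87%

# Example 2: percentage of plants taller than 26 inches (z = -0.2)
print(1 - norm.cdf(-0.2))             # ~0.5793 -> 57.93%

# Example 3: percentage of dolphins between 410 and 425 pounds (z = 0.4 to 1)
print(norm.cdf(1.0) - norm.cdf(0.4))  # ~0.1859 -> 18.59%
```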

Additional Resources

  • An Introduction to the Normal Distribution
  • Normal Distribution Area Calculator
  • Z Score Calculator


Chapter 10: Hypothesis Testing with Z

Setting Up the Hypotheses

When setting up the hypotheses with z, the parameter is associated with a sample mean (in the previous chapter's examples, the null hypotheses used 0 as the parameter value). Using z is an occasion in which the null hypothesis value is something other than 0. For example, if we are working with mothers in the U.S. whose children are at risk of low birth weight, we can use 7.47 pounds, the average birth weight in the U.S., as our null value and test for differences against that. For now, we will focus on testing the value of a single mean against what we expect from the population.

Using birth weight as an example, our null hypothesis takes the form: H0: μ = 7.47. Notice that we are testing the value of μ, the population parameter, NOT the sample statistic X̄ (or M). We are referring to the data in raw form right now (we have not standardized it using z yet). Again, using inferential statistics, we are interested in understanding the population by drawing on our sample observations. For the research question, we have a mean value from the sample to use; it is observed and used as a comparison against a set point.

As mentioned earlier, the alternative hypothesis is simply the reverse of the null hypothesis, and there are three options, depending on where we expect the difference to lie. We will set the criteria for rejecting the null hypothesis based on the directionality (greater than, less than, or not equal to) of the alternative.

If we expect our obtained sample mean to be above or below the null hypothesis value (knowing which direction), we set a directional hypothesis. Our alternative hypothesis takes its form from the research question itself. In our example with birth weight, this could be presented as HA: μ > 7.47 or HA: μ < 7.47.

Note that we should only use a directional hypothesis if we have a good reason, based on prior observations or research, to suspect a particular direction. When we do not know the direction, such as when we are entering a new area of research, we use a non-directional alternative hypothesis. In our birth weight example, this could be set as HA: μ ≠ 7.47.

In working with data for this course, we will need to set a critical value of the test statistic for alpha (α) in order to use the test statistic tables in the back of the book. This means determining the critical rejection region, which has a set critical value based on α.

Determining Critical Value from α

We set alpha (α) before collecting data in order to determine whether or not we should reject the null hypothesis. We set this value beforehand to avoid biasing ourselves by viewing our results and then determining what criteria we should use.

When a research hypothesis predicts an effect but does not predict a direction for the effect, it is called a non-directional hypothesis . To test the significance of a non-directional hypothesis, we have to consider the possibility that the sample could be extreme at either tail of the comparison distribution. We call this a two-tailed test .


Figure 1. A two-tailed test for a non-directional hypothesis for z; area C is the critical rejection region.

When a research hypothesis predicts a direction for the effect, it is called a directional hypothesis . To test the significance of a directional hypothesis, we have to consider the possibility that the sample could be extreme at one-tail of the comparison distribution. We call this a one-tailed test .


Figure 2. A one-tailed test for a directional hypothesis (predicting an increase) for z; area C is the critical rejection region.

Determining Cutoff Scores with Two-Tailed Tests

Typically, we specify an α level before analyzing the data. If the data analysis results in a probability value below the α level, then the null hypothesis is rejected; if it is not, then the null hypothesis is not rejected. In other words, if our data produce values that meet or exceed this threshold, then we have sufficient evidence to reject the null hypothesis; if not, we fail to reject the null (we never "accept" the null). According to this perspective, if a result is significant, it does not matter how significant it is; and if it is not significant, it does not matter how close to significant it is. Therefore, if the 0.05 level is being used, probability values of 0.049 and 0.001 are treated identically. Similarly, probability values of 0.06 and 0.34 are treated identically. Later we will discuss effect size, which addresses this limitation of NHST.

When setting the probability value, there is a special complication in a two-tailed test. We have to divide the significance percentage between the two tails. For example, with a 5% significance level, we reject the null hypothesis only if the sample is so extreme that it is in either the top 2.5% or the bottom 2.5% of the comparison distribution. This keeps the overall level of significance at a total of 5%. A one-tailed test uses an equally extreme cutoff, but only one side of the distribution is considered.


Figure 3. Critical value differences in one- and two-tailed tests.

Let's review the set critical values for z.

We discussed z-scores and probability in chapter 8.  If we revisit the z-score for 5% and 1%, we can identify the critical regions for the critical rejection areas from the unit standard normal table.

  • A two-tailed test at the 5% level has critical boundary z-scores of +1.96 and −1.96.
  • A one-tailed test at the 5% level has a critical boundary z-score of +1.645 or −1.645.
  • A two-tailed test at the 1% level has critical boundary z-scores of +2.575 and −2.575.
  • A one-tailed test at the 1% level has a critical boundary z-score of +2.33 or −2.33.
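These cutoffs are just inverse-CDF values of the standard normal distribution, so they can be reproduced (to more decimal places than the rounded figures above) in a few lines; a sketch, assuming SciPy is available:

```python
from scipy.stats import norm

for alpha in (0.05, 0.01):
    two_tailed = norm.ppf(1 - alpha / 2)  # 1.960 at 5%, 2.576 at 1%
    one_tailed = norm.ppf(1 - alpha)      # 1.645 at 5%, 2.326 at 1%
    print(alpha, two_tailed, one_tailed)
```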

Review: Critical values, p-values, and significance level

There are two criteria we use to assess whether our data meet the thresholds established by our chosen significance level, and they both have to do with our discussions of probability and distributions. Recall that probability refers to the likelihood of an event, given some situation or set of conditions. In hypothesis testing, that situation is the assumption that the null hypothesis value is the correct value, or that there is no effect. The value laid out in H0 is our condition under which we interpret our results. To reject this assumption, and thereby reject the null hypothesis, we need results that would be very unlikely if the null were true.

Now recall that values of z which fall in the tails of the standard normal distribution represent unlikely values. That is, the proportion of the area under the curve as or more extreme than z is very small as we get into the tails of the distribution. Our significance level corresponds to the area under the tail that is exactly equal to α: if we use our normal criterion of α = .05, then 5% of the area under the curve becomes what we call the rejection region (also called the critical region) of the distribution. This is illustrated in Figure 4.


Figure 4: The rejection region for a one-tailed test

The shaded rejection region takes up 5% of the area under the curve. Any result which falls in that region is sufficient evidence to reject the null hypothesis.

The rejection region is bounded by a specific z-value, as is any area under the curve. In hypothesis testing, the value corresponding to a specific rejection region is called the critical value, z-crit or z* (hence the other name "critical region"). Finding the critical value works exactly the same as finding the z-score corresponding to any area under the curve, as we did in Unit 1. If we go to the normal table, we will find that the z-score corresponding to 5% of the area under the curve is equal to 1.645 (z = 1.64 leaves 0.0505 in the tail and z = 1.65 leaves 0.0495, so .05 is exactly in between them) if we go to the right, and −1.645 if we go to the left. The direction must be determined by your alternative hypothesis, and drawing and then shading the distribution is helpful for keeping directionality straight.

Suppose, however, that we want to do a non-directional test. We need to put the critical region in both tails, but we don’t want to increase the overall size of the rejection region (for reasons we will see later). To do this, we simply split it in half so that an equal proportion of the area under the curve falls in each tail’s rejection region. For α = .05, this means 2.5% of the area is in each tail, which, based on the z-table, corresponds to critical values of z* = ±1.96. This is shown in Figure 5.


Figure 5: Two-tailed rejection region

Thus, any z-score falling outside ±1.96 (greater than 1.96 in absolute value) falls in the rejection region. When we use z-scores in this way, the obtained value of z (sometimes called z-obtained) is something known as a test statistic, which is simply an inferential statistic used to test a null hypothesis.

Calculate the test statistic: Z

Now that we understand setting up the hypotheses and determining the outcome, let's examine hypothesis testing with z! The next step is to carry out the study and get the actual results for our sample. Central to the hypothesis test is a comparison of the population and sample means. To make our calculation and determine where the sample falls in the hypothesized distribution, we calculate z for the sample data.

Make a decision

To decide whether to reject the null hypothesis, we compare our sample's z-score to the z-score that marks our critical boundary. If our sample z-score falls inside the rejection region of the comparison distribution (i.e., is more extreme than the critical z boundary), we reject the null hypothesis.

The formula for our z-statistic has not changed:

z = (X̄ − μ) / (σ / √n)

To formally test our hypothesis, we compare our obtained z-statistic to our critical z-value. If z-obt is more extreme than z-crit, that means it falls in the rejection region (to see why, draw a line for z = 2.5 on Figure 1 or Figure 2), and so we reject H0. If z-obt is less extreme than z-crit, we fail to reject. Remember that as z gets larger, the corresponding area under the curve beyond z gets smaller. Thus, the proportion, or p-value, will be smaller than the area for α, and if the area is smaller, the probability gets smaller. Specifically, the probability of obtaining that result, or a more extreme result, under the condition that the null hypothesis is true gets smaller.

Conversely, if we fail to reject, we know that the proportion will be larger than α because the z-statistic will not be as far into the tail. This is illustrated for a one-tailed test in Figure 6.


Figure 6. Relation between α, z-obt, and p

When the null hypothesis is rejected, the effect is said to be statistically significant . Do not confuse statistical significance with practical significance. A small effect can be highly significant if the sample size is large enough.

Why does the word “significant” in the phrase “statistically significant” mean something so different from other uses of the word? Interestingly, this is because the meaning of “significant” in everyday language has changed. It turns out that when the procedures for hypothesis testing were developed, something was “significant” if it signified something. Thus, finding that an effect is statistically significant signifies that the effect is real and not due to chance. Over the years, the meaning of “significant” changed, leading to the potential misinterpretation.

Review: Steps of the Hypothesis Testing Process

The process of testing hypotheses follows a simple four-step procedure. This process will be what we use for the remainder of the textbook and course, and though the hypotheses and statistics we use will change, this process will not.

Step 1: State the Hypotheses

Your hypotheses are the first thing you need to lay out. Otherwise, there is nothing to test! You have to state the null hypothesis (which is what we test) and the alternative hypothesis (which is what we expect). These should be stated mathematically as they were presented above AND in words, explaining in normal English what each one means in terms of the research question.

Step 2: Find the Critical Values

Next, we formally lay out the criteria we will use to test our hypotheses. There are two pieces of information that inform our critical values: α, which determines how much of the area under the curve composes our rejection region, and the directionality of the test, which determines where the region will be.

Step 3: Compute the Test Statistic

Once we have our hypotheses and the standards we use to test them, we can collect data and calculate our test statistic, in this case z . This step is where the vast majority of differences in future chapters will arise: different tests used for different data are calculated in different ways, but the way we use and interpret them remains the same.

Step 4: Make the Decision

Finally, once we have our obtained test statistic, we can compare it to our critical value and decide whether we should reject or fail to reject the null hypothesis. When we do this, we must interpret the decision in relation to our research question, stating what we concluded, what we based our conclusion on, and the specific statistics we obtained.

Example: Movie Popcorn

Let's see how hypothesis testing works in action by working through an example. Say that a movie theater owner likes to keep a very close eye on how much popcorn goes into each bag sold, so he knows that the average bag has 8 cups of popcorn and that this varies a little bit, about half a cup. That is, the known population mean is μ = 8.00 and the known population standard deviation is σ = 0.50. The owner wants to make sure that the newest employee is filling bags correctly, so over the course of a week he randomly assesses 25 bags filled by the employee to test for a difference (n = 25). He doesn't want bags overfilled or underfilled, so he looks for differences in both directions. This scenario has all of the information we need to begin our hypothesis testing procedure.

Step 1: State the Hypotheses

The owner is looking for a difference in the mean cups of popcorn per bag compared to the population mean of 8. We will need both a null and an alternative hypothesis, written both mathematically and in words. We'll always start with the null hypothesis:

H0: There is no difference in the cups of popcorn per bag from this employee
H0: μ = 8.00

Notice that we phrase the hypothesis in terms of the population parameter μ, which in this case would be the true average cups per bag filled by the new employee. Our assumption of no difference, the null hypothesis, is that this mean is exactly the same as the known population mean value we want it to match, 8.00. Now let's do the alternative:

HA: There is a difference in the cups of popcorn per bag from this employee
HA: μ ≠ 8.00

In this case, we don’t know if the bags will be too full or not full enough, so we do a two-tailed alternative hypothesis that there is a difference.

Step 2: Find the Critical Values

Our critical values are based on two things: the directionality of the test and the level of significance. We decided in step 1 that a two-tailed test is the appropriate directionality. We were given no information about the level of significance, so we assume that α = 0.05 is what we will use. As stated earlier in the chapter, the critical values for a two-tailed z-test at α = 0.05 are z* = ±1.96. This will be the criterion we use to test our hypothesis. We can now draw out our distribution so we can visualize the rejection region and make sure it makes sense.


Figure 7: Rejection region for z* = ±1.96

Step 3: Calculate the Test Statistic

Now we come to our formal calculations. Let's say that the manager collects data and finds that the average cups of this employee's popcorn bags is X̄ = 7.75 cups. We can now plug this value, along with the values presented in the original problem, into our equation for z:

z = (7.75 − 8.00) / (0.50/√25) = −0.25 / 0.10 = −2.50

So our test statistic is z = −2.50, which we can draw onto our rejection region distribution:


Figure 8: Test statistic location

Step 4: Make the Decision

Looking at Figure 8, we can see that our obtained z-statistic falls in the rejection region. We can also directly compare it to our critical value: in terms of absolute value, |−2.50| > 1.96, so we reject the null hypothesis. We can now write our conclusion:

Based on the 25 bags of popcorn sampled, the average bag from this employee (X̄ = 7.75 cups) was statistically significantly different from the population average, z = −2.50, p < .05.

When we write our conclusion, we write out the words to communicate what it actually means, but we also include the sample mean we calculated (the exact location doesn't matter, just somewhere that flows naturally and makes sense) and the z-statistic and p-value. We don't know the exact p-value, but we do know that because we rejected the null, it must be less than α.
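As an aside, the entire popcorn test can be mirrored in a few lines of code; this is a sketch of the same arithmetic (not part of the textbook itself), assuming SciPy is available:

```python
from math import sqrt
from scipy.stats import norm

mu, sigma, n, xbar, alpha = 8.00, 0.50, 25, 7.75, 0.05

z = (xbar - mu) / (sigma / sqrt(n))  # -2.50
z_crit = norm.ppf(1 - alpha / 2)     # 1.96 for a two-tailed test
p = 2 * norm.cdf(-abs(z))            # two-tailed p-value, ~0.0124

print(z, p, abs(z) > z_crit)         # True -> reject H0
```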

Effect Size

When we reject the null hypothesis, we are stating that the difference we found was statistically significant, but we have mentioned several times that this tells us nothing about practical significance. To get an idea of the actual size of what we found, we can compute a new statistic called an effect size. Effect sizes give us an idea of how large, important, or meaningful a statistically significant effect is.

For mean differences like we calculated here, our effect size is Cohen’s d :

d = (X̄ − μ) / σ

Effect sizes are incredibly useful and provide important information and clarification that overcomes some of the weaknesses of hypothesis testing. Whenever you find a significant result, you should always calculate an effect size.

Table 1. Interpretation of Cohen's d

  • d around 0.2: small effect
  • d around 0.5: medium effect
  • d around 0.8: large effect

Example: Office Temperature

Let's do another example to solidify our understanding. Let's say that the office building you work in is supposed to be kept at 74 degrees Fahrenheit but is allowed to vary by 1 degree in either direction. You suspect that, as a cost-saving measure, the temperature was secretly set higher. You set up a formal way to test your hypothesis.

You start by laying out the null hypothesis:

H0: There is no difference in the average building temperature
H0: μ = 74

Next you state the alternative hypothesis. You have reason to suspect a specific direction of change, so you make a one-tailed test:

HA: The average building temperature is higher than claimed
HA: μ > 74


Now that you have everything set up, you spend one week collecting temperature data.

You calculate the average of these scores to be 𝑋̅ = 76.6 degrees. You use this to calculate the test statistic, using μ = 74 (the supposed average temperature), σ = 1.00 (how much the temperature should vary), and n = 5 (how many data points you collected):

z = (76.60 − 74.00) / (1.00/√5) = 2.60 / 0.45 = 5.78

This value falls so far into the tail that it cannot even be plotted on the distribution!

image

Figure 9: Obtained z-statistic

You compare your obtained z-statistic, z = 5.78, to the critical value, z* = 1.645, and find that z > z*. Therefore you reject the null hypothesis, concluding: Based on 5 observations, the average temperature (X̄ = 76.6 degrees) is statistically significantly higher than it is supposed to be, z = 5.78, p < .05.

d = (76.60 − 74.00) / 1.00 = 2.60

The effect size you calculate is definitely large, meaning someone has some explaining to do!

Example: Different Significance Level

First, let’s take a look at an example phrased in generic terms, rather than in the context of a specific research question, to see the individual pieces one more time. This time, however, we will use a stricter significance level, α = 0.01, to test the hypothesis.

We will use 60 as an arbitrary null hypothesis value:
H0: The average score does not differ from the population
H0: μ = 60

We will assume a two-tailed test:
HA: The average score does differ
HA: μ ≠ 60

We have seen the critical values for z-tests at the α = 0.05 level of significance several times. To find the values for α = 0.01, we will go to the standard normal table and find the z-score cutting off 0.005 (0.01 divided by 2 for a two-tailed test) of the area in the tail, which is z* = ±2.575. Notice that this cutoff is much higher than it was for α = 0.05. This is because we need much less of the area in the tail, so we need to go very far out to find the cutoff. As a result, it will require a much larger effect or a much larger sample size to reject the null hypothesis.

We can now calculate our test statistic. The average of 10 scores is M = 60.40, with µ = 60. We will use σ = 10 as our known population standard deviation. From this information, we calculate our z-statistic as:

z = (60.40 − 60.00) / (10/√10) = 0.40 / 3.16 = 0.13

Our obtained z-statistic, z = 0.13, is very small. It is much less than our critical value of 2.575. Thus, this time, we fail to reject the null hypothesis. Our conclusion would look something like:

Based on the sample of 10 scores, the sample mean (M = 60.40) was not statistically significantly different from the population mean, z = 0.13, p > 0.01.

Notice two things about the end of the conclusion. First, we wrote that p is greater than, instead of p is less than, as we did in the previous two examples. This is because we failed to reject the null hypothesis. We don't know exactly what the p-value is, but we know it must be larger than the α level we used to test our hypothesis. Second, we used 0.01 instead of the usual 0.05, because this time we tested at a different level. The number you compare to the p-value should always be the significance level you test at. Because we did not detect a statistically significant effect, we do not need to calculate an effect size. Note: some statisticians suggest always calculating effect size because of the possibility of Type II error. Although the result was not significant, d = (60.4 − 60)/10 = 0.04, which suggests essentially no effect (and thus little concern about a Type II error).

Review: Considerations in Hypothesis Testing

Errors in Hypothesis Testing

Keep in mind that rejecting the null hypothesis is not an all-or-nothing decision. The Type I error rate is affected by the α level: the lower the α level the lower the Type I error rate. It might seem that α is the probability of a Type I error. However, this is not correct. Instead, α is the probability of a Type I error given that the null hypothesis is true. If the null hypothesis is false, then it is impossible to make a Type I error. The second type of error that can be made in significance testing is failing to reject a false null hypothesis. This kind of error is called a Type II error.

Statistical Power

The statistical power of a research design is the probability of rejecting the null hypothesis given the sample size and expected relationship strength. Statistical power is the complement of the probability of committing a Type II error. Clearly, researchers should be interested in the power of their research designs if they want to avoid making Type II errors. In particular, they should make sure their research design has adequate power before collecting data. A common guideline is that a power of .80 is adequate. This means that there is an 80% chance of rejecting the null hypothesis for the expected relationship strength.

Given that statistical power depends primarily on relationship strength and sample size, there are essentially two steps you can take to increase statistical power: increase the strength of the relationship or increase the sample size. Increasing the strength of the relationship can sometimes be accomplished by using a stronger manipulation or by more carefully controlling extraneous variables to reduce the amount of noise in the data (e.g., by using a within-subjects design rather than a between-subjects design). The usual strategy, however, is to increase the sample size. For any expected relationship strength, there will always be some sample large enough to achieve adequate power.
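To make this concrete, the power of a two-tailed one-sample z-test can be computed directly from the effect size and sample size; a sketch under those assumptions (the function is ours, not from the chapter), using SciPy:

```python
from math import sqrt
from scipy.stats import norm

def power_one_sample_z(d, n, alpha=0.05):
    """Power of a two-tailed one-sample z-test when the true effect size is d."""
    z_crit = norm.ppf(1 - alpha / 2)
    shift = d * sqrt(n)  # how far the true mean sits, in standard-error units
    return norm.cdf(-z_crit + shift) + norm.cdf(-z_crit - shift)

print(power_one_sample_z(d=0.5, n=32))  # ~0.81: a medium effect needs ~32 cases
```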

Inferential statistics uses data from a sample of individuals to reach conclusions about the whole population. The degree to which our inferences are valid depends upon how we selected the sample (sampling technique) and the characteristics (parameters) of population data. Statistical analyses assume that sample(s) and population(s) meet certain conditions called statistical assumptions.

It is easy to check assumptions when using statistical software and it is important as a researcher to check for violations; if violations of statistical assumptions are not appropriately addressed then results may be interpreted incorrectly.

Learning Objectives

Having read the chapter, students should be able to:

  • Conduct a hypothesis test using a z-statistic: state the hypotheses, locate the critical region, and make a statistical decision.
  • Explain the purpose of measuring effect size and power, and be able to compute Cohen's d.

Exercises – Ch. 10

  • List the main steps for hypothesis testing with the z-statistic. When and why do you calculate an effect size?
  • Decide whether you would reject or fail to reject the null hypothesis in each of the following cases:
      • z = 1.99, two-tailed test at α = 0.05
      • z = 1.99, two-tailed test at α = 0.01
      • z = 1.99, one-tailed test at α = 0.05
  • You are part of a trivia team and have tracked your team's performance since you started playing, so you know that your scores are normally distributed with μ = 78 and σ = 12. Recently, a new person joined the team, and you think the scores have gotten better. Use hypothesis testing to see if the average score has improved based on the following 9 weeks' worth of score data: 82, 74, 62, 68, 79, 94, 90, 81, 80.
  • A study examines self-esteem and depression in teenagers. A sample of 25 teens with low self-esteem are given the Beck Depression Inventory. The average score for the group is 20.9. For the general population, the average score is 18.3 with σ = 12. Use a two-tailed test with α = 0.05 to examine whether teenagers with low self-esteem show significant differences in depression.
  • You get hired as a server at a local restaurant, and the manager tells you that servers’ tips are $42 on average but vary about $12 (μ = 42, σ = 12). You decide to track your tips to see if you make a different amount, but because this is your first job as a server, you don’t know if you will make more or less in tips. After working 16 shifts, you find that your average nightly amount is $44.50 from tips. Test for a difference between this value and the population mean at the α = 0.05 level of significance.

Answers to Odd-Numbered Exercises – Ch. 10

1. List hypotheses. Determine the critical region. Calculate z. Compare z to the critical region. Draw a conclusion. We calculate an effect size when we find a statistically significant result to see if our result is practically meaningful or important.

5. Step 1: H0: μ = 42, "My average tips do not differ from other servers"; HA: μ ≠ 42, "My average tips do differ from others"

Introduction to Statistics for Psychology Copyright © 2021 by Alisa Beyer is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.


How to Use the Z-Score Table (Standard Normal Table)


A Z-score table, also called the standard normal table or z-score chart, is a mathematical table that tells you the proportion of values (usually given as a decimal) that fall below, i.e., to the left of, a given Z-score on a standard normal distribution (SND).


There are two types of z-score tables:

Positive Z-score Table : Used when the Z-score is positive and above the mean. A positive Z-score table allows you to find the percentage or probability of all values occurring below a given positive Z-score in a standard normal distribution.

Negative Z-score Table : Used when the Z-score is negative and below the mean. A negative Z-score table allows you to find the percentage or probability of all values occurring below a given negative Z-score in a standard normal distribution. 

Each type of table typically includes values for both the whole number and tenth place of the Z-score in the rows (e.g., -3.3, -3.2, …, 3.2, 3.3) and for the hundredth place in the columns (e.g., 0.00, 0.01, …, 0.09).

A Z-score table can be used to determine if a score is statistically significant by providing a way to find the p -value associated with a given Z-score.

The p-value is the probability of obtaining a result at least as extreme as the one observed, assuming the null hypothesis is true.

How To Read a Z-Score Table

Reading a Z-score table might initially seem tricky, but it becomes pretty straightforward once you understand the layout.

There are two kinds of Z-tables: for “less than” probabilities and for “more than” probabilities. The “less than” table is the most commonly used one.

A Z-score table shows the percentage of values (usually a decimal figure) to the left of a given Z-score on a standard normal distribution.

Here’s how you can read it:

Look at the Z-table. The left column will contain the first part of the Z-score (e.g., the whole number and the first digit after the decimal point). Go down this column until you find your Z-score’s first part.

Next, look at the top row of the Z-table. This row will contain the second part of the Z-score (the remaining decimal number). Go across this row until you find your Z-score’s second part.

The intersection of the row from the first part and the column from the second part will give you the value associated with your Z-score. This value represents the proportion of the data set that lies below the value corresponding to your Z-score in a standard normal distribution.

For example, imagine our Z-score value is 1.09.

First, look at the left side column of the z-table to find the value corresponding to one decimal place of the z-score. In this case, it is 1.0.

Then, we look up the remaining number across the table (on the top), which is 0.09 in our example.


The corresponding area is 0.8621, which translates into 86.21% of the standard normal distribution being below (or to the left) of the z-score.


To find the p-value, subtract this from 1 (which gives you 0.1379), then multiply by 2 (which gives you p = 0.2758).

The results are not statistically significant because the p-value is greater than the predetermined significance level (α = 0.05), so we fail to reject the null hypothesis.
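The same lookup-and-double arithmetic works in code, assuming SciPy is available (z = 1.09 from the example above):

```python
from scipy.stats import norm

z = 1.09
area_below = norm.cdf(z)             # 0.8621, the table entry
p_two_tailed = 2 * (1 - area_below)  # 0.2758
print(p_two_tailed <= 0.05)          # False -> fail to reject H0
```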

Right of a positive z-score

To find the area to the right of a positive z-score, begin by reading off the area in the standard normal distribution table.

Since the total area under the bell curve is 1 (as a decimal value equivalent to 100%), we subtract the area from the table from 1.

For example, the area to the left of z = 1.09 is given in the table as .8621. Thus the area to the right of z = 1.09 is 1 – .8621 = .1379.

Left of a negative z-score

If you have a negative z-score, use the same table but disregard the negative sign, then subtract the area from the table from 1.

Right of a negative z-score

If you have a negative z-score, use the same table but disregard the negative sign to find the area above your z-score.

Finding the area between two z-scores

To find the area between two z-scores, we first find the area (proportion of the SND) to the left of the lower z-score and the area (proportion of the SND) to the right of the higher z-score.

Next, we add these two proportions and subtract the sum from 1 (the total area of the SND).
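Equivalently, the area between any two z-scores is the difference of their cumulative areas, which gives the same answer as the add-and-subtract-from-1 method just described; a sketch assuming SciPy:

```python
from scipy.stats import norm

def area_between(z_low, z_high):
    """Proportion of the standard normal distribution between two z-scores."""
    return norm.cdf(z_high) - norm.cdf(z_low)

print(area_between(-1.5, -0.5))  # ~0.2417 (works for any pair of z-scores)
```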

Further Information

Z-Score Table (for positive or negative scores)

  • Finding the proportion of a normal distribution that is above a value by calculating a z-score and using a z-table (Khan Academy video)
  • Statistics for Psychology Book Download


Z-test Calculator


This Z-test calculator is a tool that helps you perform a one-sample Z-test on a population mean. Two forms of this test, a two-tailed Z-test and a one-tailed Z-test, exist and can be used depending on your needs. You can also choose whether the calculator should determine the p-value from the Z-test, or you'd rather use the critical value approach!

Read on to learn more about the Z-test in statistics and, in particular, when to use Z-tests, what the Z-test formula is, and whether to use a Z-test vs. a t-test. As a bonus, we give some step-by-step examples of how to perform Z-tests!

Or you may also check our t-statistic calculator , where you can learn the concept of another essential statistic. If you are also interested in F-test, check our F-statistic calculator .

What is a Z-test?

A one-sample Z-test is one of the most popular location tests. The null hypothesis is that the population mean value is equal to a given number, μ0:

H0: μ = μ0

We perform a two-tailed Z-test if we want to test whether the population mean is not μ0:

H1: μ ≠ μ0

and a one-tailed Z-test if we want to test whether the population mean is less/greater than μ0:

H1: μ < μ0 (left-tailed) or H1: μ > μ0 (right-tailed)

Let us now discuss the assumptions of a one-sample Z-test.

When do I use Z-tests?

You may use a Z-test if your sample consists of independent data points and:

the data is normally distributed, and you know the population variance; or

the sample is large, and the data follows a distribution which has a finite mean and variance. You don't need to know the population variance.

The reason these two possibilities exist is that we want the test statistic to follow the standard normal distribution N(0, 1). In the former case, it is an exact standard normal distribution, while in the latter it is approximately so, thanks to the central limit theorem.

The question remains, "When is my sample considered large?" Well, there's no universal criterion. In general, the more data points you have, the better the approximation works. Statistics textbooks recommend having no fewer than 50 data points, while 30 is considered the bare minimum.

Z-test formula

Let x1, ..., xn be an independent sample following the normal distribution N(μ, σ²), i.e., with mean equal to μ and variance equal to σ².

We pose the null hypothesis, H0: μ = μ0.

We define the test statistic, Z, as:

Z = (x̄ − μ0) √n / σ

where:

  • x̄ is the sample mean, i.e., x̄ = (x1 + ... + xn) / n;
  • μ0 is the mean postulated in H0;
  • n is the sample size; and
  • σ is the population standard deviation.

In what follows, the uppercase Z stands for the test statistic (treated as a random variable), while the lowercase z denotes an actual value of Z, computed for a given sample drawn from N(μ, σ²).

If H0 holds, then the sum Sn = x1 + ... + xn follows the normal distribution with mean nμ0 and variance nσ². As Z is the standardization (z-score) of Sn/n, we can conclude that the test statistic Z follows the standard normal distribution N(0, 1), provided that H0 is true. By the way, we have the z-score calculator if you want to focus on this value alone.

If our data does not follow a normal distribution, or if the population standard deviation is unknown (and thus, in the formula for Z, we substitute the population standard deviation σ with the sample standard deviation), then the test statistic Z is not necessarily normal. However, if the sample is sufficiently large, then the central limit theorem guarantees that Z is approximately N(0, 1).

In the sections below, we will explain how to use the value of the test statistic, z, to decide whether or not you should reject the null hypothesis. Two approaches can be used to arrive at that decision: the p-value approach and the critical value approach, and we cover both of them! Which one should you use? In the past, the critical value approach was more popular because it was difficult to calculate the p-value from a Z-test. However, with the help of modern computers, we can do it fairly easily and with decent precision. In general, you are strongly advised to report the p-value of your tests!

p-value from Z-test

Formally, the p-value is the smallest level of significance at which the null hypothesis could be rejected. More intuitively, the p-value answers the question: provided that I live in a world where the null hypothesis holds, how probable is it that the value of the test statistic will be at least as extreme as the z-value I've got for my sample? Hence, a small p-value means that your result is very improbable under the null hypothesis, and so there is strong evidence against the null hypothesis; the smaller the p-value, the stronger the evidence.

To find the p-value, you have to calculate the probability that the test statistic, Z, is at least as extreme as the value we've actually observed, z, provided that the null hypothesis is true. (The probability of an event calculated under the assumption that H0 is true will be denoted as Pr(event | H0).) It is the alternative hypothesis which determines what "more extreme" means:

  • Two-tailed Z-test: extreme values are those whose absolute value exceeds |z|, so those smaller than −|z| or greater than |z|. Therefore, we have p-value = Pr(Z ≤ −|z| | H0) + Pr(Z ≥ |z| | H0). The symmetry of the normal distribution gives p-value = 2 Pr(Z ≥ |z| | H0).
  • Left-tailed Z-test: extreme values are those smaller than z, so p-value = Pr(Z ≤ z | H0).
  • Right-tailed Z-test: extreme values are those greater than z, so p-value = Pr(Z ≥ z | H0).

To compute these probabilities, we can use the cumulative distribution function (cdf) of N(0, 1), which for a real number x is defined as Φ(x) = Pr(Z ≤ x). Also, p-values can be nicely depicted as areas under the probability density function (pdf) of N(0, 1).

With all the knowledge you've got from the previous section, you're ready to compute the p-value for each variant of the Z-test:

  • Two-tailed Z-test: p-value = 2 (1 − Φ(|z|)). From the fact that Φ(−z) = 1 − Φ(z), we deduce that p-value = 2 Φ(−|z|). This is the area under the pdf both to the left of −|z| and to the right of |z|.
  • Left-tailed Z-test: p-value = Φ(z). This is the area under the pdf to the left of our z.
  • Right-tailed Z-test: p-value = 1 − Φ(z). This is the area under the pdf to the right of z.

The decision as to whether or not you should reject the null hypothesis can now be made at any significance level, α, you desire!

if the p-value is less than, or equal to, α, the null hypothesis is rejected at this significance level; and

if the p-value is greater than α, then there is not enough evidence to reject the null hypothesis at this significance level.

Z-test critical values & critical regions

The critical value approach involves comparing the value of the test statistic obtained for our sample, z, to the so-called critical values. These values constitute the boundaries of regions where the test statistic is highly improbable to lie. Those regions are often referred to as the critical regions, or rejection regions. The decision of whether or not you should reject the null hypothesis is then based on whether or not our z belongs to the critical region.

The critical regions depend on the significance level, α, of the test and on the alternative hypothesis. The choice of α is arbitrary; in practice, the values 0.1, 0.05, and 0.01 are most commonly used.

Once we agree on the value of α, we can easily determine the critical regions of the Z-test:

  • Two-tailed Z-test: the critical region is (−∞, −z*] ∪ [z*, ∞), where z* = Φ⁻¹(1 − α/2);
  • Left-tailed Z-test: the critical region is (−∞, −z*], where z* = Φ⁻¹(1 − α); and
  • Right-tailed Z-test: the critical region is [z*, ∞), where z* = Φ⁻¹(1 − α).

To decide the fate of H0, check whether or not your z falls in the critical region:

If yes, then reject H0 and accept H1; and

If no, then there is not enough evidence to reject H0.

As you can see, the formulae for the critical values of Z-tests involve Φ⁻¹, the inverse of the cumulative distribution function (cdf) of N(0, 1).

How to use the one-sample Z-test calculator

Our calculator takes care of all these complicated steps:

Choose the alternative hypothesis: two-tailed or left/right-tailed.

In our Z-test calculator, you can decide whether to use the p-value or the critical regions approach. In the latter case, set the significance level, α.

Enter the value of the test statistic, z. If you don't know it, then you can enter some data that will allow us to calculate z for you:

  • sample mean x̄ (if you have raw data, go to the average calculator to determine the mean);
  • tested mean μ0;
  • sample size n; and
  • population standard deviation σ (or sample standard deviation if your sample is large).

Results appear immediately below the calculator.

If you want to find z based on the p-value, please remember that in the case of two-tailed tests there are two possible values of z: one positive and one negative, and they are opposite numbers. This Z-test calculator returns the positive value in such a case. In order to find the other possible value of z for a given p-value, just take the number opposite to the value of z displayed by the calculator.

Z-test examples

To make sure that you've fully understood the essence of the Z-test, let's go through some examples:

  • A bottle filling machine follows a normal distribution. Its standard deviation, as declared by the manufacturer, is equal to 30 ml. A juice seller claims that the volume poured in each bottle is, on average, one liter, i.e., 1000 ml, but we suspect that in fact the average volume is smaller than that...

Formally, the hypotheses that we set are the following:

H0: μ = 1000 ml

H1: μ < 1000 ml

We went to a shop and bought a sample of 9 bottles. After carefully measuring the volume of juice in each bottle, we've obtained the following sample (in milliliters):

1020, 970, 1000, 980, 1010, 930, 950, 980, 980.

Sample size: n = 9;

Sample mean: x̄ = 980 ml;

Population standard deviation: σ = 30 ml;

Test statistic: z = (980 − 1000) √9 / 30 = −2;

And, therefore, p-value = Φ(−2) ≈ 0.0228.

As 0.0228 < 0.05, we conclude that our suspicions aren't groundless; at the most common significance level, 0.05, we would reject the producer's claim, H0, and accept the alternative hypothesis, H1.
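The bottle example can be replayed in code; a sketch of the same left-tailed test, assuming SciPy is available:

```python
from math import sqrt
from scipy.stats import norm

sample = [1020, 970, 1000, 980, 1010, 930, 950, 980, 980]
mu0, sigma = 1000, 30

n = len(sample)                     # 9
xbar = sum(sample) / n              # 980.0
z = (xbar - mu0) * sqrt(n) / sigma  # -2.0
p = norm.cdf(z)                     # left-tailed p-value, ~0.0228

print(z, p, p <= 0.05)              # True -> reject H0
```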

We tossed a coin 50 times. We got 20 tails and 30 heads. Is there sufficient evidence to claim that the coin is biased?

Clearly, our data follows a Bernoulli distribution, with some success probability p and variance σ² = p(1 − p). However, the sample is large, so we can safely perform a Z-test. We adopt the convention that getting tails is a success.

Let us state the null and alternative hypotheses:

H0: p = 0.5 (the coin is fair; the probability of tails is 0.5)

H1: p ≠ 0.5 (the coin is biased; the probability of tails differs from 0.5)

In our sample we have 20 successes (denoted by ones) and 30 failures (denoted by zeros), so:

Sample size: n = 50;

Sample mean: x̄ = 20/50 = 0.4;

Population standard deviation: σ = √(0.5 × 0.5) = 0.5, because 0.5 is the proportion p hypothesized in H0;

And, therefore, z = (0.4 − 0.5) √50 / 0.5 ≈ −1.414, and p-value = 2 Φ(−1.414) ≈ 0.1573.

Since 0.1573 > 0.1, we don't have enough evidence to reject the claim that the coin is fair, even at a significance level as large as 0.1. In that case, you may safely toss it to your Witcher, or use the coin flip probability calculator to find your chances of getting, e.g., 10 heads in a row (which are extremely low!).
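The coin example works the same way; a sketch of the two-tailed version, assuming SciPy:

```python
from math import sqrt
from scipy.stats import norm

n, tails, p0 = 50, 20, 0.5

xbar = tails / n                   # 0.4, the observed proportion of tails
sigma = sqrt(p0 * (1 - p0))        # 0.5, the standard deviation under H0
z = (xbar - p0) * sqrt(n) / sigma  # ~-1.414
p = 2 * norm.cdf(-abs(z))          # two-tailed p-value, ~0.1573

print(z, p, p <= 0.1)              # False -> fail to reject H0
```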

What is the difference between a Z-test and a t-test?

We use a t-test for testing the population mean of a normally distributed dataset that has an unknown population standard deviation. We get it by replacing the population standard deviation in the Z-test statistic formula with the sample standard deviation, which means that this new test statistic follows (provided that H0 holds) the t-Student distribution with n − 1 degrees of freedom instead of N(0, 1).

When should I use a t-test over the Z-test?

For large samples, the t-Student distribution with n degrees of freedom approaches N(0, 1). Hence, as long as there is a sufficient number of data points (at least 30), it does not really matter whether you use the Z-test or the t-test, since the results will be almost identical. However, for small samples with unknown variance, remember to use the t-test instead of the Z-test.

How do I calculate the Z test statistic?

To calculate the Z test statistic:

  • Compute the arithmetic mean of your sample.
  • From this mean, subtract the mean postulated in the null hypothesis.
  • Multiply by the square root of the sample size.
  • Divide by the population standard deviation.
  • That's it, you've just computed the Z test statistic!
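Those four steps condense into a one-line helper; a sketch (the function name is ours, not the calculator's):

```python
from math import sqrt

def z_statistic(xbar, mu0, n, sigma):
    """(sample mean - postulated mean) * sqrt(sample size) / population std. dev."""
    return (xbar - mu0) * sqrt(n) / sigma

print(z_statistic(xbar=980, mu0=1000, n=9, sigma=30))  # -2.0 (bottle example)
```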




5.3.2: Table of Critical Values of z


Table 1 shows negative z-scores, their probability (p-value), and the percentage of scores that fall below that z-score. Table 2 shows positive z-scores, their probability (p-value), and the percentage of scores that fall below that z-score. If this table is too unwieldy, here is a PDF of a z-score table with only three columns (z-score, p-value, percent) and more than 600 rows of z-scores (instead of Table 1).

Table 1: Negative z-Scores. (CC-BY-SA; modified by Michelle Oja from Jsmura via Wikimedia Commons)

Table 2: Positive z-Scores. (CC-BY-SA; modified by Michelle Oja from Jsmura via Wikimedia Commons)

Attributions & Contributors

  • Jsmura via Wikimedia Commons

A Z table, also referred to as a standard normal table, is a table of the values of the cumulative distribution function of a normal distribution. It tells us the probability that values in a normal distribution lie below, above, or between values on the standard normal distribution. This is useful because, typically, it is necessary to integrate the probability density function (pdf) of a random variable to determine the probabilities of outcomes within an interval; for the case of a normal distribution, this is particularly difficult. However, since all normal distributions can be converted to a standard normal distribution, and since the associated probabilities have already been computed and compiled into Z tables for a standard normal distribution, it is possible to reference Z tables rather than having to integrate the pdf to determine the probability of outcomes within a given interval occurring.

Z-scores and Z distributions

A normally distributed random variable can be standardized by converting its values to Z-scores using the following formula:

z = (x − μ) / σ

where μ is the mean, σ is the standard deviation, and x is the value being converted. The resulting distribution of Z-scores is referred to as a Z distribution (or standard normal distribution), where the Z-score represents the number of standard deviations (not the value of the standard deviation) that a given value is from the mean. For example, a Z-score of 1 means that the value is 1 standard deviation from the mean. A Z-score is used to determine where an observed value lies in a distribution, relative to its mean, and can be positive, negative, or 0:

  • A positive Z-score indicates that a value is above (right of) the mean.
  • A negative Z-score indicates that a value is below (left of) the mean.
  • A Z-score of 0 indicates that a value is equal to the mean.

The figure below shows a normal distribution with μ = 5, σ = 4, and x = 11, as well as its corresponding Z distribution. The x-value (and all values in the normal distribution) is standardized as follows:

z = (11 − 5) / 4 = 1.5

Using a Z table

There are a few different types of Z tables (described below). Generally, the process of using a Z table involves first calculating a Z-score for the value of interest; different Z tables can then be used to determine various probabilities by finding the probability associated with that Z-score. As an example, consider reading a cumulative from mean Z table:

Reading the Z table for Z = 1.32, the probability that a value lies between a Z-score of 0 and 1.32 is 0.40658, or approximately 41%.

The average score on a math exam for a class of 150 students was a 78/100 with a standard deviation of 6. Use a cumulative from mean Z table to find the probability of a score being above or below an 87.

First, convert 87 to a Z-score:

z = (87 – 78) / 6 = 9 / 6 = 1.5

Referencing a cumulative from mean Z table, a Z-score of 1.5 corresponds to a probability of 0.43319, or around 43%. However, this probability only represents the area from the mean up to the Z-score.

Since 50% of values lie below the mean, the probability that a score will be below an 87 is that 50% plus the area from the mean to the Z-score:

P(Z < 1.5) = 0.5 + 0.43319 = 0.93319

The remaining area of the curve represents P(Z > 1.5), and can be found by subtracting P(Z < 1.5) from 1:

P(Z > 1.5) = 1 – 0.93319 = 0.06681

Thus, approximately 93% of scores were below an 87, and only 7% of scores were above an 87.
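If a table is not handy, the same exam-score probabilities can be reproduced with a short Python sketch (scipy is assumed, as above):

```python
from scipy.stats import norm

z = (87 - 78) / 6                  # 1.5
from_mean = norm.cdf(z) - 0.5      # area from the mean to z, as in the table: ~0.43319
below = 0.5 + from_mean            # P(Z < 1.5): ~0.93319
above = 1 - below                  # P(Z > 1.5): ~0.06681
print(round(from_mean, 5), round(below, 5), round(above, 5))
```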

Types of Z tables

There are a few different types of Z tables that provide the probabilities of various Z distributions: cumulative from mean, cumulative, and complementary cumulative.

Cumulative from mean

A cumulative from mean Z table provides the probability that a statistic lies between 0 (the mean) and z1, or P(0 < Z < z1), represented by the area under the curve between the mean and z1.

Using a cumulative from mean Z table, it is possible to find various probabilities. The simplest probability to determine is the probability that an outcome is in the interval 0 < Z < z1, since that is exactly what the table represents.

It is also possible to calculate other probabilities using a cumulative from mean Z table, such as P(Z > z1), by adding or subtracting the appropriate areas under the curve. For example, to determine P(Z > z1), subtract P(0 < Z < z1) from 50%, since a Z distribution has 50% of its values above and 50% below the mean.

Thus, subtracting the area represented by P(0 < Z < z1) from the total area in the right half of the curve yields P(Z > z1). Alternatively, this probability can be read directly off a complementary cumulative Z table (described below). As can be seen, there are a number of ways to determine various probabilities using Z tables; as such, it is important to be aware of the type of Z table being used.

Cumulative

A cumulative Z table provides the probability that a statistic is less than z1, or P(Z < z1), represented by the entire area under the curve to the left of z1.

Using a cumulative Z table, the probabilities can simply be read off the table for the given values of z1. Like any Z table, the cumulative Z table can be used to determine other probabilities as well. For example, P(Z > z1) can be determined by subtracting P(Z < z1) from 1, since the entire area under the curve is 1; this leaves only the area P(Z > z1).

Complementary cumulative

A complementary cumulative Z table provides the probability that a statistic is greater than z1, or P(Z > z1). This can be found as 1 - P(Z < z1).

As with the other types of tables, it is possible to use a complementary cumulative Z table to find other probabilities, such as P(Z < z1), by subtracting P(Z > z1) from 1. As can be seen, it is possible to find various probabilities with each of the types of Z tables; the key is to be aware of which type of Z table is available and what area it represents.
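All three table types can be emulated from the cumulative distribution function alone. The sketch below (again assuming scipy) mirrors the relationships just described:

```python
from scipy.stats import norm

z1 = 1.32
cumulative = norm.cdf(z1)          # P(Z < z1): a cumulative table
from_mean = norm.cdf(z1) - 0.5     # P(0 < Z < z1): a cumulative from mean table
complementary = norm.sf(z1)        # P(Z > z1): a complementary cumulative table
print(round(from_mean, 5))         # 0.40658, the ~41% from the earlier example
print(round(cumulative, 5))        # 0.90658
print(round(complementary, 5))     # 0.09342
```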


Learn Math and Stats with Dr. G

Finding z Critical Values (zc)

In many cases, critical values are required. A critical value often represents a rejection region cut-off value for a hypothesis test; the same value, called a zc value, is used for a confidence interval.

For confidence intervals and two-tailed z-tests, you can use the zTable to determine the critical values (zc).

Example 1: Find the critical values for a 90% Confidence Interval.

NOTICE: A 90% Confidence Interval will have the same critical values (rejection regions) as a two-tailed z test with alpha = .10.

The Critical Values for a 90% confidence or alpha = .10 are +/- 1.645.

Example 2: Find the critical values for a 95% confidence interval. These are the same as the rejection region z-value cut-offs for a two-tailed z test with alpha = .05.

Note that when alpha = .05 we are using a 95% confidence interval. The critical values for a 95% confidence interval or alpha = .05 are +/- 1.96.
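The same critical values can be obtained without a table from the inverse CDF; here is a minimal Python sketch, assuming scipy's norm.ppf:

```python
from scipy.stats import norm

for confidence in (0.90, 0.95):
    alpha = 1 - confidence
    zc = norm.ppf(1 - alpha / 2)     # upper critical value; lower is -zc by symmetry
    print(confidence, round(zc, 3))  # 0.9 -> 1.645, 0.95 -> 1.96
```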

How to Find a Critical z Value

1. Left-Tailed Tests

For a left-tailed test, suppose we need to find the critical z-value for a hypothesis test that would reject the null hypothesis (H0) at a 2.5% significance level. To do this, we want to find, on our normal distribution, the cutoff on the left tail that corresponds to the lower 2.5% of our distribution.

1a. Graphing Calculator

The first way is by using a graphing calculator. First, hit "2nd", then "DISTR" (This is above the button "VARS").


Then scroll down to the third function, which is inverse norm (invNorm). This is the inverse of the normal distribution. We're going to hit "Enter".

Next, we're going to input 0.025, because this is a left-tailed test, so we're looking at the lower 2.5% of our distribution. For this specific calculator (TI-84 Plus), we need to type 0.025 for the area, 0 for mu, and 1 for sigma, because these are the values that correspond to the standard normal distribution. Hit "Enter", and we get a critical z-value of negative 1.96.


At about -1.96, this is the cutoff for the lower 2.5% of our data.


Basically, any z-score below negative 1.96 means we're going to reject the null hypothesis. Any z-score above negative 1.96 falls in the rest of the distribution, meaning we're willing to attribute the variation of our sample from the center of the distribution to chance, and we're going to fail to reject the null hypothesis.

1b. Z-Table

The second method is using a z-table. When using the z-table, we look for our significance level in the table. In this case, remember we were looking at a left-tailed test. This means we need to use the negative z-table with negative z-scores, not positive, because we're looking at the lower half of the distribution. Remember, the significance level is 0.025, or 2.5%, so we are going to look for that value or the closest thing to it. Here it is on a z-table:

A significance level of 0.025 corresponds to a z-score of negative 1.96. Therefore, our z critical value is negative 1.96.

1c. Excel

A third way to find the critical z-value that corresponds to a 2.5% significance level for a left-tailed test is in Excel. All we have to do is go to the "Formulas" tab and insert, under the "Statistical" column, the function "NORM.S.INV".

This is the inverse of the standard normal distribution, and because it's a left-tailed test, we're looking at the lower half of our distribution. We're going to put in 0.025 for the lower 2.5%. Hit "Enter", and notice that we get the same critical z-value that we did using the calculator and the table: negative 1.96.
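The Excel call above is equivalent to the inverse-CDF one-liner below (a Python sketch assuming scipy; NORM.S.INV and invNorm compute the same function):

```python
from scipy.stats import norm

# Left-tailed test at the 2.5% significance level:
print(round(norm.ppf(0.025), 2))   # -1.96
```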

Term to Know

Critical Value: A value that can be compared to the test statistic to decide the outcome of a hypothesis test.

2. Right-Tailed Tests

For a right-tailed test, suppose we need to find the critical z-value for a hypothesis test that would reject the null hypothesis (H0) at a 5% significance level. To do this, we want to find, on our normal distribution, the cutoff on the upper part of the distribution beyond which we will not attribute the difference to chance.

2a. Graphing Calculator

The first way is by using a graphing calculator. First, hit "2nd", then "DISTR" (This is above the button "VARS"). Then scroll down to the third function, which is inverse norm (invNorm). This is the inverse of the normal distribution. We're going to hit "Enter".

The significance level is 5%, but we're not going to put in 0.05 like we did with the left-tailed test, where the significance level was 2.5% and we entered 0.025. In the normal distribution, we always read left to right, and it always goes from 0 percent to 100 percent. We're looking at a right-tailed test, which is the upper portion of our distribution. That cutoff is the top 5% of our distribution. So, 100% minus 5% is going to be 95%. We are actually going to put in the inverse norm of 0.95, and that's going to get us a corresponding critical z-value of about 1.645.


Any z-test statistic that is greater than 1.645 falls in the upper 5% of our distribution, and therefore we would reject the null hypothesis.


2b. Z-Table

The second method uses the z-table. Because we're looking at a right-tailed test, we're going to have positive z-scores since we're looking at the upper half of the distribution. We'll use the positive z-table that corresponds with positive z-scores.

The significance level was 5%, but it was the upper 5%. Remember, this corresponds to the 95th percentile on our distribution. In the table, we need to look for the closest thing to 95%, or 0.95.

This actually falls in between these two values, 0.9495 and 0.9505. This value corresponds to a z-score of 1.6 in the left column, and it falls between the 0.04 and the 0.05 in the top row. When we take the average of 1.64 and 1.65, we get a critical z-value of 1.645.

2c. Excel

A third way to find the critical z-value that corresponds to a 5% significance level for an upper tail test, or a right-tailed test, is by using Excel. Again, go to the "Formulas" tab and insert "NORM.S.INV" under the "Statistical" column. But we're not going to put in 0.05 for the 5%: because we're looking at the upper part of our distribution, this corresponds to the 95th percentile. Enter 0.95, and notice that we get the same critical value we did from our table and our calculator, which is a positive 1.645.

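As before, the same number comes straight from the inverse CDF (Python sketch, scipy assumed):

```python
from scipy.stats import norm

# Right-tailed test at the 5% significance level: use the 95th percentile.
print(round(norm.ppf(0.95), 3))    # 1.645
```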

3. Two-Sided Tests

For a two-sided test, suppose we want to find the critical z-score for a hypothesis test that would reject the null at a 1% significance level. Because it's a two-sided test, we have to divide that 1% into each tail. Therefore, 1% divided by 2 means we're going to be looking for the cutoff at the lower 0.5% of the distribution, and the upper 0.5% of our distribution.

3a. Graphing Calculator

The first way to find this value is with a graphing calculator. Let's go ahead and first find the corresponding critical z-score for the lower part of our distribution. Hit "2nd", "DISTR", "invNorm". This tail is 0.5%, so we're going to put 0.005.


This gives us a corresponding z-score of negative 2.576, which marks the cutoff for the lower 0.5% of the distribution. If we do this correctly, we should get the same z-score, but a positive value, for the 0.5% cutoff in the upper portion of our distribution.

Let's go ahead and do inverse norm again on our calculator. But we can't put in 0.005 because, remember, our distribution reads from 0% to 100%. We actually have to do 100% minus 0.5%, or 99.5%. We are going to put in 0.995 and get a positive 2.576.


This positive 2.576 corresponds to the upper 0.5% of our distribution.


Any z-score that we would calculate that would be greater than a positive 2.576 or less than a negative 2.576 means we would reject the null hypothesis.

3b. Z-Table

Using our z-table, we first look for the corresponding critical value for the lower half of our distribution, since it's a two-sided test. Remember, we're not going to look for the closest thing to 1%, but we're going to look for the closest value to 0.5%, or 0.005.

Let's use the table to find the lower critical value for our two-sided test at a 1% significance level: we're going to find the closest thing to 0.5%.

The closest value to 0.005 is between these two values, 0.0051 and 0.0049. This corresponds to a negative 2.5 in the left column, and in between the 0.07 and the 0.08 in the top row. If we're using the table, we're going to get an average critical z-value of negative 2.575, which is quite close to what the calculator gave us. Remember, sometimes the table can just give us an estimate.

Let's use the table to find the upper critical value for our two-sided test at a 1% significance level: we're going to try to find the closest thing to 100% minus 0.5%, or 99.5%.

The closest value to 99.5%, or 0.995, is in between these two values, 0.9949 and 0.9951. This corresponds to a positive 2.5 in the left column, and falling between the 0.07 and the 0.08 in the top row. If we're using the table, we would get a critical z-value of a positive 2.575, taking the average between those two values.

3c. Excel

In Excel, we're going to find the two critical z-values that correspond to the 1% significance level for our two-sided test. Again, go under your "Formulas" tab and insert "NORM.S.INV" under the "Statistical" column. We'll first find the lower critical value, which corresponds to the lower 0.5%, so enter 0.005.


You can see that we get our first critical z-value of negative 2.576. Now, if we do this correctly, we should get a positive 2.576. Again, we're going to insert "NORM.S.INV" to get the second critical value for the upper part of our distribution. The upper percentage that corresponds to the top 0.5% is 99.5%, so enter 0.995.


We get the positive critical z-value of 2.576.
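For completeness, the two-sided values also drop out of the inverse CDF directly (Python sketch, scipy assumed):

```python
from scipy.stats import norm

# Two-sided test at the 1% significance level: split alpha across both tails.
alpha = 0.01
print(round(norm.ppf(alpha / 2), 3))       # -2.576
print(round(norm.ppf(1 - alpha / 2), 3))   #  2.576
```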

Summary: We calculated a critical z-score for a left-tailed, right-tailed, and two-tailed test, utilizing three methods for each test: graphing calculator, z-table, and Excel. Good luck!

Source: This tutorial was authored by Jonathan Osters for Sophia Learning. Please see our Terms of Use.



Null Hypothesis

Hypothesis & Hypothesis Testing

A hypothesis is an educated guess about anything which can be tested or observed; it can be defined as an assumption made based on evidence.

In statistics, a hypothesis is an assumption that we make regarding population parameters based on sample statistics, and testing the credibility of that assumption is known as hypothesis testing.

A hypothesis statement usually consists of an independent variable and a dependent variable in which the change in the dependent variable with respect to the change in the independent variable can be observed.

Examples of a hypothesis statement can be as follows:

  • If I study more, I will get good marks.
  • If I drink coffee, I will be able to concentrate more.

A hypothesis test is used to test the relation between two research variables. A hypothesis statement is supported or contradicted by the hypothesis test. It is used to learn about a population from sample data, taken randomly from a large population whose parameters could not be easily calculated.

(As often happens in real life, a population can be too large to perform any mathematical operations, and direct calculation of the population parameters can be practically difficult. For instance, if we have to know the ratings of a product from the people in a country, we cannot possibly take the ratings of each individual and then come to the conclusion. Instead, we randomly select a group of people who could be treated as a sample representing the entire population. This process will make the calculation a lot easier and faster and more importantly, feasible.)

A hypothesis test will have two statements that are mutually exclusive. The sample data help us to conclude which of the two assumed population parameters is true. The two mutually exclusive hypotheses are the null hypothesis and the alternate hypothesis.

Hypothesis testing looks for any sort of difference or effect in a population parameter. If there is no difference between the sample data observations and the population parameters, then it favors the null hypothesis.

On the other hand, if the sample data provide evidence against the null hypothesis value, then the test favors the alternate hypothesis.

A hypothesis test is usually carried out to contradict the null hypothesis. However, a result in the sample data that favors the alternate hypothesis might be due to chance and does not necessarily reflect the population parameters. We have to know whether the effect is significant and meaningful or whether it occurred by chance; the hypothesis test accounts for this probability of making errors as well.

There are 4 major steps in hypothesis testing:

  • Stating the two mutually exclusive statements.
  • Formulating an analysis plan to carry out the test.
  • Carrying out the test.
  • Analyzing the result and, based on it, accepting or rejecting the null hypothesis.

Types of Hypothesis

There are different types of hypothesis:

  • Simple hypothesis: A simple hypothesis defines the relationship between one dependent variable and a single independent variable. The two examples above are simple hypotheses.
  • Complex hypothesis: Complex hypothesis defines the relationship between more than one independent and dependent variables. For example, ‘the consumption of junk food and lack of exercise will lead to weight gain, obesity, and health problems’ is a complex hypothesis statement.
  • Directional hypothesis: Directional hypothesis defines the relationship between the variables as well as their nature. It shows the direction of the effect.
  • Null hypothesis: A contradictory statement that states there is no relationship between the variables.

What is the Null Hypothesis?

It is a negative statement which states that there is no relationship between the dependent and the independent variable. In every experiment, the researchers work to disprove, reject, and contradict the null hypothesis.

Suppose in an experiment we are experimenting with the relationship between the amount of coffee consumption of people and their concentration level, our alternate hypothesis will be that there is a connection between them. But the null hypothesis in this experiment will be that there is no relation between the concentration level of people and their coffee consumption at all. As it is clearly seen, the alternate hypothesis is the exact opposite of the null hypothesis. That is, they are mutually exclusive.

The null hypothesis represents the accepted population parameters. It is denoted by the symbol H0, which can be read as H-null, H-zero, or H-naught. It is often associated with the equals sign, as there is no approximation or uncertainty in this statement.

A null hypothesis is useful in all experiments, as it can be tested to determine whether any relationship exists between the dependent variable and the independent variable. It also helps to advance a theory, and it is useful for testing whether the results obtained after the test are significant or due to chance.

The null hypothesis is a statement based on evidence that is not, by itself, strong enough to be accepted as true beyond doubt; further testing is needed. We can only reject the null hypothesis if the evidence produced by the test is significant and strong, ruling out the possibility that the result is an error.

There are different types of null hypothesis:

  • Simple hypothesis: it absolutely defines and specifies the population. The sample distribution in this case will be a function of the sample size, N.
  • Composite hypothesis: it does not specify the population distribution completely.
  • Exact hypothesis: defines the exact value of the parameter.
  • Inexact hypothesis: specifies a range or interval rather than a definite value.

How the Null Hypothesis Works

As previously mentioned, a null statement must be proved correct through further tests. Assuming that the null hypothesis is true, we collect data and determine the probabilities of the collected data of a random sample. This forms the principle of the null hypothesis.

After the analysis, if the observations and results provide sufficiently strong evidence against the null hypothesis, it is rejected. On the other hand, if the evidence does not contradict the null hypothesis, it is accepted.

The result of the experiment after the analysis of the data is treated as evidence against either the null hypothesis or the alternate hypothesis. We don’t have to believe that the null hypothesis is absolutely true or false to conduct the research. We just have to assume that there might be some relationship between the variables that are analyzed.

Some statistical tools are used in the testing of the null hypothesis, where the data is analyzed to determine the extent of deviation of the data from the null hypothesis. However, in some cases the evidence doesn't contradict the null statement strongly. In such cases, where we cannot be sure whether the null hypothesis is true or false, we often accept it as true.

In simpler terms, we only reject the null hypothesis if the evidence strongly supports the alternate hypothesis. If the evidence is weak, it means that the experiment simply fails to prove the relationship of the phenomenon. Thus, the reliability of the evidence is of the utmost importance.

Also, it is important to note that the null hypothesis cannot be proven true by research and experiment. We can accept it as true until reliable evidence comes up to contradict it, but we cannot mathematically prove that the null hypothesis is absolutely true. When we accept the null hypothesis, we are really concluding that the alternate hypothesis has not been shown to be true. Similarly, if the alternate hypothesis is shown to be true, we can certainly reject the null hypothesis as false.

Now, why consider a null hypothesis at all? If we are only interested in proving the relationship between the variables, why bother to state a null hypothesis? The answer is that when we do a scientific experiment, we have to test theories systematically, making sure that there is no flaw in the results. By stating the null hypothesis, we ensure that the new hypothesis is tested against a baseline of no effect; it is a systematic way to ensure that the research is not flawed.

Significance Level / Significance Value

The significance level is an important parameter in the hypothesis test. Unlike the p-value, the significance level is not calculated for a test; it is a value that is chosen for the test. The significance level is referred to as alpha.

The significance level can be defined as the measure with which we determine how strongly the sample data must contradict the null hypothesis before we reject it. The result of the sample data gives evidence against the null hypothesis, and the significance level is the parameter with which we determine whether that evidence is valid or not. It is a probabilistic value: the probability of declaring a statistically significant result when the null hypothesis is in fact true, that is, the probability of rejecting the null hypothesis even when it is true. Rejecting the null hypothesis when it is true is called a Type I error, so the significance level is equal to the Type I error rate.

Hence the significance level should be low, so as to avoid the mistake of treating an effect in the sample data as meaningful when it is not. Usually, the alpha value is chosen as 0.05. This implies that you have a 5% chance of declaring an effective result even when the null hypothesis is true.

Null Hypothesis & Significance Value

Statistically, the significance of a result is analyzed using a value called the significance value: the probability of rejecting the null hypothesis when it is actually true. Denoted by alpha (α), it is the measure of the strength of evidence required in the sample before rejecting the null hypothesis. Usually, the alpha level is taken at 5%. This means that the researcher accepts a 5% probability of rejecting the null hypothesis when it is correct.

Let us look at how the alpha value is used to know the significance of the effect of the data.

A hypothesis test can be carried out by calculating the p-value of the sample. The p-value is the probability of obtaining the observed result by chance alone.

If we want to prove and accept the alternate hypothesis, we must come up with evidence suggesting that the result is reliable and it is not occurred by chance.

An experiment is conducted to favor the alternate hypothesis and contradict the null hypothesis. Hence the p-value should be low for the acceptance of the alternate hypothesis. That is, if the p-value of the sample is less than the significant value, then the evidence is strong enough to support the alternate hypothesis. It means that the result is significant. Hence the rejection of the null hypothesis.

On the other hand, if the p-value of the sample is greater than the significant value, it strongly contradicts the alternate hypothesis. It means the result is not significant. Hence it accepts the null hypothesis.

In the probability distribution graph, there is a region called the rejection region defined by the significance value, wherein the null hypothesis is rejected.

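To make the p-value-versus-alpha decision rule concrete, here is a small end-to-end z-test sketch in Python; the population values and sample numbers are hypothetical, invented purely for illustration, and scipy is assumed:

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical example: the null hypothesis says mu = 100 (sigma = 15 known);
# a random sample of n = 50 has a sample mean of 104.
mu0, sigma, n, xbar = 100, 15, 50, 104
z = (xbar - mu0) / (sigma / sqrt(n))   # test statistic: ~1.886
p_value = 2 * norm.sf(abs(z))          # two-sided p-value: ~0.059

alpha = 0.05
if p_value < alpha:
    print("Reject the null hypothesis")    # the result is significant
else:
    print("Accept the null hypothesis")    # the evidence is not strong enough
```

Here the p-value (about 0.059) is slightly greater than alpha, so the test accepts the null hypothesis even though the sample mean differs from 100.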

Type I and Type II Error

Type I Error: A Type I Error occurs when we reject the null hypothesis when it is true. This is more likely when a large alpha value is chosen. It can be called a false positive, and it can be made less likely by choosing a low significance level.

Type II Error: A Type II Error is the exact opposite of a Type I error. Here we do not reject the null hypothesis when it is actually false. This may occur if the alpha value is set too low. Choosing the alpha value appropriately balances the Type I and Type II error rates.

For example, if we take the alpha value as 12%, then there is more chance of rejecting the null hypothesis when it is true: a Type I error. However, if an alpha value as low as 1% is taken, then we may accept the null hypothesis even when it is false: a Type II error. Hence it is standard to take 5% for alpha, which is neither too high nor too low.
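The claim that alpha equals the Type I error rate can be checked by simulation. The sketch below (hypothetical numbers; numpy and scipy assumed) draws many samples from a population where the null hypothesis is true and counts how often a two-sided z-test rejects it:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu0, sigma, n, alpha = 100, 15, 50, 0.05
zc = norm.ppf(1 - alpha / 2)               # two-sided critical value, ~1.96

trials, rejections = 10_000, 0
for _ in range(trials):
    sample = rng.normal(mu0, sigma, n)     # the null hypothesis is true by construction
    z = (sample.mean() - mu0) / (sigma / np.sqrt(n))
    if abs(z) > zc:                        # z falls in the rejection region
        rejections += 1

print(rejections / trials)                 # ~0.05: Type I errors occur at rate alpha
```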

Why is the Null Hypothesis Called 'Null'?

The null hypothesis is called so because it is the commonly accepted statement that researchers work to nullify or cancel. A null statement is a statement of no effect; that is, it is the statement that nullifies an effect. It does not mean that the statement itself is null. Rather, it is the statement that favors a null effect in the experiment. The null hypothesis is a statement that nullifies or cancels the relationship between the dependent and independent variable. This is where the null hypothesis contradicts the alternate hypothesis.

Null Hypothesis vs Alternate Hypothesis

The major differences between the null hypothesis and the alternate hypothesis are as follows:

  • The null hypothesis is a statement that has no effect and states that there is no relation between the dependent and the independent variable. The alternate hypothesis is a statement of effect and states that there is some relation between the variables.
  • The null hypothesis is denoted by H0. The alternate hypothesis is denoted by Ha or H1.
  • Under the null hypothesis, observed results are attributed to chance. Under the alternate hypothesis, they are attributed to some effect in the experiment.
  • The equal sign represents the mathematical formulation of the null hypothesis. On the other hand, some inequality signs such as greater or less than are used to represent the mathematical formula of the alternate hypothesis.
  • The null value is accepted only if the p-value is greater than the significance value. A smaller p-value than the significance value will support the alternate hypothesis.

