
Validity & Reliability In Research

A Plain-Language Explanation (With Examples)

By: Derek Jansen (MBA) | Expert Reviewer: Kerryn Warren (PhD) | September 2023

Validity and reliability are two related but distinctly different concepts within research. Understanding what they are and how to achieve them is critically important to any research project. In this post, we’ll unpack these two concepts as simply as possible.

This post is based on our popular online course, Research Methodology Bootcamp. In the course, we unpack the basics of methodology using straightforward language and loads of examples. If you’re new to academic research, you definitely want to use this link to get 50% off the course (limited-time offer).

Overview: Validity & Reliability

  • The big picture
  • Validity 101
  • Reliability 101 
  • Key takeaways

First, The Basics…

Let’s start with a big-picture view and then we can zoom in to the finer details.

Validity and reliability are two incredibly important concepts in research, especially within the social sciences. Both validity and reliability have to do with the measurement of variables and/or constructs – for example, job satisfaction, intelligence, productivity, etc. When undertaking research, you’ll often want to measure these types of constructs and variables and, at the simplest level, validity and reliability are about ensuring the quality and accuracy of those measurements.

As you can probably imagine, if your measurements aren’t accurate or there are quality issues at play when you’re collecting your data, your entire study will be at risk. Therefore, validity and reliability are very important concepts to understand (and to get right). So, let’s unpack each of them.


What Is Validity?

In simple terms, validity (also called “construct validity”) is all about whether a research instrument accurately measures what it’s supposed to measure.

For example, let’s say you have a set of Likert scales that are supposed to quantify someone’s level of overall job satisfaction. If this set of scales focused purely on one dimension of job satisfaction, say pay satisfaction, this would not be a valid measurement, as it only captures one aspect of the multidimensional construct. In other words, pay satisfaction alone is only one contributing factor toward overall job satisfaction, and therefore it’s not a valid way to measure someone’s job satisfaction.


Oftentimes in quantitative studies, the way in which the researcher or survey designer interprets a question or statement can differ from how the study participants interpret it. Given that respondents don’t have the opportunity to ask clarifying questions when taking a survey, it’s easy for these sorts of misunderstandings to crop up. Naturally, if the respondents are interpreting the question in the wrong way, the data they provide will be pretty useless. Therefore, ensuring that a study’s measurement instruments are valid – in other words, that they are measuring what they intend to measure – is incredibly important.

There are various types of validity and we’re not going to go down that rabbit hole in this post, but it’s worth quickly highlighting the importance of making sure that your research instrument is tightly aligned with the theoretical construct you’re trying to measure. In other words, you need to pay careful attention to how the key theories within your study define the thing you’re trying to measure – and then make sure that your survey presents it in the same way.

For example, sticking with the “job satisfaction” construct we looked at earlier, you’d need to clearly define what you mean by job satisfaction within your study (and this definition would of course need to be underpinned by the relevant theory). You’d then need to make sure that your chosen definition is reflected in the types of questions or scales you’re using in your survey. Simply put, you need to make sure that your survey respondents are perceiving your key constructs in the same way you are. Or, even if they’re not, that your measurement instrument is capturing the necessary information that reflects your definition of the construct at hand.

If all of this talk about constructs sounds a bit fluffy, be sure to check out Research Methodology Bootcamp, which will provide you with a rock-solid foundational understanding of all things methodology-related. Remember, you can take advantage of our 60% discount offer using this link.


What Is Reliability?

As with validity, reliability is an attribute of a measurement instrument – for example, a survey, a weight scale or even a blood pressure monitor. But while validity is concerned with whether the instrument is measuring the “thing” it’s supposed to be measuring, reliability is concerned with consistency and stability. In other words, reliability reflects the degree to which a measurement instrument produces consistent results when applied repeatedly to the same phenomenon, under the same conditions.

As you can probably imagine, a measurement instrument that achieves a high level of consistency is naturally more dependable (or reliable) than one that doesn’t – in other words, it can be trusted to provide consistent measurements. And that, of course, is what you want when undertaking empirical research. If you think about it within a more domestic context, just imagine if you found that your bathroom scale gave you a different number every time you hopped on and off of it – you wouldn’t feel too confident in its ability to measure the variable that is your body weight 🙂

It’s worth mentioning that reliability also extends to the person using the measurement instrument. For example, if two researchers use the same instrument (let’s say a measuring tape) and they get different measurements, there’s likely an issue in terms of how one (or both) of them are using the measuring tape. So, when you think about reliability, consider both the instrument and the researcher as part of the equation.

As with validity, there are various types of reliability and various tests that can be used to assess the reliability of an instrument. A popular one that you’ll likely come across for survey instruments is Cronbach’s alpha, which is a statistical measure that quantifies the degree to which items within an instrument (for example, a set of Likert scales) measure the same underlying construct. In other words, Cronbach’s alpha indicates how closely related the items are and whether they consistently capture the same concept.
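To make that a little more tangible, here’s a minimal sketch of how Cronbach’s alpha could be computed for a set of Likert items. The responses below are hypothetical (not from any real study), and in practice you’d more likely lean on a stats package – but the arithmetic underneath is just this:

```python
import numpy as np

def cronbachs_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_var = scores.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical data: 6 respondents x 4 job-satisfaction Likert items (1-5)
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
])
print(f"Cronbach's alpha = {cronbachs_alpha(responses):.2f}")
```

As a rough rule of thumb, values of around 0.7 and above are often treated as acceptable internal consistency, though what counts as “good enough” depends on your field and the stakes of the measurement.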


Recap: Key Takeaways

Alright, let’s quickly recap to cement your understanding of validity and reliability:

  • Validity is concerned with whether an instrument (e.g., a set of Likert scales) is measuring what it’s supposed to measure.
  • Reliability is concerned with whether that measurement is consistent and stable when measuring the same phenomenon under the same conditions.

In short, validity and reliability are both essential to ensuring that your data collection efforts deliver high-quality, accurate data that help you answer your research questions. So, be sure to always pay careful attention to the validity and reliability of your measurement instruments when collecting and analysing data. As the adage goes, “rubbish in, rubbish out” – make sure that your data inputs are rock-solid.


Understanding Reliability and Validity

These related research issues ask us to consider whether we are studying what we think we are studying and whether the measures we use are consistent.

Reliability

Reliability is the extent to which an experiment, test, or any measuring procedure yields the same result on repeated trials. Without the agreement of independent observers able to replicate research procedures, or the ability to use research tools and procedures that yield consistent measurements, researchers would be unable to satisfactorily draw conclusions, formulate theories, or make claims about the generalizability of their research. In addition to its important role in research, reliability is critical for many parts of our lives, including manufacturing, medicine, and sports.

Reliability is such an important concept that it has been defined in terms of its application to a wide range of activities. For researchers, four key types of reliability are:

Equivalency Reliability

Equivalency reliability is the extent to which two items measure identical concepts at an identical level of difficulty. Equivalency reliability is determined by relating two sets of test scores to one another to highlight the degree of relationship or association. In quantitative studies and particularly in experimental studies, a correlation coefficient, statistically referred to as r, is used to show the strength of the correlation between a dependent variable (the subject under study), and one or more independent variables, which are manipulated to determine effects on the dependent variable. An important consideration is that equivalency reliability is concerned with correlational, not causal, relationships.

For example, a researcher studying university English students happened to notice that when some students were studying for finals, their holiday shopping began. Intrigued by this, the researcher attempted to observe how often, or to what degree, these two behaviors co-occurred throughout the academic year. The researcher used the results of the observations to assess the correlation between studying throughout the academic year and shopping for gifts. The researcher concluded there was poor equivalency reliability between the two actions. In other words, studying was not a reliable predictor of shopping for gifts.
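As a quick illustration of the kind of computation involved, the sketch below correlates two hypothetical sets of observations (the numbers are invented for illustration, not taken from the study above). A correlation coefficient near zero, as in the researcher’s conclusion, would indicate poor equivalency reliability:

```python
import numpy as np

# Hypothetical weekly observations for the same eight students
hours_studied = np.array([12, 8, 15, 3, 10, 6, 14, 2])
gifts_shopped_for = np.array([2, 5, 1, 4, 3, 2, 4, 1])

# Pearson's r measures the strength and direction of the linear association
r = np.corrcoef(hours_studied, gifts_shopped_for)[0, 1]
print(f"r = {r:.2f}")  # a value near 0 suggests poor equivalency reliability
```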

Stability Reliability

Stability reliability (sometimes called test-retest reliability) is the agreement of measuring instruments over time. To determine stability, a measure or test is repeated on the same subjects at a future date. Results are compared and correlated with the initial test to give a measure of stability.

An example of stability reliability would be the method of maintaining weights used by the U.S. Bureau of Standards. Platinum objects of fixed weight (one kilogram, one pound, etc.) are kept locked away. Once a year they are taken out and weighed, allowing scales to be reset so they are "weighing" accurately. Keeping track of how much the scales are off from year to year establishes a stability reliability for these instruments. In this instance, the platinum weights themselves are assumed to have a perfectly fixed stability reliability.
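In code, assessing stability reliability is essentially a matter of correlating the two waves of measurement. Here is a small hypothetical sketch (invented scores, not real calibration data):

```python
import numpy as np

# Hypothetical scores for the same eight subjects, measured twice, a month apart
first_test = np.array([71, 65, 90, 82, 58, 77, 88, 60])
retest = np.array([73, 63, 91, 80, 61, 75, 86, 58])

# A high test-retest correlation indicates the measure is stable over time
stability = np.corrcoef(first_test, retest)[0, 1]
print(f"test-retest r = {stability:.2f}")
```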

Internal Consistency

Internal consistency is the extent to which tests or procedures assess the same characteristic, skill or quality. It is a measure of the precision between the observers or of the measuring instruments used in a study. This type of reliability often helps researchers interpret data and predict the value of scores and the limits of the relationship among variables.

For example, a researcher designs a questionnaire to find out about college students' dissatisfaction with a particular textbook. Analyzing the internal consistency of the survey items dealing with dissatisfaction will reveal the extent to which items on the questionnaire focus on the notion of dissatisfaction.

Interrater Reliability

Interrater reliability is the extent to which two or more individuals (coders or raters) agree. Interrater reliability addresses the consistency of the implementation of a rating system.

A test of interrater reliability would be the following scenario: Two or more researchers are observing a high school classroom. The class is discussing a movie that they have just viewed as a group. The researchers have a sliding rating scale (1 being most positive, 5 being most negative) with which they are rating the students’ oral responses. Interrater reliability assesses the consistency of how the rating system is implemented. For example, if one researcher gives a "1" to a student response, while another researcher gives a "5," obviously the interrater reliability would be inconsistent. Interrater reliability is dependent upon the ability of two or more individuals to be consistent. Training, education and monitoring skills can enhance interrater reliability.
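One common way to quantify this kind of agreement is Cohen's kappa, which corrects raw percent agreement for the agreement two raters would reach by chance. The sketch below uses hypothetical ratings on the 1-5 scale from the classroom scenario above:

```python
import numpy as np

def cohens_kappa(rater_a, rater_b, categories):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    observed = np.mean(a == b)  # raw percent agreement
    # Chance agreement: product of each rater's marginal proportions per category
    expected = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical ratings of ten student responses by two researchers
rater1 = [1, 2, 2, 3, 5, 4, 1, 2, 3, 5]
rater2 = [1, 2, 3, 3, 5, 4, 2, 2, 3, 4]
print(f"kappa = {cohens_kappa(rater1, rater2, categories=range(1, 6)):.2f}")
```

Raw percent agreement alone can be misleading when one rating dominates, which is why chance-corrected statistics such as kappa are usually preferred.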

Related Information: Reliability Example

An example of the importance of reliability is the use of measuring devices in Olympic track and field events. For the vast majority of people, ordinary measuring rulers and their degree of accuracy are reliable enough. However, for an Olympic event, such as the discus throw, the slightest variation in a measuring device -- whether it is a tape, clock, or other device -- could mean the difference between the gold and silver medals. Additionally, it could mean the difference between a new world record and outright failure to qualify for an event. Olympic measuring devices, then, must be reliable from one throw or race to another and from one competition to another. They must also be reliable when used in different parts of the world, as temperature, air pressure, humidity, interpretation, or other variables might affect their readings.

Validity

Validity refers to the degree to which a study accurately reflects or assesses the specific concept that the researcher is attempting to measure. While reliability is concerned with the consistency of the actual measuring instrument or procedure, validity is concerned with the study's success at measuring what the researchers set out to measure.

Researchers should be concerned with both external and internal validity. External validity refers to the extent to which the results of a study are generalizable or transferable. (Most discussions of external validity focus solely on generalizability; see Campbell and Stanley, 1966. We include a reference here to transferability because many qualitative research studies are not designed to be generalized.)

Internal validity refers to (1) the rigor with which the study was conducted (e.g., the study's design, the care taken to conduct measurements, and decisions concerning what was and wasn't measured) and (2) the extent to which the designers of a study have taken into account alternative explanations for any causal relationships they explore (Huitt, 1998). In studies that do not explore causal relationships, only the first of these definitions should be considered when assessing internal validity.

Scholars discuss several types of internal validity; brief discussions of the major types follow:

Face Validity

Face validity is concerned with how a measure or procedure appears. Does it seem like a reasonable way to gain the information the researchers are attempting to obtain? Does it seem well designed? Does it seem as though it will work reliably? Unlike content validity, face validity does not depend on established theories for support (Fink, 1995).

Criterion Related Validity

Criterion-related validity, also referred to as instrumental validity, is used to demonstrate the accuracy of a measure or procedure by comparing it with another measure or procedure which has been demonstrated to be valid.

For example, imagine a hands-on driving test has been shown to be an accurate test of driving skills. A written driving test can then be validated using a criterion-related strategy: scores on the written test are compared with scores on the already-validated hands-on test.

Construct Validity

Construct validity seeks agreement between a theoretical concept and a specific measuring device or procedure. For example, a researcher inventing a new IQ test might spend a great deal of time attempting to "define" intelligence in order to reach an acceptable level of construct validity.

Construct validity can be broken down into two sub-categories: convergent validity and discriminant validity. Convergent validity is the actual general agreement among ratings, gathered independently of one another, where measures should be theoretically related. Discriminant validity is the lack of a relationship among measures which theoretically should not be related.

To understand whether a piece of research has construct validity, three steps should be followed. First, the theoretical relationships must be specified. Second, the empirical relationships between the measures of the concepts must be examined. Third, the empirical evidence must be interpreted in terms of how it clarifies the construct validity of the particular measure being tested (Carmines & Zeller, p. 23).

Content Validity

Content Validity is based on the extent to which a measurement reflects the specific intended domain of content (Carmines & Zeller, 1991, p.20).

Content validity is illustrated using the following examples: Researchers aim to study mathematical learning and create a survey to test for mathematical skill. If these researchers only tested for multiplication and then drew conclusions from that survey, their study would not show content validity because it excludes other mathematical functions. Although the establishment of content validity for placement-type exams seems relatively straightforward, the process becomes more complex as it moves into the more abstract domain of socio-cultural studies. For example, a researcher needing to measure an attitude like self-esteem must decide what constitutes a relevant domain of content for that attitude. For socio-cultural studies, content validity forces the researchers to define the very domains they are attempting to study.

Related Information: Validity Example

Many recreational activities of high school students involve driving cars. A researcher, wanting to measure whether recreational activities have a negative effect on grade point average in high school students, might conduct a survey asking how many students drive to school and then attempt to find a correlation between these two factors. Because many students might use their cars for purposes other than or in addition to recreation (e.g., driving to work after school, driving to school rather than walking or taking a bus), this research study might prove invalid. Even if a strong correlation was found between driving and grade point average, driving to school in and of itself would seem to be an invalid measure of recreational activity.

The challenges of achieving reliability and validity are among the most difficult faced by researchers. In this section, we offer commentaries on these challenges.

Difficulties of Achieving Reliability

It is important to understand some of the problems concerning reliability which might arise. It would be ideal to reliably measure, every time, exactly those things which we intend to measure. However, researchers can go to great lengths and make every attempt to ensure accuracy in their studies, and still deal with the inherent difficulties of measuring particular events or behaviors. Sometimes, and particularly in studies of natural settings, the only measuring device available is the researcher's own observations of human interaction or human reaction to varying stimuli. As these methods are ultimately subjective in nature, results may be unreliable and multiple interpretations are possible. Three of these inherent difficulties are quixotic reliability, diachronic reliability and synchronic reliability.

Quixotic reliability refers to the situation where a single manner of observation consistently, yet erroneously, yields the same result. It is often a problem when research appears to be going well. This consistency might seem to suggest that the experiment was demonstrating perfect stability reliability. This, however, would not be the case.

For example, if a measuring device used in an Olympic competition always read 100 meters for every discus throw, this would be an example of an instrument consistently, yet erroneously, yielding the same result. However, quixotic reliability is often more subtle in its occurrences than this. For example, suppose a group of German researchers doing an ethnographic study of American attitudes ask questions and record responses. Parts of their study might produce responses which seem reliable, yet turn out to measure felicitous verbal embellishments required for "correct" social behavior. Asking Americans, "How are you?" for example, would in most cases, elicit the token, "Fine, thanks." However, this response would not accurately represent the mental or physical state of the respondents.

Diachronic reliability refers to the stability of observations over time. It is similar to stability reliability in that it deals with time. While this type of reliability is appropriate to assess features that remain relatively unchanged over time, such as landscape benchmarks or buildings, the same level of reliability is more difficult to achieve with socio-cultural phenomena.

For example, in a follow-up study one year later of reading comprehension in a specific group of school children, diachronic reliability would be hard to achieve. If the test were given to the same subjects a year later, many confounding variables would have impacted the researchers' ability to reproduce the same circumstances present at the first test. The final results would almost assuredly not reflect the degree of stability sought by the researchers.

Synchronic reliability refers to the similarity of observations within the same time frame; it is not about the similarity of things observed. Synchronic reliability, unlike diachronic reliability, rarely involves observations of identical things. Rather, it concerns itself with particularities of interest to the research.

For example, a researcher studies the actions of a duck's wing in flight and the actions of a hummingbird's wing in flight. Despite the fact that the researcher is studying two distinctly different kinds of wings, the action of the wings and the phenomenon produced are the same.

Comments on a Flawed, Yet Influential Study

An example of the dangers of generalizing from research that is inconsistent, invalid, unreliable, and incomplete is found in the Time magazine article, "On A Screen Near You: Cyberporn" (De Witt, 1995). This article relies on a study done at Carnegie Mellon University to determine the extent and implications of online pornography. Inherent to the study are methodological problems of unqualified hypotheses and conclusions, unsupported generalizations and a lack of peer review.

Ignoring the functional problems that manifest themselves later in the study, it seems that there are a number of ethical problems within the article. The article claims to be an exhaustive study of pornography on the Internet; in reality it was anything but exhaustive, and it resembles a case study more than anything else. Marty Rimm, author of the undergraduate paper that Time used as a basis for the article, claims the paper was an "exhaustive study" of online pornography when, in fact, the study based most of its conclusions about pornography on the Internet on the "descriptions of slightly more than 4,000 images" (Meeks, 1995, p. 1). Some USENET groups see hundreds of postings in a day.

Considering the thousands of USENET groups, 4,000 images no longer carries the authoritative weight that its author intended. The real problem is that the study (an undergraduate paper similar to a second-semester composition assignment) was based not on pornographic images themselves, but on the descriptions of those images. This kind of reduction detracts significantly from the integrity of the final claims made by the author. In fact, this kind of research is commensurate with doing a study of the content of pornographic movies based on the titles of the movies, then making sociological generalizations based on what those titles indicate. (This is obviously a problem with a number of types of validity, because Rimm is not studying what he thinks he is studying, but instead something quite different.)

The author of the Time article, Philip Elmer De Witt, writes, "The research team at CMU has undertaken the first systematic study of pornography on the Information Superhighway" (Godwin, 1995, p. 1). His statement is problematic in at least three ways. First, the research team actually consisted of a few of Rimm's undergraduate friends with no methodological training whatsoever. Additionally, no mention of the degree of interrater reliability is made. Second, this systematic study is actually merely a "non-randomly selected subset of commercial bulletin-board systems that focus on selling porn" (Godwin, p. 6). As pornography vending is only a small part of the overall use of pornography on the Internet, the entire premise of the study's content validity is firmly called into question. Finally, the use of the term "Information Superhighway" is a false assessment of what in actuality is only a few USENET groups and BBSs (Bulletin Board Systems), which make up only a small fraction of the entire "Information Superhighway" traffic. Essentially, this is yet another violation of content validity.

De Witt is quoted as saying: "In an 18-month study, the team surveyed 917,410 sexually-explicit pictures, descriptions, short-stories and film clips. On those USENET newsgroups where digitized images are stored, 83.5 percent of the pictures were pornographic" (De Witt, p. 40).

Statistically, some interesting contradictions arise. The figure 917,410 was taken from adult-oriented BBSs--none came from actual USENET groups or the Internet itself. This is a glaring discrepancy. Out of the 917,410 files, 212,114 are only descriptions (Hoffman & Novak, 1995, p.2). The question is, how many actual images did the "researchers" see?

"Between April and July 1994, the research team downloaded all available images (3,254)...the team encountered technical difficulties with 13 percent of these images...This left a total of 2,830 images for analysis" (p. 2). This means that out of 917,410 files discussed in this study, 914,580 of them were not even pictures! As for the 83.5 percent figure, this is actually based on "17 alt.binaries groups that Rimm considered pornographic" (p. 2).

In real terms, 17 USENET groups is a fraction of a percent of all USENET groups available. Worse yet, Time claimed that "...only about 3 percent of all messages on the USENET [represent pornographic material], while the USENET itself represents 11.5 percent of the traffic on the Internet" (De Witt, p. 40).

Time neglected to carry the interpretation of this data out to its logical conclusion, which is that less than half of 1 percent (3 percent of 11 percent) of the images on the Internet are associated with newsgroups that contain pornographic imagery. Furthermore, of this half percent, an unknown but even smaller percentage of the messages in newsgroups that are 'associated with pornographic imagery', actually contained pornographic material (Hoffman & Novak, p. 3).

Another blunder can be seen in the avoidance of peer review, which suggests that there were political interests being served in having the study become a Time cover story. Marty Rimm contracted the Georgetown Law Review and Time in an agreement to publish his study as long as they kept it under lock and key. During the months before publication, many interested scholars and professionals tried in vain to obtain a copy of the study in order to check it for flaws. De Witt justified not letting such peer review take place, and also justified the reliability and validity of the study, on the grounds that because the Georgetown Law Review had accepted it, it was therefore reliable and valid and needed no peer review. What he didn't know was that law reviews are not edited by professionals, but by "third year law students" (Godwin, p. 4).

There are many consequences of the failure to subject such a study to the scrutiny of peer review. If it was Rimm's desire to publish an article about online pornography in a manner that legitimized his article, yet escaped the kind of critical review the piece would have to undergo if published in a scholarly journal of computer science, engineering, marketing, psychology, or communications, what better venue than a law journal? A law journal article would have the added advantage of being taken seriously by law professors, lawyers, and legally-trained policymakers. By virtue of where it appeared, it would automatically be catapulted into the center of the policy debate surrounding online censorship and freedom of speech (Godwin).

Herein lies the dangerous implication of such a study: Because the questions surrounding pornography are of such immediate political concern, the study was placed in the forefront of the U.S. domestic policy debate over censorship on the Internet, (an integral aspect of current anti-First Amendment legislation) with little regard for its validity or reliability.

On June 26, the day the article came out, Senator Grassley (co-sponsor of the anti-porn bill, along with Senator Dole) began drafting a speech that was to be delivered that very day in the Senate, using the study as evidence. The same day, at the same time, Mike Godwin posted on WELL (Whole Earth 'Lectronic Link, a forum for professionals on the Internet) what turned out to be the understatement of the year: "Philip's story is an utter disaster, and it will damage the debate about this issue because we will have to spend lots of time correcting misunderstandings that are directly attributable to the story" (Meeks, p. 7).

As Godwin was writing this, Senator Grassley was speaking to the Senate: "Mr. President, I want to repeat that: 83.5 percent of the 900,000 images reviewed--these are all on the Internet--are pornographic, according to the Carnegie-Mellon study" (p. 7). Several days later, Senator Dole was waving the magazine in front of the Senate like a battle flag.

Donna Hoffman, professor at Vanderbilt University, summed up the dangerous political implications by saying, "The critically important national debate over First Amendment rights and restrictions of information on the Internet and other emerging media requires facts and informed opinion, not hysteria" (p.1).

In addition to the hysteria, Hoffman sees a plethora of other problems with the study. "Because the content analysis and classification scheme are 'black boxes,'" Hoffman said, "because no reliability and validity results are presented, because no statistical testing of the differences both within and among categories for different types of listings has been performed, and because not a single hypothesis has been tested, formally or otherwise, no conclusions should be drawn until the issues raised in this critique are resolved" (p. 4).

However, the damage has already been done. This questionable research by an undergraduate engineering major has been generalized to such an extent that even the U.S. Senate, and in particular Senators Grassley and Dole, have been duped, albeit through the strength of their own desires to see only what they wanted to see.

Annotated Bibliography

American Psychological Association. (1985). Standards for educational and psychological testing. Washington, DC: Author.

This work focuses on reliability, validity and the standards that testers need to achieve in order to ensure accuracy.

Babbie, E.R. & Huitt, R.E. (1979). The practice of social research (2nd ed.). Belmont, CA: Wadsworth Publishing.

An overview of social research and its applications.

Beauchamp, T.L., Faden, R.R., Wallace, R.J., Jr. & Walters, L. (1982). Ethical issues in social science research. Baltimore and London: The Johns Hopkins University Press.

A systematic overview of ethical issues in Social Science Research written by researchers with firsthand familiarity with the situations and problems researchers face in their work. This book raises several questions of how reliability and validity can be affected by ethics.

Borman, K.M. et al. (1986). Ethnographic and qualitative research design and why it doesn't work. American Behavioral Scientist 30, 42-57.

The authors pose questions concerning threats to qualitative research and suggest solutions.

Bowen, K. A. (1996, Oct. 12). The sin of omission - punishable by death to internal validity: An argument for integration of quantitative research methods to strengthen internal validity. Available: http://trochim.human.cornell.edu/gallery/bowen/hss691.htm

An entire Web site that examines the merits of integrating qualitative and quantitative research methodologies through triangulation. The author argues that improving the internal validity of social science will be the result of such a union.

Brinberg, D. & McGrath, J.E. (1985). Validity and the research process. Beverly Hills: Sage Publications.

The authors investigate validity as value and propose the Validity Network Schema, a process by which researchers can infuse validity into their research.

Bussières, J-F. (1996, Oct.12). Reliability and validity of information provided by museum Web sites. Available: http://www.oise.on.ca/~jfbussieres/issue.html

This Web page examines the validity of museum Web sites, which calls into question the validity of Web-based resources in general. It addresses the issue that all Web sites should be examined with skepticism about the validity of the information contained within them.

Campbell, D. T. & Stanley, J.C. (1963). Experimental and quasi-experimental designs for research. Boston: Houghton Mifflin.

An overview of experimental research that includes pre-experimental designs, controls for internal validity, and tables listing sources of invalidity in quasi-experimental designs. Reference list and examples.

Carmines, E. G. & Zeller, R.A. (1991). Reliability and validity assessment. Newbury Park: Sage Publications.

An introduction to research methodology that includes classical test theory, validity, and methods of assessing reliability.

Carroll, K. M. (1995, Sep.). Methodological issues and problems in the assessment of substance use. Psychological Assessment, 7(3), 349-358.

Discusses methodological issues in research involving the assessment of substance abuse. Introduces strategies for avoiding problems with the reliability and validity of methods.

Connelly, F. M. & Clandinin, D.J. (1990). Stories of experience and narrative inquiry. Educational Researcher, 19(5), 2-12.

A survey of narrative inquiry that outlines criteria, methods, and writing forms. It includes a discussion of risks and dangers in narrative studies, as well as a research agenda for curricula and classroom studies.

De Witt, P.E. (1995, July 3). On a screen near you: Cyberporn. Time, 38-45.

The Time cover story based on the Carnegie Mellon study of online pornography by Marty Rimm, an electrical engineering student.

Fink, A., ed. (1995). The survey handbook, v. 1. Thousand Oaks, CA: Sage.

A guide to surveys; this is the first volume in a series referred to as the "survey kit". It includes bibliographical references and addresses survey design, analysis, reporting, and how to measure the validity and reliability of surveys.

Fink, A., ed. (1995). How to measure survey reliability and validity, v. 7. Thousand Oaks, CA: Sage.

This volume seeks to select and apply reliability criteria and select and apply validity criteria. The fundamental principles of scaling and scoring are considered.

Godwin, M. (1995, July). JournoPorn, dissection of the Time article. Available: http://www.hotwired.com

A detailed critique of Time magazine's Cyberporn, outlining flaws of methodology as well as exploring the underlying assumptions of the article.

Hambleton, R.K. & Zaal, J.N., eds. (1991). Advances in educational and psychological testing. Boston: Kluwer Academic.

Information on the concepts of reliability and validity in psychology and education.

Harnish, D.L. (1992). Human judgment and the logic of evidence: A critical examination of research methods in special education transition literature. In D.L. Harnish et al., eds., Selected readings in transition.

This article investigates threats to validity in special education research.

Haynes, N. M. (1995). How skewed is 'the bell curve'? Book Product Reviews, 1-24.

This paper claims that R.J. Herrnstein and C. Murray's The Bell Curve: Intelligence and Class Structure in American Life does not have scientific merit and claims that the bell curve is an unreliable measure of intelligence.

Healey, J. F. (1993). Statistics: A tool for social research (3rd ed.). Belmont: Wadsworth Publishing.

Inferential statistics, measures of association, and multivariate techniques in statistical analysis for social scientists are addressed.

Helberg, C. (1996, Oct. 12). Pitfalls of data analysis (or how to avoid lies and damned lies). Available: http://maddog.fammed.wisc.edu/pitfalls/

A discussion of things researchers often overlook in their data analysis and how statistics are often used to skew reliability and validity for the researcher's purposes.

Hoffman, D. L. and Novak, T.P. (1995, July). A detailed critique of the Time article: Cyberporn. Available: http://www.hotwired.com

A methodological critique of the Time article that uncovers some of the fundamental flaws in the statistics and the conclusions made by De Witt.

Huitt, W. G. (1998). Internal and external validity. Available: http://www.valdosta.peachnet.edu/~whuitt/psy702/intro/valdgn.html

A Web document addressing key issues of external and internal validity.

Jones, J. E. & Bearley, W.L. (1996, Oct 12). Reliability and validity of training instruments. Organizational Universe Systems. Available: http://ous.usa.net/relval.htm

The authors discuss the reliability and validity of training design in a business setting. Basic terms are defined and examples provided.

Cultural Anthropology Methods Journal. (1996, Oct. 12). Available: http://www.lawrence.edu/~bradleyc/cam.html

An online journal containing articles on the practical application of research methods when conducting qualitative and quantitative research. Reliability and validity are addressed throughout.

Kirk, J. & Miller, M. M. (1986). Reliability and validity in qualitative research. Beverly Hills: Sage Publications.

This text describes objectivity in qualitative research by focusing on the issues of validity and reliability in terms of their limitations and applicability in the social and natural sciences.

Krakower, J. & Niwa, S. (1985). An assessment of validity and reliability of the institutional performance survey. Boulder, CO: National Center for Higher Education Management Systems.

Addresses educational surveys, higher education research, and the effectiveness of organizations.

Lauer, J. M. & Asher, J.W. (1988). Composition Research. New York: Oxford University Press.

A discussion of empirical designs in the context of composition research as a whole.

Laurent, J. et al. (1992, Mar.). Review of validity research on the Stanford-Binet Intelligence Scale: Fourth Edition. Psychological Assessment, 102-112.

This paper looks at the results of construct and criterion-related validity studies to determine if the SB:FE is a valid measure of intelligence.

LeCompte, M. D., Millroy, W.L., & Preissle, J. eds. (1992). The handbook of qualitative research in education. San Diego: Academic Press.

A compilation of the range of methodological and theoretical qualitative inquiry in the human sciences and education research. Numerous contributing authors apply their expertise to discussing a wide variety of issues pertaining to educational and humanities research as well as suggestions about how to deal with problems when conducting research.

McDowell, I. & Newell, C. (1987). Measuring health: A guide to rating scales and questionnaires. New York: Oxford University Press.

This gives a variety of examples of health measurement techniques and scales and discusses the validity and reliability of important health measures.

Meeks, B. (1995, July). Muckraker: How Time failed. Available: http://www.hotwired.com

A step-by-step outline of the events which took place during the researching, writing, and negotiating of the Time article of 3 July, 1995 titled: On A Screen Near You: Cyberporn .

Merriam, S. B. (1995). What can you tell from an N of 1?: Issues of validity and reliability in qualitative research. Journal of Lifelong Learning, 4, 51-60.

Addresses issues of validity and reliability in qualitative research for education. Discusses philosophical assumptions underlying the concepts of internal validity, reliability, and external validity or generalizability. Presents strategies for ensuring rigor and trustworthiness when conducting qualitative research.

Morris, L.L, Fitzgibbon, C.T., & Lindheim, E. (1987). How to measure performance and use tests. In J.L. Herman (Ed.), Program evaluation kit (2nd ed.). Newbury Park, CA: Sage.

Discussion of reliability and validity as it pertains to measuring students' performance.

Murray, S., et al. (1979, April). Technical issues as threats to internal validity of experimental and quasi-experimental designs. San Francisco: University of California. 8-12.

(From Yang et al. bibliography--unavailable as of this writing.)

Russ-Eft, D. F. (1980). Validity and reliability in survey research. American Institutes for Research in the Behavioral Sciences, August, 227 151.

An investigation of validity and reliability in survey research with an overview of the concepts of reliability and validity. Specific procedures for measuring sources of error are suggested, as well as general suggestions for improving the reliability and validity of survey data. An extensive annotated bibliography is provided.

Ryser, G. R. (1994). Developing reliable and valid authentic assessments for the classroom: Is it possible? Journal of Secondary Gifted Education, Fall, 6(1), 62-66.

Defines the meanings of reliability and validity as they apply to standardized measures of classroom assessment. This article defines reliability as scorability and stability and validity is seen as students' ability to use knowledge authentically in the field.

Schmidt, W., et al. (1982). Validity as a variable: Can the same certification test be valid for all students? Institute for Research on Teaching, July, ED 227 151.

A technical report that presents specific criteria for judging content, instructional and curricular validity as related to certification tests in education.

Scholfield, P. (1995). Quantifying language: A researcher's and teacher's guide to gathering language data and reducing it to figures. Bristol: Multilingual Matters.

A guide to categorizing, measuring, testing, and assessing aspects of language. A source for language-related practitioners and researchers in conjunction with other resources on research methods and statistics. Questions of reliability and validity are also explored.

Scriven, M. (1993). Hard-won lessons in program evaluation. San Francisco: Jossey-Bass Publishers.

A common sense approach for evaluating the validity of various educational programs and how to address specific issues facing evaluators.

Shou, P. (1993, Jan.). The Singer-Loomis Inventory of Personality: A review and critique. [Paper presented at the Annual Meeting of the Southwest Educational Research Association.]

Evidence for reliability and validity are reviewed. A summary evaluation suggests that SLIP (developed by two Jungian analysts to allow examination of personality from the perspective of Jung's typology) appears to be a useful tool for educators and counselors.

Sutton, L.R. (1992). Community college teacher evaluation instrument: A reliability and validity study. Diss. Colorado State University.

Studies of reliability and validity in occupational and educational research.

Thompson, B. & Daniel, L.G. (1996, Oct.). Seminal readings on reliability and validity: A "hit parade" bibliography. Educational and Psychological Measurement, 56, 741-745.

Editorial board members of Educational and Psychological Measurement generated this bibliography of definitive publications of measurement research. Many articles are directly related to reliability and validity.

Thompson, E. Y., et al. (1995). Overview of qualitative research. Diss. Colorado State University.

A discussion of strengths and weaknesses of qualitative research and its evolution and adaptation. Appendices and annotated bibliography.

Traver, C. et al. (1995). Case study. Diss. Colorado State University.

This presentation gives an overview of case study research, providing definitions and a brief history and explanation of how to design research.

Trochim, William M. K. (1996). External validity. Available: http://trochim.human.cornell.edu/kb/EXTERVAL.htm

A comprehensive treatment of external validity found in William Trochim's online text about research methods and issues.

Trochim, William M. K. (1996). Introduction to validity. Available: http://trochim.human.cornell.edu/kb/INTROVAL.htm

An introduction to validity found in William Trochim's online text about research methods and issues.

Trochim, William M. K. (1996). Reliability. Available: http://trochim.human.cornell.edu/kb/reltypes.htm

A comprehensive treatment of reliability found in William Trochim's online text about research methods and issues.

Validity. (1996, Oct. 12). Available: http://vislab-www.nps.navy.mil/~haga/validity.html

A source for definitions of various forms and types of reliability and validity.

Vinsonhaler, J. F., et al. (1983, July). Improving diagnostic reliability in reading through training. Institute for Research on Teaching ED 237 934.

This technical report investigates the practical application of a program intended to improve the diagnoses of reading deficient students. Here, reliability is assumed and a pragmatic answer to a specific educational problem is suggested as a result.

Wentland, E. J. & Smith, K.W. (1993). Survey responses: An evaluation of their validity. San Diego: Academic Press.

This book looks at the factors affecting response validity (or the accuracy of self-reports in surveys) and provides several examples with varying accuracy levels.

Wiget, A. (1996). Father Juan Greyrobe: Reconstructing tradition histories, and the reliability and validity of uncorroborated oral tradition. Ethnohistory, 43(3), 459-482.

This paper presents a convincing argument for the validity of oral histories in ethnographic research where at least some of the evidence can be corroborated through written records.

Yang, G. H., et al. (1995). Experimental and quasi-experimental educational research. Diss. Colorado State University.

This discussion defines experimentation and considers the rhetorical issues and advantages and disadvantages of experimental research. Annotated bibliography.

Yarroch, W. L. (1991, Sept.). The implications of content versus item validity on science tests. Journal of Research in Science Teaching, 619-629.

The use of content validity as the primary assurance of the measurement accuracy for science assessment examinations is questioned. An alternative accuracy measure, item validity, is proposed to look at qualitative comparisons between different factors.

Yin, R. K. (1989). Case study research: Design and methods. London: Sage Publications.

This book discusses the design process of case study research, including collection of evidence, composing the case study report, and designing single and multiple case studies.

Related Links

Internal Validity Tutorial. An interactive tutorial on internal validity.

http://server.bmod.athabascau.ca/html/Validity/index.shtml

Howell, Jonathan, Paul Miller, Hyun Hee Park, Deborah Sattler, Todd Schack, Eric Spery, Shelley Widhalm, & Mike Palmquist. (2005). Reliability and Validity. Writing@CSU . Colorado State University. https://writing.colostate.edu/guides/guide.cfm?guideid=66


Dissertation Methodology – Structure, Example and Writing Guide


The methodology chapter is one of the key components of your dissertation. It provides a detailed description of the methods you used to conduct your research and helps readers understand how you obtained your data and how you plan to analyze it. This section is crucial for replicating the study and validating its results.

Here are the basic elements that are typically included in a dissertation methodology:

  • Introduction : This section should explain the importance and goals of your research.
  • Research Design : Outline your research approach and why it’s appropriate for your study. You might be conducting experimental research, qualitative research, quantitative research, or mixed-methods research.
  • Data Collection : This section should detail the methods you used to collect your data. Did you use surveys, interviews, observations, etc.? Why did you choose these methods? You should also include who your participants were, how you recruited them, and any ethical considerations.
  • Data Analysis : Explain how you intend to analyze the data you collected. This could include statistical analysis, thematic analysis, content analysis, etc., depending on the nature of your study.
  • Reliability and Validity : Discuss how you’ve ensured the reliability and validity of your study. For instance, you could discuss measures taken to reduce bias, how you ensured that your measures accurately capture what they were intended to, or how you will handle any limitations in your study.
  • Ethical Considerations : This is where you state how you have considered ethical issues related to your research, how you have protected the participants’ rights, and how you have complied with the relevant ethical guidelines.
  • Limitations : Acknowledge any limitations of your methodology, including any biases and constraints that might have affected your study.
  • Summary : Recap the key points of your methodology chapter, highlighting the overall approach and rationale of your research.

Types of Dissertation Methodology

The type of methodology you choose for your dissertation will depend on the nature of your research question and the field you’re working in. Here are some of the most common types of methodologies used in dissertations:

Experimental Research

This involves creating an experiment that will test your hypothesis. You’ll need to design an experiment, manipulate variables, collect data, and analyze that data to draw conclusions. This is commonly used in fields like psychology, biology, and physics.

Survey Research

This type of research involves gathering data from a large number of participants using tools like questionnaires or surveys. It can be used to collect a large amount of data and is often used in fields like sociology, marketing, and public health.

Qualitative Research

This type of research is used to explore complex phenomena that can’t be easily quantified. Methods include interviews, focus groups, and observations. This methodology is common in fields like anthropology, sociology, and education.

Quantitative Research

Quantitative research uses numerical data to answer research questions. This can include statistical, mathematical, or computational techniques. It’s common in fields like economics, psychology, and health sciences.

Case Study Research

This type of research involves in-depth investigation of a particular case, such as an individual, group, or event. This methodology is often used in psychology, social sciences, and business.

Mixed Methods Research

This combines qualitative and quantitative research methods in a single study. It’s used to answer more complex research questions and is becoming more popular in fields like social sciences, health sciences, and education.

Action Research

This type of research involves taking action and then reflecting upon the results. This cycle of action-reflection-action continues throughout the study. It’s often used in fields like education and organizational development.

Longitudinal Research

This type of research involves studying the same group of individuals over an extended period of time. This could involve surveys, observations, or experiments. It’s common in fields like psychology, sociology, and medicine.

Ethnographic Research

This type of research involves the in-depth study of people and cultures. Researchers immerse themselves in the culture they’re studying to collect data. This is often used in fields like anthropology and social sciences.

Structure of Dissertation Methodology

The structure of a dissertation methodology can vary depending on your field of study, the nature of your research, and the guidelines of your institution. However, a standard structure typically includes the following elements:

  • Introduction : Briefly introduce your overall approach to the research. Explain what you plan to explore and why it’s important.
  • Research Design/Approach : Describe your overall research design. This can be qualitative, quantitative, or mixed methods. Explain the rationale behind your chosen design and why it is suitable for your research questions or hypotheses.
  • Data Collection Methods : Detail the methods you used to collect your data. You should include what type of data you collected, how you collected it, and why you chose this method. If relevant, you can also include information about your sample population, such as how many people participated, how they were chosen, and any relevant demographic information.
  • Data Analysis Methods : Explain how you plan to analyze your collected data. This will depend on the nature of your data. For example, if you collected quantitative data, you might discuss statistical analysis techniques. If you collected qualitative data, you might discuss coding strategies, thematic analysis, or narrative analysis.
  • Reliability and Validity : Discuss how you’ve ensured the reliability and validity of your research. This might include steps you took to reduce bias or increase the accuracy of your measurements.
  • Ethical Considerations : If relevant, discuss any ethical issues associated with your research. This might include how you obtained informed consent from participants, how you ensured participants’ privacy and confidentiality, or any potential conflicts of interest.
  • Limitations : Acknowledge any limitations in your research methodology. This could include potential sources of bias, difficulties with data collection, or limitations in your analysis methods.
  • Summary/Conclusion : Briefly summarize the key points of your methodology, emphasizing how it helps answer your research questions or hypotheses.

How to Write Dissertation Methodology

Writing a dissertation methodology requires you to be clear and precise about the way you’ve carried out your research. It’s an opportunity to convince your readers of the appropriateness and reliability of your approach to your research question. Here is a basic guideline on how to write your methodology section:

1. Introduction

Start your methodology section by restating your research question(s) or objective(s). This ensures your methodology directly ties into the aim of your research.

2. Approach

Identify your overall approach: qualitative, quantitative, or mixed methods. Explain why you have chosen this approach.

  • Qualitative methods are typically used for exploratory research and involve collecting non-numerical data. This might involve interviews, observations, or analysis of texts.
  • Quantitative methods are used for research that relies on numerical data. This might involve surveys, experiments, or statistical analysis.
  • Mixed methods use a combination of both qualitative and quantitative research methods.

3. Research Design

Describe the overall design of your research. This could involve explaining the type of study (e.g., case study, ethnography, experimental research, etc.), how you’ve defined and measured your variables, and any control measures you’ve implemented.

4. Data Collection

Explain in detail how you collected your data.

  • If you’ve used qualitative methods, you might detail how you selected participants for interviews or focus groups, how you conducted observations, or how you analyzed existing texts.
  • If you’ve used quantitative methods, you might detail how you designed your survey or experiment, how you collected responses, and how you ensured your data is reliable and valid.

5. Data Analysis

Describe how you analyzed your data.

  • If you’re doing qualitative research, this might involve thematic analysis, discourse analysis, or grounded theory.
  • If you’re doing quantitative research, you might be conducting statistical tests, regression analysis, or factor analysis.

6. Ethical Considerations

Discuss any ethical issues related to your research. This might involve explaining how you obtained informed consent, how you’re protecting participants’ privacy, or how you’re managing any potential harms to participants.

7. Reliability and Validity

Discuss the steps you’ve taken to ensure the reliability and validity of your data.

  • Reliability refers to the consistency of your measurements, and you might discuss how you’ve piloted your instruments or used standardized measures.
  • Validity refers to the accuracy of your measurements, and you might discuss how you’ve ensured your measures reflect the concepts they’re supposed to measure.

8. Limitations

Every study has its limitations. Discuss the potential weaknesses of your chosen methods and explain any obstacles you faced in your research.

9. Conclusion

Summarize the key points of your methodology, emphasizing how it helps to address your research question or objective.

Example of Dissertation Methodology

An example of a dissertation methodology follows:

Chapter 3: Methodology

Introduction

This chapter details the methodology adopted in this research. The study aimed to explore the relationship between stress and productivity in the workplace. A mixed-methods research design was used to collect and analyze data.

Research Design

This study adopted a mixed-methods approach, combining quantitative surveys with qualitative interviews to provide a comprehensive understanding of the research problem. The rationale for this approach is that while quantitative data can provide a broad overview of the relationships between variables, qualitative data can provide deeper insights into the nuances of these relationships.

Data Collection Methods

Quantitative Data Collection : An online self-report questionnaire was used to collect data from participants. The questionnaire consisted of two standardized scales: the Perceived Stress Scale (PSS) to measure stress levels and the Individual Work Productivity Questionnaire (IWPQ) to measure productivity. The sample consisted of 200 office workers randomly selected from various companies in the city.

Qualitative Data Collection : Semi-structured interviews were conducted with 20 participants chosen from the initial sample. The interview guide included questions about participants’ experiences with stress and how they perceived its impact on their productivity.

Data Analysis Methods

Quantitative Data Analysis : Descriptive and inferential statistics were used to analyze the survey data. Pearson’s correlation was used to examine the relationship between stress and productivity.

Qualitative Data Analysis : Interviews were transcribed and subjected to thematic analysis using NVivo software. This process allowed for identifying and analyzing patterns and themes regarding the impact of stress on productivity.
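As an aside from the sample chapter: if the correlation step above were carried out in Python rather than SPSS, it might look like the minimal sketch below. The variable names and simulated scores are invented stand-ins for real PSS and IWPQ data, not the study’s actual code.

```python
# A minimal sketch of the Pearson correlation step, with simulated
# stand-in data for 200 participants (a real study would load its
# survey responses here, e.g. from a CSV file).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

pss_score = rng.normal(20, 6, size=200)                     # stress scores
iwpq_score = 50 - 0.8 * pss_score + rng.normal(0, 5, 200)   # productivity

r, p_value = stats.pearsonr(pss_score, iwpq_score)
print(f"Pearson's r = {r:.2f}, p = {p_value:.4f}")
```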

Reliability and Validity

To ensure reliability and validity, standardized measures with good psychometric properties were used. In qualitative data analysis, triangulation was employed by having two researchers independently analyze the data and then compare findings.

Ethical Considerations

All participants provided informed consent prior to their involvement in the study. They were informed about the purpose of the study, their rights as participants, and the confidentiality of their responses.

Limitations

The main limitation of this study is its reliance on self-report measures, which can be subject to biases such as social desirability bias. Moreover, the sample was drawn from a single city, which may limit the generalizability of the findings.

Where to Write Dissertation Methodology

In a dissertation or thesis, the Methodology section usually follows the Literature Review. This placement allows the Methodology to build upon the theoretical framework and existing research outlined in the Literature Review, and precedes the Results or Findings section. Here’s a basic outline of how most dissertations are structured:

  • Acknowledgements
  • Introduction
  • Literature Review (or it may be interspersed throughout the dissertation)
  • Methodology
  • Results/Findings
  • Discussion/Conclusion
  • References/Bibliography

In the Methodology chapter, you will discuss the research design, data collection methods, data analysis methods, and any ethical considerations pertaining to your study. This allows your readers to understand how your research was conducted and how you arrived at your results.

Advantages of Dissertation Methodology

The dissertation methodology section plays an important role in a dissertation for several reasons. Here are some of the advantages of having a well-crafted methodology section in your dissertation:

  • Clarifies Your Research Approach : The methodology section explains how you plan to tackle your research question, providing a clear plan for data collection and analysis.
  • Enables Replication : A detailed methodology allows other researchers to replicate your study. Replication is an important aspect of scientific research because it provides validation of the study’s results.
  • Demonstrates Rigor : A well-written methodology shows that you’ve thought critically about your research methods and have chosen the most appropriate ones for your research question. This adds credibility to your study.
  • Enhances Transparency : Detailing your methods allows readers to understand the steps you took in your research. This increases the transparency of your study and allows readers to evaluate potential biases or limitations.
  • Helps in Addressing Research Limitations : In your methodology section, you can acknowledge and explain the limitations of your research. This is important as it shows you understand that no research method is perfect and there are always potential weaknesses.
  • Facilitates Peer Review : A detailed methodology helps peer reviewers assess the soundness of your research design. This is an important part of the publication process if you aim to publish your dissertation in a peer-reviewed journal.
  • Establishes the Validity and Reliability : Your methodology section should also include a discussion of the steps you took to ensure the validity and reliability of your measurements, which is crucial for establishing the overall quality of your research.


Topscriptie

Reliability and validity of your thesis

Describing the reliability and validity of your research is an important part of your thesis.

Students in higher professional education as well as academic students are required to describe these, and both are usually discussed in your methodology.

We help students with this daily: describing these research concepts is often not too hard, but applying them? That’s a different story!

In this article we explain these concepts and give you tips on how to use them in your thesis. Good luck!

Reliability, an example

When you look up the term reliability in your research manual, you will often find different definitions.

In the end, what matters with reliability is that the results of your research match the actual situation as closely as possible. If a fellow student were to research your topic, you would want them to obtain practically the same results. Only then can we speak of a reliable study, and only then will your research be reproducible.


Reliability in quantitative research

Reliability in quantitative research is often expressed as a confidence level of 95% or 99%. Do you want to know how many respondents you need to achieve this level of reliability? You can use a sample size calculator, for example using this link.
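If you’d rather see roughly what such a calculator does under the hood, here is a small sketch using Cochran’s formula with a finite-population correction. The population size and margin of error below are made-up illustration values.

```python
# A rough sketch of the calculation behind sample-size calculators,
# using Cochran's formula; values are illustrative, not prescriptive.
import math

def required_sample_size(confidence_z: float, margin_of_error: float,
                         population: int, proportion: float = 0.5) -> int:
    """Cochran's formula with finite-population correction."""
    n0 = (confidence_z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    n = n0 / (1 + (n0 - 1) / population)  # correct for a finite population
    return math.ceil(n)

# z = 1.96 for a 95% confidence level, z = 2.58 for 99%
print(required_sample_size(1.96, 0.05, population=2000))  # ~323
print(required_sample_size(2.58, 0.05, population=2000))  # ~500
```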

You can also test the reliability of questionnaires in SPSS by calculating the so-called Cronbach’s Alpha. If this value is .80 or higher, this indicates high reliability; most literature considers a score of 0.70 or higher acceptable as well.

Watch out: you can’t simply combine all questions into a single scale, especially questions with different response categories!
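For those working outside SPSS, a minimal sketch of the same Cronbach’s alpha calculation in Python is shown below. It assumes `items` holds the questions of one scale only (questions sharing the same response categories, per the warning above); the simulated data are invented for illustration.

```python
# A minimal sketch of Cronbach's alpha: rows = respondents,
# columns = questionnaire items belonging to one scale.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(0)
base = rng.normal(size=(100, 1))                     # shared underlying "trait"
items = base + rng.normal(scale=0.8, size=(100, 5))  # 5 correlated items
print(f"alpha = {cronbach_alpha(items):.2f}")        # >= .70 is usually acceptable
```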


Validity in quantitative research

The validity of quantitative research usually has to do with which questions you ask your respondents. Validity means that your questions actually measure what they are supposed to measure, so that the results you find are factually correct.

It is often best to draw up your questions after you have finished your literature review (this also benefits reliability!). You can then better judge which concepts are important, and use these concepts to draw up different questions (indicators). While doing this, always keep the goal of the study and your sub-questions in mind.

We have previously discussed methodological validity, but you can also – this applies especially to academic students – look at statistical validity. For this, you check whether the statistical models you used meet their underlying assumptions.

Reliability and validity in qualitative research

Reliability is just as important in qualitative research as it is in quantitative research. In discussing it, you need to describe the circumstances of your research: for example, that a quiet location was chosen (so that respondents could not be disturbed) and what attitude the interviewer took (respondents should not be influenced, so keep an open attitude, ask open questions instead of leading ones, allow silences, etc.). It is also important to describe the representativeness of your sample, as a representative sample improves reliability. If you wish, for example, to map the wishes and needs of a company’s target group, try to include not just current customers in your study but also, and especially, potential new customers.

Furthermore, it is good to discuss why you have specifically chosen this research method and what the pros and cons of this method are.

When using interviews, for example, you can choose between structured, semi-structured or in-depth interviews. The goal is to justify your choice as well as possible. (Tip: it often helps to describe the purpose of the interviews.)


As with quantitative research, validity in qualitative research is about drawing up the right questions and topics so that they really cover your subject matter. Here too you can make use of an operationalization model. The interview questions (for structured and semi-structured interviews) are often listed in an interview guide, which helps you go into an interview well prepared. With in-depth interviews it is common to use an item list.

In this document you can also write down an introduction to give each interviewee at the start of the interview, including a note on confidentiality.

Finally, describe how you are going to perform the analysis and whether you have, for example, recorded the interviews (and transcribed them).

Moreover, it is always good to present your interview questions to an expert and to do some trial interviews if possible.

Describing reliability and validity in your thesis

To summarize, discuss as many aspects as possible that have led to the highest possible reliability of your study. Include at least the following:

Reliability of literature review (secondary research)

The definition of a literature review according to Topscriptie is:  An overview of already existing information on your subject (what is known already) that shows the context of and identifies ‘gaps’ in the literature, takes the form of a critical and coherent review and sheds light on the connections between previous studies.

Many students forget to describe how they have tried to keep the reliability of their literature review (desk research) as high as possible. It is important to discuss this in your thesis: describe, for example, which sources you used and how you selected and assessed them.

With the above-mentioned tips from Topscriptie you can get started on describing the reliability and validity of your study in your thesis. Don’t forget to discuss the limitations of your study in your discussion; it actually makes your thesis more solid when you can show that you are aware of them. And remember that every thesis and every study has its own challenges with reliability and validity.


Research-Methodology

Reliability and Validity

Issues of research reliability and validity need to be addressed in the methodology chapter in a concise manner.

Reliability refers to the extent to which the same answers can be obtained using the same instruments more than once. In simple terms, if your research is associated with high levels of reliability, then other researchers need to be able to generate the same results, using the same research methods under similar conditions. It is noted that “reliability problems crop up in many forms. Reliability is a concern every time a single observer is the source of data, because we have no certain guard against the impact of that observer’s subjectivity” (Babbie, 2010, p.158). According to Wilson (2010), reliability issues are most often closely associated with subjectivity; once a researcher adopts a subjective approach towards the study, the level of reliability of the work is compromised.

The validity of research can be explained as the extent to which the requirements of the scientific research method have been followed during the process of generating research findings. Oliver (2010) considers validity to be a compulsory requirement for all types of studies. There are different forms of research validity, and the main ones are specified by Cohen et al. (2007) as content validity, criterion-related validity, construct validity, internal validity, external validity, concurrent validity and face validity.

Measures to ensure the validity of research include, but are not limited to, the following:

a) An appropriate time scale for the study has to be selected;

b) An appropriate methodology has to be chosen, taking into account the characteristics of the study;

c) The most suitable sampling method for the study has to be selected;

d) The respondents must not be pressured in any way to select specific choices among the answer sets.

It is important to understand that threats to research reliability and validity can never be totally eliminated; however, researchers need to strive to minimize them as much as possible.


John Dudovskiy

  • Babbie, E. R. (2010) “The Practice of Social Research”, Cengage Learning
  • Cohen, L., Manion, L., Morrison, K. & Morrison, R. B. (2007) “Research Methods in Education”, Routledge
  • Oliver, V. (2010) “301 Smart Answers to Tough Business Etiquette Questions”, Skyhorse Publishing, New York, USA
  • Wilson, J. (2010) “Essentials of Business Research: A Guide to Doing Your Research Project”, SAGE Publications

Reliability vs. Validity in Research: The Essence of Credible Research


Table of contents

  • 1 Understanding Reliability in Research
  • 2 Understanding Validity in Research
  • 3 Key Differences Between Reliability and Validity
  • 4 The Role of Reliability and Validity in Research Design
  • 5 Challenges and Considerations in Ensuring Reliability and Validity
  • 5.1 Ensuring Reliability
  • 5.2 Ensuring Validity
  • 5.3 Considerations for Specific Research Methods
  • 6 Ensuring Excellence in Research Through Meticulous Methodology

The concepts of reliability and validity play pivotal roles in ensuring the integrity and credibility of research findings. These foundational principles are crucial for researchers aiming to produce work that contributes to their field and withstands scrutiny. Understanding the interplay between reliability vs validity in research is essential for any rigorous investigation.

The main points of our article include:

  • A detailed exploration of the concept of reliability, including its types and how it is assessed.
  • An in-depth look at validity, discussing its various forms and the methods used to evaluate it.
  • The relationship between reliability and validity, and why both are essential for the credibility of research.
  • Practical examples illustrating the application of reliability and validity in different research contexts.
  • Strategies for enhancing both reliability and validity in research studies.

This understanding sets the stage for a more in-depth look at these important parts of the methodology. By explaining these ideas in more detail, we can have a deeper discussion about how to use and evaluate them successfully in different research settings. For a better understanding and faster progress with your research, students can look at sites like PapersOwl for more information and help with defining these two concepts.

Understanding Reliability in Research

So, first, let’s start with the definition of reliability.

Reliability measures how stable and consistent the results of a research instrument are across repeated tests and conditions. It tells us how dependable the data we collected is, which, in turn, is important for making sure that the study results are valid.

There are several types of reliability crucial for assessing the quality of research instruments:

  • Test-retest reliability evaluates the consistency of results when the same test is administered to the same participants under similar conditions at two different points in time.
  • Inter-rater reliability measures the extent to which different observers or raters agree in their assessments, ensuring that the data collection process is unbiased and consistent across individuals.
  • Parallel-forms reliability involves comparing the results of two different but equivalent versions of a test to the same group of individuals, assessing the consistency of the scores.
  • Internal consistency reliability assesses the homogeneity of items within a test, ensuring that all items contribute consistently to what is being measured. (This is distinct from criterion validity, which evaluates how well one measure predicts an outcome on another, established measure.)

Methods for measuring and improving reliability include statistical techniques such as Cronbach’s alpha for internal consistency reliability, as well as ensuring standardized testing conditions and thorough training for raters to enhance inter-rater reliability.

Examples of reliability in research can be seen in educational assessments (test-retest reliability), psychological evaluations (internal consistency reliability), and health studies (inter-rater reliability).

Each context underscores the importance of reliable measurement as a precursor to assessing content validity (the extent to which a test measures all aspects of the desired content) and construct validity (the degree to which a test accurately measures the theoretical construct it is intended to measure). Both content validity and construct validity are essential components of overall validity, which refers to the accuracy of the research findings.


Understanding Validity in Research

In research, validity is a measure of accuracy that indicates how well a method or test measures what it is designed to assess. High validity is indicative of results that closely correspond to actual characteristics, behaviors, or phenomena in the physical or social world, making it a critical aspect of any credible research endeavor.

Types of validity include:

  • Content validity, which ensures that a test comprehensively covers all aspects of the subject it aims to measure.
  • Criterion-related validity, which is divided into predictive validity (how well a test predicts future outcomes) and concurrent validity (how well a test correlates with established measures at the same time).
  • Construct validity, further broken down into convergent validity (how closely a new test aligns with existing tests of the same constructs) and discriminant validity (how well the test distinguishes between different constructs).
  • Face validity, a more subjective measure of how relevant a test appears to be at face value, without delving into its technical merits.

Validity can be measured and ensured in a number of ways, such as through expert evaluations or statistical analysis, which focus on how well the test or method matches up with theoretical expectations and established standards. Ensuring validity requires careful test design, careful data collection, and regular checks that the test is still relevant and accurately reflects the intended constructs.

Examples of validity in research are abundant and varied. In educational testing, content validity is assessed to ensure that exams or assessments fully represent the curriculum they aim to measure. In psychology, convergent validity is demonstrated when different tests of the same psychological construct yield similar results, while predictive validity might be observed in employment settings where a cognitive test predicts job performance. Each of these examples showcases how validity is assessed and achieved, highlighting its role in producing meaningful and accurate research outcomes.

Key Differences Between Reliability and Validity


The key differences between reliability and validity lie in their focus and implication in research. Reliability concerns the consistency of a measurement tool, ensuring that the same measurement is obtained across different instances of its application. For instance, interrater reliability ensures consistency in observations made by different scholars. Validity, on the other hand, assesses whether the research tool accurately measures what it is intended to, aligning with established theories and meeting the research objectives. While reliability is about the repeatability of the same measurement, validity dives deeper into the accuracy and appropriateness of what is being measured, ensuring it reflects the intended constructs or realities.

The Role of Reliability and Validity in Research Design

The incorporation of reliability and validity assessments in the early stages of research design is paramount for ensuring the credibility and applicability of research outcomes. By prioritizing these evaluations from the outset, researchers can design studies that accurately reflect and measure the phenomena of interest, leading to more trustworthy and meaningful findings.

Strategies for integrating reliability and validity checks throughout the research process include the use of established statistical methods and the continuous evaluation of research measures. For instance, employing factor analysis can help in identifying the underlying structure of data, thus aiding in the assessment of construct validity. Similarly, calculating Cronbach’s alpha can ensure the internal consistency of items within a survey, contributing to the overall reliability of the research measures.

Case studies across various disciplines underscore the critical role of reliability and validity in shaping research outcomes and influencing subsequent decisions. For example, in clinical psychology research, the use of validated instruments to assess patient symptoms ensures that the measures accurately capture the constructs of interest, such as depression or anxiety levels, which in turn supports the internal validity of the study. In the field of education, ensuring the interrater reliability of grading rubrics can lead to fairer and more consistent assessments of student performance.

Moreover, the application of rigorous statistical methods not only enhances the reliability and validity of the research but also strengthens the study’s foundation, making the findings more compelling and actionable. By systematically integrating these checks, researchers can avoid common pitfalls such as measurement errors or biases, thereby ensuring that their studies contribute valuable insights to the body of knowledge.

Challenges and Considerations in Ensuring Reliability and Validity

Ensuring reliability and validity in research is crucial for the credibility and applicability of research results. These principles guide how to design studies, collect data, and interpret findings to ensure that their work accurately reflects the underlying constructs they aim to explore.

Ensuring Reliability

To ensure reliability, researchers must focus on creating consistent, repeatable conditions and employing precise measurement tools. The test-retest correlation is a fundamental method, where researchers administer the same test to the same subjects under similar conditions at two different times. A high correlation between the two sets of results indicates strong reliability.

For example, in a study measuring the stress levels of first responders, using the same stress assessment tool at different intervals can validate the tool’s reliability through consistent scores.
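As a concrete (invented) illustration, a test-retest check ultimately boils down to correlating the two score sets; the numbers below are made up:

```python
# A minimal sketch of a test-retest reliability check, assuming `time1`
# and `time2` hold the same participants' scores on the same assessment
# administered a few weeks apart (values are illustrative).
import numpy as np

time1 = np.array([12, 18, 25, 9, 30, 22, 15, 27])
time2 = np.array([14, 17, 27, 10, 28, 21, 16, 25])

r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest r = {r:.2f}")  # values near 1 indicate stable scores
```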

Another strategy is ensuring reliable measurement through inter-rater reliability, where multiple observers assess the same concept to verify consistency in observations. In environmental science, when studying the impact of pollution on local ecosystems, different researchers might assess the same water samples for contaminants. The consistency of their measurements confirms the reliability of the methods used.
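Where ratings are categorical, agreement between two raters is commonly quantified with Cohen’s kappa. The sketch below uses scikit-learn with invented labels for illustration:

```python
# A small sketch of an inter-rater reliability check using Cohen's kappa,
# assuming two raters have assigned the same observations to one of a few
# categories (labels below are invented for illustration).
from sklearn.metrics import cohen_kappa_score

rater_a = ["low", "low", "medium", "high", "medium", "low", "high", "medium"]
rater_b = ["low", "medium", "medium", "high", "medium", "low", "high", "low"]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa = {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance
```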

Ensuring Validity

Ensuring validity involves verifying that the research accurately measures the intended concept. This can be achieved through careful formulation of research questions, selecting valid measurement instruments, and employing appropriate statistical analyses.

For instance, when studying the effectiveness of a new educational curriculum, researchers might use standardized test scores to measure student learning outcomes. This approach ensures that the tests are a valid measurement of the educational objectives the curriculum aims to achieve.

Construct validity can be enhanced through factor analysis, which helps in identifying whether the collected data truly represent the underlying construct of interest. In health research, exploring the validity of a new diagnostic tool for a specific disease involves comparing its results with those from established diagnostic methods, ensuring that the new tool accurately identifies the disease it claims to measure.
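As a rough sketch of this idea, the code below fits a two-factor model to simulated responses; with real data you would inspect whether items intended to measure the same construct load on the same factor. All names and values here are illustrative, not a definitive implementation.

```python
# A sketch of exploratory factor analysis for construct validity, assuming
# `responses` is a respondents x items matrix where the first three and
# last three items are meant to tap two different constructs.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
f1 = rng.normal(size=(300, 1))   # latent construct 1
f2 = rng.normal(size=(300, 1))   # latent construct 2
noise = rng.normal(scale=0.5, size=(300, 6))
responses = np.hstack([f1, f1, f1, f2, f2, f2]) + noise

# rotation="varimax" needs a recent scikit-learn (0.24+)
fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
fa.fit(responses)
# Items intended to measure the same construct should load on the same factor.
print(np.round(fa.components_, 2))
```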

Considerations for Specific Research Methods

Different research methods, such as qualitative vs. quantitative research, require distinct approaches to ensure validity and reliability. In qualitative research, ensuring external validity involves a detailed and transparent description of the research setting and context, allowing others to assess the applicability of the findings to similar contexts.

For instance, in-depth interviews exploring patients’ experiences with chronic pain provide rich, contextual insights that might not be generalizable without a clear articulation of the setting and participant characteristics.

In quantitative research, ensuring the validity and reliability of collecting data often involves statistical validation methods and reliability tests, such as Cronbach’s alpha for internal consistency.


Ensuring Excellence in Research Through Meticulous Methodology

In summary, the fundamental takeaways from this article highlight the paramount importance of ensuring high reliability and validity in conducting research. These principles are not merely academic considerations but are crucial for the integrity and applicability of research findings. The accuracy of research instruments, the consistency of test scores, and the thoughtful design of the methods section of a research paper are all critical to achieving these goals. For researchers aiming to enhance the credibility of their work, focusing on these aspects from the outset is key. Additionally, seeking help with research can provide valuable insights and support in navigating the complexities of research design, ensuring that studies not only adhere to the highest standards of reliability and validity but also contribute meaningful knowledge to their respective fields.



What Is a Research Methodology? | Steps & Tips

Published on August 25, 2022 by Shona McCombes and Tegan George. Revised on November 20, 2023.

Your research methodology discusses and explains the data collection and analysis methods you used in your research. A key part of your thesis, dissertation , or research paper , the methodology chapter explains what you did and how you did it, allowing readers to evaluate the reliability and validity of your research and your dissertation topic .

It should include:

  • The type of research you conducted
  • How you collected and analyzed your data
  • Any tools or materials you used in the research
  • How you mitigated or avoided research biases
  • Why you chose these methods

A few general pointers: your methodology section should generally be written in the past tense; academic style guides in your field may provide detailed guidelines on what to include for different types of studies; and your citation style might provide guidelines for your methodology section (e.g., an APA Style methods section).


Table of contents

  • How to write a research methodology
  • Why is a methods section important?
  • Step 1: Explain your methodological approach
  • Step 2: Describe your data collection methods
  • Step 3: Describe your analysis method
  • Step 4: Evaluate and justify the methodological choices you made
  • Tips for writing a strong methodology chapter
  • Frequently asked questions about methodology


Why is a methods section important?

Your methods section is your opportunity to share how you conducted your research and why you chose the methods you chose. It’s also the place to show that your research was rigorously conducted and can be replicated .

It gives your research legitimacy and situates it within your field, and also gives your readers a place to refer to if they have any questions or critiques in other sections.

Step 1: Explain your methodological approach

You can start by introducing your overall approach to your research. You have two options here.

Option 1: Start with your “what”

What research problem or question did you investigate?

  • Aim to describe the characteristics of something?
  • Explore an under-researched topic?
  • Establish a causal relationship?

And what type of data did you need to achieve this aim?

  • Quantitative data , qualitative data , or a mix of both?
  • Primary data collected yourself, or secondary data collected by someone else?
  • Experimental data gathered by controlling and manipulating variables, or descriptive data gathered via observations?

Option 2: Start with your “why”

Depending on your discipline, you can also start with a discussion of the rationale and assumptions underpinning your methodology. In other words, why did you choose these methods for your study?

  • Why is this the best way to answer your research question?
  • Is this a standard methodology in your field, or does it require justification?
  • Were there any ethical considerations involved in your choices?
  • What are the criteria for validity and reliability in this type of research? How did you prevent bias from affecting your data?

Step 2: Describe your data collection methods

Once you have introduced your reader to your methodological approach, you should share full details about your data collection methods .

Quantitative methods

In order to be considered generalizable, you should describe quantitative research methods in enough detail for another researcher to replicate your study.

Here, explain how you operationalized your concepts and measured your variables. Discuss your sampling method or inclusion and exclusion criteria , as well as any tools, procedures, and materials you used to gather your data.

Surveys: Describe where, when, and how the survey was conducted.

  • How did you design the questionnaire?
  • What form did your questions take (e.g., multiple choice, Likert scale )?
  • Were your surveys conducted in-person or virtually?
  • What sampling method did you use to select participants?
  • What was your sample size and response rate?

Experiments: Share full details of the tools, techniques, and procedures you used to conduct your experiment.

  • How did you design the experiment ?
  • How did you recruit participants?
  • How did you manipulate and measure the variables ?
  • What tools did you use?

Existing data: Explain how you gathered and selected the material (such as datasets or archival data) that you used in your analysis.

  • Where did you source the material?
  • How was the data originally produced?
  • What criteria did you use to select material (e.g., date range)?

Example: The survey consisted of 5 multiple-choice questions and 10 questions measured on a 7-point Likert scale.

The goal was to collect survey responses from 350 customers visiting the fitness apparel company’s brick-and-mortar location in Boston on July 4–8, 2022, between 11:00 and 15:00.

Here, a customer was defined as a person who had purchased a product from the company on the day they took the survey. Participants were given 5 minutes to fill in the survey anonymously. In total, 408 customers responded, but not all surveys were fully completed. Due to this, 371 survey results were included in the analysis.

Quantitative research biases to watch out for include:

  • Information bias
  • Omitted variable bias
  • Regression to the mean
  • Survivorship bias
  • Undercoverage bias
  • Sampling bias

Qualitative methods

In qualitative research , methods are often more flexible and subjective. For this reason, it’s crucial to robustly explain the methodology choices you made.

Be sure to discuss the criteria you used to select your data, the context in which your research was conducted, and the role you played in collecting your data (e.g., were you an active participant, or a passive observer?)

Interviews or focus groups: Describe where, when, and how the interviews were conducted.

  • How did you find and select participants?
  • How many participants took part?
  • What form did the interviews take ( structured , semi-structured , or unstructured )?
  • How long were the interviews?
  • How were they recorded?

Participant observation: Describe where, when, and how you conducted the observation or ethnography.

  • What group or community did you observe? How long did you spend there?
  • How did you gain access to this group? What role did you play in the community?
  • How long did you spend conducting the research? Where was it located?
  • How did you record your data (e.g., audiovisual recordings, note-taking)?

Existing data: Explain how you selected case study materials for your analysis.

  • What type of materials did you analyze?
  • How did you select them?

Example: In order to gain better insight into possibilities for future improvement of the fitness store’s product range, semi-structured interviews were conducted with 8 returning customers.

Here, a returning customer was defined as someone who usually bought products at least twice a week from the store.

Surveys were used to select participants. Interviews were conducted in a small office next to the cash register and lasted approximately 20 minutes each. Answers were recorded by note-taking, and seven interviews were also filmed with consent. One interviewee preferred not to be filmed.

Qualitative research biases to watch out for include:

  • The Hawthorne effect
  • Observer bias
  • The placebo effect
  • Response bias and Nonresponse bias
  • The Pygmalion effect
  • Recall bias
  • Social desirability bias
  • Self-selection bias

Mixed methods

Mixed methods research combines quantitative and qualitative approaches. If a standalone quantitative or qualitative study is insufficient to answer your research question, mixed methods may be a good fit for you.

Mixed methods are less common than standalone analyses, largely because they require a great deal of effort to pull off successfully. If you choose to pursue mixed methods, it’s especially important to robustly justify your methods.


Step 3: Describe your analysis method

Next, you should indicate how you processed and analyzed your data. Avoid going into too much detail: you should not start introducing or discussing any of your results at this stage.

In quantitative research , your analysis will be based on numbers. In your methods section, you can include:

  • How you prepared the data before analyzing it (e.g., checking for missing data , removing outliers , transforming variables)
  • Which software you used (e.g., SPSS, Stata or R)
  • Which statistical tests you used (e.g., two-tailed t test , simple linear regression )
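To make these options concrete, here is a brief, hedged sketch of a two-tailed t test and a simple linear regression in Python, with SciPy standing in for SPSS, Stata, or R; all data values are invented.

```python
# A sketch of the kinds of tests mentioned above; group labels and
# values are invented for illustration only.
import numpy as np
from scipy import stats

group_a = np.array([5.1, 4.8, 6.0, 5.5, 4.9, 5.7])
group_b = np.array([4.2, 4.5, 4.0, 4.8, 4.1, 4.4])

# Two-tailed independent-samples t test
t, p = stats.ttest_ind(group_a, group_b)
print(f"t = {t:.2f}, p = {p:.4f}")

# Simple linear regression of an outcome on one predictor
x = np.array([1, 2, 3, 4, 5, 6])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.2])
result = stats.linregress(x, y)
print(f"slope = {result.slope:.2f}, R^2 = {result.rvalue**2:.2f}")
```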

In qualitative research, your analysis will be based on language, images, and observations (often involving some form of textual analysis ).

Specific methods might include:

  • Content analysis : Categorizing and discussing the meaning of words, phrases and sentences
  • Thematic analysis : Coding and closely examining the data to identify broad themes and patterns
  • Discourse analysis : Studying communication and meaning in relation to their social context
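The interpretive work of coding in the thematic analysis mentioned above cannot be automated away, but once segments have been coded, tallying themes is mechanical. A minimal sketch, assuming a hypothetical `coded_segments` mapping from transcripts to the codes assigned to them:

```python
# Tally how often each (invented) theme code appears across transcripts.
from collections import Counter

coded_segments = {
    "interview_01": ["workload", "support", "workload", "autonomy"],
    "interview_02": ["support", "autonomy", "autonomy"],
    "interview_03": ["workload", "support"],
}

theme_counts = Counter(code for codes in coded_segments.values() for code in codes)
for theme, count in theme_counts.most_common():
    print(theme, count)
```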

Mixed methods combine the above two research methods, integrating both qualitative and quantitative approaches into one coherent analytical process.

Step 4: Evaluate and justify the methodological choices you made

Above all, your methodology section should clearly make the case for why you chose the methods you did. This is especially true if you did not take the most standard approach to your topic. In this case, discuss why other methods were not suitable for your objectives, and show how this approach contributes new knowledge or understanding.

In any case, it should be overwhelmingly clear to your reader that you set yourself up for success in terms of your methodology’s design. Show how your methods should lead to results that are valid and reliable, while leaving the analysis of the meaning, importance, and relevance of your results for your discussion section .

  • Quantitative: Lab-based experiments cannot always accurately simulate real-life situations and behaviors, but they are effective for testing causal relationships between variables .
  • Qualitative: Unstructured interviews usually produce results that cannot be generalized beyond the sample group , but they provide a more in-depth understanding of participants’ perceptions, motivations, and emotions.
  • Mixed methods: Despite issues systematically comparing differing types of data, a solely quantitative study would not sufficiently incorporate the lived experience of each participant, while a solely qualitative study would be insufficiently generalizable.

Tips for writing a strong methodology chapter

Remember that your aim is not just to describe your methods, but to show how and why you applied them. Again, it’s critical to demonstrate that your research was rigorously conducted and can be replicated.

1. Focus on your objectives and research questions

The methodology section should clearly show why your methods suit your objectives and convince the reader that you chose the best possible approach to answering your problem statement and research questions .

2. Cite relevant sources

Your methodology can be strengthened by referencing existing research in your field. This can help you to:

  • Show that you followed established practice for your type of research
  • Discuss how you decided on your approach by evaluating existing research
  • Present a novel methodological approach to address a gap in the literature

3. Write for your audience

Consider how much information you need to give, and avoid getting too lengthy. If you are using methods that are standard for your discipline, you probably don’t need to give a lot of background or justification.

Regardless, your methodology should be a clear, well-structured text that makes an argument for your approach, not just a list of technical details and procedures.


Frequently asked questions about methodology

Methodology refers to the overarching strategy and rationale of your research project. It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyze data (for example, experiments, surveys , and statistical tests ).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section .

In a longer or more complex research project, such as a thesis or dissertation , you will probably include a methodology section , where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.

In a scientific paper, the methodology always comes after the introduction and before the results , discussion and conclusion . The same basic structure also applies to a thesis, dissertation , or research proposal .

Depending on the length and type of document, you might also include a literature review or theoretical framework before the methodology.

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

Reliability and validity are both about how well a method measures something:

  • Reliability refers to the consistency of a measure (whether the results can be reproduced under the same conditions).
  • Validity refers to the accuracy of a measure (whether the results really do represent what they are supposed to measure).

If you are doing experimental research, you also have to consider the internal and external validity of your experiment.

A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.

Cite this Scribbr article

McCombes, S. & George, T. (2023, November 20). What Is a Research Methodology? | Steps & Tips. Scribbr. https://www.scribbr.com/dissertation/methodology/


Reliability and Validity – Definitions, Types & Examples

Published by Alvin Nicolas on August 16th, 2021. Revised on October 26, 2023.

A researcher must test the collected data before drawing any conclusions. Every research design needs to be concerned with reliability and validity to measure the quality of the research.

What is Reliability?

Reliability refers to the consistency of the measurement. Reliability shows how trustworthy the score of the test is. If the collected data shows the same results after repeated testing with various methods and sample groups, the information is reliable. Note, however, that reliability alone does not make results valid; it is a necessary but not sufficient condition for validity.

Example: If you weigh yourself on a weighing scale throughout the day, you’ll get the same results. These are considered reliable results obtained through repeated measures.

Example: If a teacher conducts the same math test with the same students and repeats it the next week with the same questions, and the students get roughly the same scores, then the reliability of the test is high.

What is Validity?

Validity refers to the accuracy of the measurement. Validity shows how suitable a specific test is for a particular situation. If the results are accurate according to the researcher’s situation, explanation, and prediction, then the research is valid.

If the method of measuring is accurate, it will produce accurate results. Note that a reliable method is not automatically valid, but a method that is not reliable cannot be valid.

Example: Your weighing scale shows different results each time you weigh yourself within a day, even after handling it carefully and weighing before and after meals. Your weighing machine might be malfunctioning. It means your method has low reliability, and hence the inconsistent results cannot be valid.

Example: Suppose a questionnaire is distributed among a group of people to check the quality of a skincare product, and the same questionnaire is repeated with many groups. If you get the same responses from the various participants, the questionnaire is highly reliable; whether it is also valid depends on whether its questions truly capture product quality.

Most of the time, validity is difficult to assess even when the process of measurement is reliable, because it isn’t easy to capture the real situation.

Example: If the weighing scale shows the same result, let’s say 70 kg each time, even though your actual weight is 55 kg, then the weighing scale is malfunctioning. It shows consistent results, so the measurement is reliable – but it is not valid, because it does not reflect your true weight.

Internal Vs. External Validity

A key feature of well-conducted randomised designs is their high internal validity; with representative sampling, they can offer strong external validity as well.

Internal validity  is the ability to draw a causal link between your treatment and the dependent variable of interest. It means the observed changes should be due to the experiment conducted, and any external factor should not influence the  variables .

Examples of variables that might need to be controlled include age, education level, height, and grade.

External validity  is the ability to identify and generalise your study outcomes to the population at large. The relationship between the study’s situation and the situations outside the study is considered external validity.

Also, read about Inductive vs Deductive reasoning in this article.

Threats to Internal Validity

Threats to External Validity

How to Assess Reliability and Validity

Reliability can be measured by comparing the consistency of the procedure and its results. There are various methods to measure validity and reliability. Reliability can be measured through various statistical methods depending on the type of reliability, as explained below:

Types of Reliability

Types of Validity

As we discussed above, the reliability of a measurement alone cannot determine its validity; validity is difficult to measure even if the method is reliable. The following types of tests are conducted to measure validity.


How to Increase Reliability?

  • Use an appropriate questionnaire to measure the competency level.
  • Ensure a consistent environment for participants.
  • Make the participants familiar with the assessment criteria.
  • Train the participants appropriately.
  • Review the research items regularly to weed out poorly performing ones.

How to Increase Validity?

Ensuring validity is not easy either. Measures that help ensure validity include:

  • Minimise reactivity as a first concern.
  • Reduce the Hawthorne effect.
  • Keep respondents motivated.
  • Avoid lengthy intervals between the pre-test and post-test.
  • Minimise dropout rates.
  • Ensure inter-rater reliability.
  • Match control and experimental groups with each other.

How to Implement Reliability and Validity in your Thesis?

According to experts, it is helpful to implement the concepts of reliability and validity explicitly in your thesis or dissertation, where they are typically addressed in the methodology chapter.

Frequently Asked Questions

What is reliability and validity in research?

Reliability in research refers to the consistency and stability of measurements or findings. Validity relates to the accuracy and truthfulness of results, measuring what the study intends to. Both are crucial for trustworthy and credible research outcomes.

What is validity?

Validity in research refers to the extent to which a study accurately measures what it intends to measure. It ensures that the results are truly representative of the phenomena under investigation. Without validity, research findings may be irrelevant, misleading, or incorrect, limiting their applicability and credibility.

What is reliability?

Reliability in research refers to the consistency and stability of measurements over time. If a study is reliable, repeating the experiment or test under the same conditions should produce similar results. Without reliability, findings become unpredictable and lack dependability, potentially undermining the study’s credibility and generalisability.

What is reliability in psychology?

In psychology, reliability refers to the consistency of a measurement tool or test. A reliable psychological assessment produces stable and consistent results across different times, situations, or raters. It ensures that an instrument’s scores are not due to random error, making the findings dependable and reproducible in similar conditions.

What is test-retest reliability?

Test-retest reliability assesses the consistency of measurements taken by a test over time. It involves administering the same test to the same participants at two different points in time and comparing the results. A high correlation between the scores indicates that the test produces stable and consistent results over time.

How to improve reliability of an experiment?

  • Standardise procedures and instructions.
  • Use consistent and precise measurement tools.
  • Train observers or raters to reduce subjective judgments.
  • Increase sample size to reduce random errors.
  • Conduct pilot studies to refine methods.
  • Repeat measurements or use multiple methods.
  • Address potential sources of variability.

What is the difference between reliability and validity?

Reliability refers to the consistency and repeatability of measurements, ensuring results are stable over time. Validity indicates how well an instrument measures what it’s intended to measure, ensuring accuracy and relevance. While a test can be reliable without being valid, a valid test must inherently be reliable. Both are essential for credible research.

Are interviews reliable and valid?

Interviews can be both reliable and valid, but they are susceptible to biases. The reliability and validity depend on the design, structure, and execution of the interview. Structured interviews with standardised questions improve reliability. Validity is enhanced when questions accurately capture the intended construct and when interviewer biases are minimised.

Are IQ tests valid and reliable?

IQ tests are generally considered reliable, producing consistent scores over time. Their validity, however, is a subject of debate. While they effectively measure certain cognitive skills, whether they capture the entirety of “intelligence” or predict success in all life areas is contested. Cultural bias and over-reliance on tests are also concerns.

Are questionnaires reliable and valid?

Questionnaires can be both reliable and valid if well-designed. Reliability is achieved when they produce consistent results over time or across similar populations. Validity is ensured when questions accurately measure the intended construct. However, factors like poorly phrased questions, respondent bias, and lack of standardisation can compromise their reliability and validity.



Have a language expert improve your writing

Run a free plagiarism check in 10 minutes, automatically generate references for free.

  • Knowledge Base
  • Dissertation
  • What Is a Research Methodology? | Steps & Tips

What Is a Research Methodology? | Steps & Tips

Published on 25 February 2019 by Shona McCombes . Revised on 10 October 2022.

Your research methodology discusses and explains the data collection and analysis methods you used in your research. A key part of your thesis, dissertation, or research paper, the methodology chapter explains what you did and how you did it, allowing readers to evaluate the reliability and validity of your research.

It should include:

  • The type of research you conducted
  • How you collected and analysed your data
  • Any tools or materials you used in the research
  • Why you chose these methods
Your methodology section should generally be written in the past tense. Academic style guides in your field may provide detailed guidelines on what to include for different types of studies, and your citation style might provide guidelines for your methodology section (e.g., an APA Style methods section).


Table of contents

  • How to write a research methodology
  • Why is a methods section important?
  • Step 1: Explain your methodological approach
  • Step 2: Describe your data collection methods
  • Step 3: Describe your analysis method
  • Step 4: Evaluate and justify the methodological choices you made
  • Tips for writing a strong methodology chapter
  • Frequently asked questions about methodology


Why is a methods section important?

Your methods section is your opportunity to share how you conducted your research and why you chose the methods you chose. It's also the place to show that your research was rigorously conducted and can be replicated.

It gives your research legitimacy and situates it within your field, and also gives your readers a place to refer to if they have any questions or critiques in other sections.

How to write a research methodology

Step 1: Explain your methodological approach

You can start by introducing your overall approach to your research. You have two options here.

Option 1: Start with your “what”

What research problem or question did you investigate? For example, did you:

  • Aim to describe the characteristics of something?
  • Explore an under-researched topic?
  • Establish a causal relationship?

And what type of data did you need to achieve this aim?

  • Quantitative data , qualitative data , or a mix of both?
  • Primary data collected yourself, or secondary data collected by someone else?
  • Experimental data gathered by controlling and manipulating variables, or descriptive data gathered via observations?

Option 2: Start with your “why”

Depending on your discipline, you can also start with a discussion of the rationale and assumptions underpinning your methodology. In other words, why did you choose these methods for your study?

  • Why is this the best way to answer your research question?
  • Is this a standard methodology in your field, or does it require justification?
  • Were there any ethical considerations involved in your choices?
  • What are the criteria for validity and reliability in this type of research?

Step 2: Describe your data collection methods

Once you have introduced your reader to your methodological approach, you should share full details about your data collection methods.

Quantitative methods

For your results to be considered generalisable, you should describe your quantitative research methods in enough detail for another researcher to replicate your study.

Here, explain how you operationalised your concepts and measured your variables. Discuss your sampling method or inclusion/exclusion criteria, as well as any tools, procedures, and materials you used to gather your data.

Surveys

Describe where, when, and how the survey was conducted.

  • How did you design the questionnaire?
  • What form did your questions take (e.g., multiple choice, Likert scale )?
  • Were your surveys conducted in-person or virtually?
  • What sampling method did you use to select participants?
  • What was your sample size and response rate?

Experiments

Share full details of the tools, techniques, and procedures you used to conduct your experiment.

  • How did you design the experiment ?
  • How did you recruit participants?
  • How did you manipulate and measure the variables ?
  • What tools did you use?

Existing data

Explain how you gathered and selected the material (such as datasets or archival data) that you used in your analysis.

  • Where did you source the material?
  • How was the data originally produced?
  • What criteria did you use to select material (e.g., date range)?

Example: The survey consisted of 5 multiple-choice questions and 10 questions measured on a 7-point Likert scale.

The goal was to collect survey responses from 350 customers visiting the fitness apparel company’s brick-and-mortar location in Boston on 4–8 July 2022, between 11:00 and 15:00.

Here, a customer was defined as a person who had purchased a product from the company on the day they took the survey. Participants were given 5 minutes to fill in the survey anonymously. In total, 408 customers responded, but not all surveys were fully completed. Due to this, 371 survey results were included in the analysis.

Qualitative methods

In qualitative research , methods are often more flexible and subjective. For this reason, it’s crucial to robustly explain the methodology choices you made.

Be sure to discuss the criteria you used to select your data, the context in which your research was conducted, and the role you played in collecting your data (e.g., were you an active participant, or a passive observer?)

Interviews or focus groups

Describe where, when, and how the interviews were conducted.

  • How did you find and select participants?
  • How many participants took part?
  • What form did the interviews take ( structured , semi-structured , or unstructured )?
  • How long were the interviews?
  • How were they recorded?

Participant observation

Describe where, when, and how you conducted the observation or ethnography.

  • What group or community did you observe? How long did you spend there?
  • How did you gain access to this group? What role did you play in the community?
  • How long did you spend conducting the research? Where was it located?
  • How did you record your data (e.g., audiovisual recordings, note-taking)?

Existing data

Explain how you selected case study materials for your analysis.

  • What type of materials did you analyse?
  • How did you select them?

Example: In order to gain better insight into possibilities for future improvement of the fitness shop's product range, semi-structured interviews were conducted with 8 returning customers.

Here, a returning customer was defined as someone who usually bought products at least twice a week from the store.

Surveys were used to select participants. Interviews were conducted in a small office next to the cash register and lasted approximately 20 minutes each. Answers were recorded by note-taking, and seven interviews were also filmed with consent. One interviewee preferred not to be filmed.

Mixed methods

Mixed methods research combines quantitative and qualitative approaches. If a standalone quantitative or qualitative study is insufficient to answer your research question, mixed methods may be a good fit for you.

Mixed methods are less common than standalone analyses, largely because they require a great deal of effort to pull off successfully. If you choose to pursue mixed methods, it’s especially important to robustly justify your methods here.

Step 3: Describe your analysis method

Next, you should indicate how you processed and analysed your data. Avoid going into too much detail: you should not start introducing or discussing any of your results at this stage.

In quantitative research, your analysis will be based on numbers. In your methods section, you can include:

  • How you prepared the data before analysing it (e.g., checking for missing data, removing outliers, transforming variables)
  • Which software you used (e.g., SPSS, Stata or R)
  • Which statistical tests you used (e.g., two-tailed t test, simple linear regression; see the brief sketch below)
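
As a rough illustration of the tests named above, here is a sketch in Python (the article mentions SPSS, Stata and R; Python is used here purely for illustration, and all data are invented).

```python
# A two-tailed independent-samples t test and a simple linear regression
# on invented customer-satisfaction data.
import numpy as np
from scipy import stats

group_a = np.array([7.1, 6.4, 6.9, 7.3, 6.6, 7.0])  # satisfaction, group A
group_b = np.array([6.2, 6.0, 6.5, 5.9, 6.3, 6.1])  # satisfaction, group B

t, p = stats.ttest_ind(group_a, group_b)  # two-tailed by default
print(f"t = {t:.2f}, p = {p:.3f}")

# Simple linear regression: does spend predict group A's satisfaction?
spend = np.array([40, 55, 32, 61, 48, 52])
slope, intercept, r, p_reg, stderr = stats.linregress(spend, group_a)
print(f"satisfaction = {intercept:.2f} + {slope:.3f} * spend")
```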

In qualitative research, your analysis will be based on language, images, and observations (often involving some form of textual analysis).

Specific methods might include:

  • Content analysis : Categorising and discussing the meaning of words, phrases and sentences
  • Thematic analysis : Coding and closely examining the data to identify broad themes and patterns
  • Discourse analysis : Studying communication and meaning in relation to their social context

Mixed methods combine the above two research methods, integrating both qualitative and quantitative approaches into one coherent analytical process.

Step 4: Evaluate and justify the methodological choices you made

Above all, your methodology section should clearly make the case for why you chose the methods you did. This is especially true if you did not take the most standard approach to your topic. In this case, discuss why other methods were not suitable for your objectives, and show how this approach contributes new knowledge or understanding.

In any case, it should be overwhelmingly clear to your reader that you set yourself up for success in terms of your methodology's design. Show how your methods should lead to results that are valid and reliable, while leaving the analysis of the meaning, importance, and relevance of your results for your discussion section.

  • Quantitative: Lab-based experiments cannot always accurately simulate real-life situations and behaviours, but they are effective for testing causal relationships between variables.
  • Qualitative: Unstructured interviews usually produce results that cannot be generalised beyond the sample group, but they provide a more in-depth understanding of participants' perceptions, motivations, and emotions.
  • Mixed methods: Despite issues systematically comparing differing types of data, a solely quantitative study would not sufficiently incorporate the lived experience of each participant, while a solely qualitative study would be insufficiently generalisable.

Remember that your aim is not just to describe your methods, but to show how and why you applied them. Again, it’s critical to demonstrate that your research was rigorously conducted and can be replicated.

Tips for writing a strong methodology chapter

1. Focus on your objectives and research questions

The methodology section should clearly show why your methods suit your objectives and convince the reader that you chose the best possible approach to answering your problem statement and research questions.

2. Cite relevant sources

Your methodology can be strengthened by referencing existing research in your field. This can help you to:

  • Show that you followed established practice for your type of research
  • Discuss how you decided on your approach by evaluating existing research
  • Present a novel methodological approach to address a gap in the literature

3. Write for your audience

Consider how much information you need to give, and avoid getting too lengthy. If you are using methods that are standard for your discipline, you probably don’t need to give a lot of background or justification.

Regardless, your methodology should be a clear, well-structured text that makes an argument for your approach, not just a list of technical details and procedures.

Frequently asked questions about methodology

Methodology refers to the overarching strategy and rationale of your research. Developing your methodology involves studying the research methods used in your field and the theories or principles that underpin them, in order to choose the approach that best matches your objectives.

Methods are the specific tools and procedures you use to collect and analyse data (e.g. interviews, experiments, surveys, statistical tests).

In a dissertation or scientific paper, the methodology chapter or methods section comes after the introduction and before the results, discussion and conclusion.

Depending on the length and type of document, you might also include a literature review or theoretical framework before the methodology.

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to test a hypothesis by systematically collecting and analysing data, while qualitative methods allow you to explore ideas and experiences in depth.

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.

Source: McCombes, S. (2022, October 10). What Is a Research Methodology? | Steps & Tips. Scribbr. https://www.scribbr.co.uk/thesis-dissertation/methodology/

Research quality

All research has limitations, which negatively impact upon the quality of the findings you arrive at from your data analysis. This is the case whether you are an undergraduate or master's level student doing a dissertation, a doctoral student, or a seasoned academic researcher. The main journal article you are interested in will also have a number of limitations, some of which will have inevitably become the justifications for your chosen route, and the approach you selected within that route.

You need to think about research quality at this stage in the dissertation process because many of the problems experienced during the dissertation process can be avoided. The trick is to (a) understand the types of research limitation you may face when doing a dissertation, (b) anticipate what these will be in your dissertation, and (c) avoid them becoming a reality (where possible). Quite simply, the better the research quality of your dissertation, (a) the fewer problems you will experience when carrying out your dissertation research, (b) the less time you will need to write up the Research Limitations section of your Discussion/Conclusions chapter (i.e., Chapter Five: Discussion/Conclusions), and (c) the greater the likelihood of a high mark.

To improve the research quality of your dissertation, you need to follow four steps: (a) understand the five factors through which research quality is assessed (internal validity, external validity, construct validity, reliability and objectivity); (b) assess the research quality of the main journal article; (c) consider the potential research quality of your research strategy; and (d) determine how you will overcome such weaknesses in your dissertation, considering the practical aspects of your dissertation and the implications these may have on the quality of your findings.

  • STEP ONE: Understand the five factors through which research quality is assessed
  • STEP TWO: Assess the research quality of the main journal article
  • STEP THREE: Consider the potential research quality of your research strategy
  • STEP FOUR: Determine how you will overcome such weaknesses in your dissertation

STEP ONE Understand the five factors through which research quality is assessed

In quantitative dissertations, research quality is assessed based on the internal validity, external validity, construct validity, reliability and objectivity of the research. Irrespective of the route that you are following, or the approach within that route, it is important that (a) your dissertation is as internally and externally valid as possible, (b) the measurement procedure you used (i.e., the research method and its measures) is construct valid and reliable, and (c) your research was carried out in an objective way. If you are already confident that you understand these five means through which the quality of quantitative research is assessed, jump to STEP TWO: Assess the research quality of the main journal article. If not, we would suggest that you learn about these terms in the Research Quality section of the Fundamentals part of Lærd Dissertation before reading on. After all, in STEP TWO below, you will need to assess the research quality of the main journal article before being able to consider the potential weaknesses in research quality in your dissertation, and how you will overcome these weaknesses. To do this, you first need to understand these five main factors through which research quality is assessed.

STEP TWO Assess the research quality of the main journal article

Irrespective of the route that you are following, the person marking your work will expect that you have critically analysed the research strategy used in the main journal article. Even if limitations in the research strategy do not act as the main justification for your choice of route, or the approach within that route (i.e., as is the case in method and measurement-based extensions, or design-based extensions within Route C: Extension), being able to critically analyse the research strategy used is typically a very important part of the marking scheme for dissertations. Just as you were expected to critically analyse the literature in STAGE FIVE: Building the theoretical case, you have to demonstrate an equally good knowledge of the weaknesses (and strengths) of the research strategy of the main journal article.

You can critically analyse the research strategy used by assessing the research quality of the research strategy used in the main journal article in terms of (a) the internal and external validity of the research strategy, and (b) the construct validity and reliability of the measurement procedure that was used (i.e., the research method and its measures). In most cases, since you did not witness the way that the research in the main journal article was carried out in practice, it will be difficult to assess the objectivity of the research.

Therefore, in order to assess the research quality of the main journal article, you should read up about internal validity, external validity, construct validity, reliability, and even the objectivity of research in the Research Quality section of the Fundamentals part of Lærd Dissertation. However, it is worth mentioning that:

To assess the internal and external validity of the research strategy, assess the threats to such internal and external validity in the main journal article. For example, in the article, Internal validity, we discuss 14 potential threats to internal validity, which include (a) history effects, (b) maturation, (c) testing effects, (d) instrumentation, (e) statistical regression, (f) selection biases, (g) experimental mortality, (h) causal time order, (i) diffusion (or imitation) of treatments, (j) compensation, (k) compensatory rivalry, (l) demoralization, (m) experimenter effects and (n) subject effects. Since any of these 14 threats could have affected the internal validity of the main journal article, you should briefly read up about each one, and then assess whether you think these threats were present in the main journal article. You should note that it will not always be possible to tell whether such a threat was a problem because whilst some are more evident (e.g., the authors of the main journal article should have specified how they selected individuals to be included in their sample, which could expose potential selection biases), many are not so obvious (e.g., experimenter effects could have occurred as a result of the personal characteristics of the researchers in the main journal article, or some non-verbal cues that they gave off, which influenced the choices participants made when they were being studied, but this would be extremely difficult to spot, especially if the authors did not explicitly try to assess such bias, which is uncommon). Again, you can learn about internal validity and external validity in the Research Quality section of the Fundamentals part of Lærd Dissertation.

Construct validity and reliability are two different ways of assessing the measurement procedure used in the main journal article. Construct validity is important because we want to make sure that the measurement procedure (e.g., a survey, structured interview, structured observation, etc.) that was used to measure the constructs we are interested in (e.g., sexism, obesity, famine, outsourcing, etc.) is valid. By construct valid, we mean that there is (a) a clear link between the constructs you are interested in and the measures and interventions that are used to operationalize them (i.e., measure them), and (b) a clear distinction between different constructs. Construct validity is an overarching term used to refer to the process of assessing the validity of the measurement procedure that was used, and you will need to read up about other types of validity that you will need to consider (i.e., content validity, convergent and divergent validity, criterion validity), especially if you are taking on a method or measurement-based extension, or design-based extension within Route C: Extension. You can learn more about construct validity in the article: Construct validity.

Reliability is important because in order for the results from a study to be considered valid, the measurement procedure must first be reliable. There are a number of types of reliability that you may need to consider when assessing the main journal article, depending on whether the measurement procedure involved (a) successive measurements; (b) simultaneous measurements by more than one researcher; and/or (c) multi-measure procedures. You can learn more about these types of reliability in the article: Reliability in research. When reading up about construct validity and reliability in these articles, you will learn how to assess a piece of research (i.e., your main journal article) in terms of its construct validity and reliability.

When you understand the five factors through which research quality is assessed (i.e., STEP ONE), and have assessed the research quality of your main journal article (i.e., STEP TWO), you will be well-equipped to consider the potential research quality of your research strategy, based on the route you adopted and the approach within that route (i.e., STEP THREE, next).

J Family Med Prim Care, 4(3), Jul-Sep 2015

Validity, reliability, and generalizability in qualitative research

Lawrence Leung

1 Department of Family Medicine, Queen's University, Kingston, Ontario, Canada

2 Centre of Studies in Primary Care, Queen's University, Kingston, Ontario, Canada

In general practice, qualitative research contributes as significantly as quantitative research, in particular regarding psycho-social aspects of patient care, health services provision, policy setting, and health administration. In contrast to quantitative research, qualitative research as a whole has been constantly critiqued, if not disparaged, for the lack of consensus on assessing its quality and robustness. This article illustrates with five published studies how qualitative research can impact and reshape the discipline of primary care, spiraling out from clinic-based health screening to community-based disease monitoring, evaluation of out-of-hours triage services to a provincial psychiatric care pathways model and, finally, national legislation of core measures for children's healthcare insurance. Fundamental concepts of validity, reliability, and generalizability as applicable to qualitative research are then addressed, with an update on the current views and controversies.

Nature of Qualitative Research versus Quantitative Research

The essence of qualitative research is to make sense of and recognize patterns among words in order to build up a meaningful picture without compromising its richness and dimensionality. Like quantitative research, qualitative research aims to seek answers to questions of “how, where, when, who and why” with a perspective to build a theory or refute an existing one. Unlike quantitative research, which deals primarily with numerical data and their statistical interpretations under a reductionist, logical and strictly objective paradigm, qualitative research handles nonnumerical information and its phenomenological interpretation, which inextricably ties in with human senses and subjectivity. While human emotions and perspectives from both subjects and researchers are considered undesirable biases confounding results in quantitative research, the same elements are considered essential and inevitable, if not treasurable, in qualitative research, as they invariably add extra dimensions and colors to enrich the corpus of findings. However, the issue of subjectivity and contextual ramifications has fueled incessant controversies regarding yardsticks for the quality and trustworthiness of qualitative research results for healthcare.

Impact of Qualitative Research upon Primary Care

In many ways, qualitative research contributes significantly, if not more so than quantitative research, to the field of primary care at various levels. Five qualitative studies are chosen to illustrate how various methodologies of qualitative research helped in advancing primary healthcare, from novel monitoring of chronic obstructive pulmonary disease (COPD) via mobile-health technology,[1] informed decision for colorectal cancer screening,[2] triaging out-of-hours GP services,[3] evaluating care pathways for community psychiatry[4] and finally prioritization of healthcare initiatives for legislation purposes at national levels.[5] With the recent advances of information technology and mobile connecting devices, self-monitoring and management of chronic diseases via tele-health technology may seem beneficial to both the patient and healthcare provider. Recruiting COPD patients who were given tele-health devices that monitored lung functions, Williams et al.[1] conducted phone interviews and analyzed their transcripts via a grounded theory approach, identifying themes which enabled them to conclude that such mobile-health setup and application helped to engage patients with better adherence to treatment and overall improvement in mood. Such positive findings were in contrast to previous studies, which opined that elderly patients were often challenged by operating computer tablets,[6] or conversing with the tele-health software.[7] To explore the content of recommendations for colorectal cancer screening given out by family physicians, Wackerbarth et al.[2] conducted semi-structured interviews with subsequent content analysis and found that most physicians delivered information to enrich patient knowledge with little regard to patients’ true understanding, ideas, and preferences in the matter. These findings suggested room for improvement for family physicians to better engage their patients in recommending preventative care. Faced with various models of out-of-hours triage services for GP consultations, Egbunike et al.[3] conducted thematic analysis on semi-structured telephone interviews with patients and doctors in various urban, rural and mixed settings. They found that the efficiency of triage services remained a prime concern for both users and providers, among issues of access to doctors and unfulfilled/mismatched expectations from users, which could arouse dissatisfaction and legal implications. In the UK, a care pathways model for community psychiatry had been introduced, but its benefits were unclear. Khandaker et al.[4] hence conducted a qualitative study using semi-structured interviews with medical staff and other stakeholders; adopting a grounded-theory approach, major themes emerged which included improved equality of access, more focused logistics, increased work throughput and better accountability for community psychiatry provided under the care pathway model. Finally, at the US national level, Mangione-Smith et al.[5] employed a modified Delphi method to gather consensus from a panel of nominators who were recognized experts and stakeholders in their disciplines, and identified a core set of quality measures for children's healthcare under the Medicaid and Children's Health Insurance Program. These core measures were made transparent for public opinion and later passed on for full legislation, hence illustrating the impact of qualitative research upon social welfare and policy improvement.

Overall Criteria for Quality in Qualitative Research

Given the diverse genera and forms of qualitative research, there is no consensus for assessing any piece of qualitative research work. Various approaches have been suggested, the two leading schools of thought being that of Dixon-Woods et al.,[8] which emphasizes methodology, and that of Lincoln et al.,[9] which stresses the rigor of interpretation of results. By identifying commonalities of qualitative research, Dixon-Woods produced a checklist of questions for assessing the clarity and appropriateness of the research question; the description and appropriateness of sampling, data collection and data analysis; levels of support and evidence for claims; coherence between data, interpretation and conclusions; and finally the level of contribution of the paper. These criteria informed the 10 questions of the Critical Appraisal Skills Programme checklist for qualitative studies.[10] However, these methodology-weighted criteria may not do justice to qualitative studies that differ in epistemological and philosophical paradigms,[11,12] one classic example being positivist versus interpretivist.[13] Equally, without a robust methodological layout, the rigorous interpretation of results advocated by Lincoln et al.[9] will not be good either. Meyrick[14] argued from a different angle and proposed fulfillment of the dual core criteria of “transparency” and “systematicity” for good quality qualitative research. In brief, every step of the research logistics (from theory formation, design of study, sampling, data acquisition and analysis to results and conclusions) has to be validated as transparent or systematic enough. In this manner, both the research process and results can be assured of high rigor and robustness.[14] Finally, Kitto et al.[15] epitomized six criteria for assessing the overall quality of qualitative research: (i) clarification and justification, (ii) procedural rigor, (iii) sample representativeness, (iv) interpretative rigor, (v) reflexive and evaluative rigor and (vi) transferability/generalizability, which also double as evaluative landmarks for manuscript review for the Medical Journal of Australia. As with quantitative research, quality for qualitative research can be assessed in terms of validity, reliability, and generalizability.

Validity in qualitative research means “appropriateness” of the tools, processes, and data: whether the research question is valid for the desired outcome, the choice of methodology is appropriate for answering the research question, the design is valid for the methodology, the sampling and data analysis are appropriate, and finally whether the results and conclusions are valid for the sample and context. In assessing the validity of qualitative research, the challenge can start from the ontology and epistemology of the issue being studied, e.g. the concept of “individual” is seen differently by humanistic and positive psychologists due to differing philosophical perspectives:[16] where humanistic psychologists believe the “individual” is a product of existential awareness and social interaction, positive psychologists think the “individual” exists side-by-side with the formation of any human being. Set off on different pathways, qualitative research regarding the individual's wellbeing will be concluded with varying validity. The choice of methodology must enable detection of findings/phenomena in the appropriate context for it to be valid, with due regard to cultural and contextual variables. For sampling, procedures and methods must be appropriate for the research paradigm and be distinguished between systematic,[17] purposeful[18] or theoretical (adaptive) sampling,[19,20] where systematic sampling has no a priori theory, purposeful sampling often has a certain aim or framework, and theoretical sampling is molded by the ongoing process of data collection and theory in evolution. For data extraction and analysis, several methods are adopted to enhance validity, including first-tier triangulation (of researchers) and second-tier triangulation (of resources and theories),[17,21] a well-documented audit trail of materials and processes,[22,23,24] multidimensional analysis as concept- or case-orientated[25,26] and respondent verification.[21,27]

Reliability

In quantitative research, reliability refers to exact replicability of the processes and the results. In qualitative research, with its diverse paradigms, such a definition of reliability is challenging and epistemologically counter-intuitive. Hence, the essence of reliability for qualitative research lies with consistency.[24,28] A margin of variability for results is tolerated in qualitative research provided the methodology and epistemological logistics consistently yield data that are ontologically similar but may differ in richness and ambience within similar dimensions. Silverman[29] proposed five approaches to enhancing the reliability of process and results: refutational analysis, constant data comparison, comprehensive data use, inclusion of the deviant case, and use of tables. As data are extracted from the original sources, researchers must verify their accuracy in terms of form and context with constant comparison,[27] either alone or with peers (a form of triangulation).[30] The scope and analysis of the data included should be as comprehensive and inclusive as possible, with reference to quantitative aspects where applicable.[30] Adopting the Popperian dictum of falsifiability as the essence of truth and science, attempts to refute the qualitative data and analyses should be made to assess reliability.[31]

Generalizability

Most qualitative research studies, if not all, are meant to study a specific issue or phenomenon in a certain population or ethnic group, in a focused locality and a particular context; hence, generalizability of qualitative research findings is usually not an expected attribute. However, with the rising trend of knowledge synthesis from qualitative research via meta-synthesis, meta-narrative or meta-ethnography, evaluation of generalizability becomes pertinent. A pragmatic approach to assessing generalizability for qualitative studies is to adopt the same criteria as for validity: that is, use of systematic sampling, triangulation and constant comparison, proper audit and documentation, and multi-dimensional theory.[17] However, some researchers espouse the approach of analytical generalization,[32] where one judges the extent to which the findings in one study can be generalized to another under a similar theoretical frame, and the proximal similarity model, where the generalizability of one study to another is judged by similarities between the time, place, people and other social contexts.[33] That said, Zimmer[34] questioned the suitability of meta-synthesis in view of the basic tenets of grounded theory,[35] phenomenology[36] and ethnography.[37] He concluded that any valid meta-synthesis must retain the other two goals of theory development and higher-level abstraction while in search of generalizability, and must be executed as a third-level interpretation using Gadamer's concepts of the hermeneutic circle,[38,39] dialogic process[38] and fusion of horizons.[39] Finally, Toye et al.[40] reported the practicality of using “conceptual clarity” and “interpretative rigor” as intuitive criteria for assessing quality in meta-ethnography, which somehow echoed Rolfe's controversial aesthetic theory of research reports.[41]

Food for Thought

Despite various measures to enhance or ensure the quality of qualitative studies, some researchers have opined, from a purist ontological and epistemological angle, that qualitative research is not a unified field but ipso facto a diverse one,[8] hence any attempt to synthesize or appraise different studies under one system is impossible and conceptually wrong. Barbour argued from a philosophical angle that these special measures or “technical fixes” (like purposive sampling, multiple coding, triangulation, and respondent validation) can never confer the rigor as conceived.[11] In extremis, Rolfe et al. opined, from the field of nursing research, that any set of formal criteria used to judge the quality of qualitative research is futile and without validity, and suggested that any qualitative report should be judged by the form in which it is written (aesthetic) and not by its contents (epistemic).[41] Rolfe's novel view is rebutted by Porter,[42] who argued via logical premises that two of Rolfe's fundamental statements were flawed: (i) “the content of research reports is determined by their forms” may not be a fact, and (ii) that research appraisal being “subject to individual judgment based on insight and experience” would mean those without sufficient experience of performing research would be unable to judge adequately, hence an elitist's principle. From a realism standpoint, Porter then proposes multiple and open approaches for validity in qualitative research that incorporate parallel perspectives[43,44] and diversification of meanings.[44] Any work of qualitative research, when read, is always a two-way interactive process, such that validity and quality have to be judged by the receiving end too, and not by the researcher end alone.

In summary, the three gold criteria of validity, reliability and generalizability apply in principle to assessing quality for both quantitative and qualitative research; what differs is the nature and type of processes that ontologically and epistemologically distinguish between the two.

Source of Support: Nil.

Conflict of Interest: None declared.

Reliability, Validity and Ethics

Lindy Woodrow

This chapter is about writing about the procedure of the research. This includes a discussion of reliability, validity and the ethics of research and writing. The level of detail about these issues varies across texts, but the reliability and validity of the study must feature in the text. Sometimes these issues are evident from the research instruments and analysis and sometimes they are referred to explicitly. This chapter includes the following sections:

  • Technical information
  • Reliability of a measure
  • Internal validity
  • External validity
  • Research ethics
  • Reporting on reliability
  • Writing about validity
  • Reporting on ethics
  • Writing about research procedure




Woodrow, L. (2014). Reliability, Validity and Ethics. In: Writing about Quantitative Research in Applied Linguistics. Palgrave Macmillan, London. https://doi.org/10.1057/9780230369955_3



Reliability and Validity Academic Research

Inter-rater reliability

Inter-rater reliability is a statistical concept used where a particular phenomenon is evaluated or rated by multiple raters. It is the degree to which the raters' scores agree, reflecting homogeneity and unanimity among them. To measure inter-rater reliability, count the number of ratings on which the raters agree, divide this by the total number of ratings made, and convert the result to a percentage. McHugh (2012) provides a good example of how inter-rater reliability is calculated, reviewing the methods stipulated by previous scholars, including the kappa statistic.
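
Below is a minimal sketch in Python of both measures: simple percent agreement as described above, and the kappa statistic that McHugh (2012) reviews, which corrects agreement for chance. The rater labels are invented for illustration.

```python
# Percent agreement and Cohen's kappa for two raters (hypothetical labels).
from collections import Counter

rater1 = ["yes", "no", "yes", "yes", "no", "yes", "no", "no", "yes", "yes"]
rater2 = ["yes", "no", "no", "yes", "no", "yes", "no", "yes", "yes", "yes"]

n = len(rater1)
observed = sum(a == b for a, b in zip(rater1, rater2)) / n  # percent agreement

# Agreement expected by chance, from each rater's label frequencies
c1, c2 = Counter(rater1), Counter(rater2)
expected = sum(c1[label] * c2[label] for label in c1) / n**2

kappa = (observed - expected) / (1 - expected)
print(f"Agreement = {observed:.0%}, Cohen's kappa = {kappa:.2f}")
```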

Test-retest reliability

This is another aspect of reliability. Test-retest reliability is the degree to which results obtained from the same test are similar and consistent over time. To assess it, the same test is administered to the same people on two or more occasions and the results are compared. There are two primary formulas for measuring test-retest reliability. The first, best applied where only two tests were conducted, is the Pearson correlation, which tests how well two sets of scores correlate.

The other method is the intraclass correlation, which is applicable where more than two tests were administered. These formulas yield test-retest coefficients that range between 0 and 1. In her article on validity and reliability in social science research, Drost (2011) describes the various aspects of reliability and validity and gives detailed examples of test-retest measurement.
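
For more than two administrations, a one-way intraclass correlation can be computed directly from the between- and within-subject mean squares. The sketch below uses invented scores and implements the simple one-way ICC(1); other ICC variants exist and may be more appropriate depending on the design.

```python
# One-way intraclass correlation, ICC(1), for repeated test administrations.
# Rows = participants, columns = test occasions (hypothetical scores).
import numpy as np

scores = np.array([
    [12, 13, 12],
    [18, 17, 19],
    [ 9, 10,  9],
    [15, 14, 16],
    [11, 12, 11],
], dtype=float)

n, k = scores.shape
grand_mean = scores.mean()
row_means = scores.mean(axis=1)

# Between-subject and within-subject mean squares (one-way ANOVA)
ms_between = k * ((row_means - grand_mean) ** 2).sum() / (n - 1)
ms_within = ((scores - row_means[:, None]) ** 2).sum() / (n * (k - 1))

icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
print(f"ICC(1) = {icc:.2f}")  # closer to 1 means more consistent retests
```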

Face validity

Face validity, also referred to as logical validity, is the extent to which an assessment intuitively appears to measure the variable or construct it is objectively meant to measure. In other words, an instrument has face validity when, on its face, it does what it claims to do. To assess face validity, one can compare the concepts being measured against their theoretical and practical applications.

Predictive validity

Predictive validity is the measure of how accurately a value obtained from a research study can be used to predict future patterns in the field studied. In their research on the predictive validity of public examinations, Obioma and Salau (2007) use this aspect to examine how students' performance in public examinations predicts their future academic performance at university and college level.

Concurrent validity

Concurrent validity is the degree to which results from one test correlate with results from another, established test of the same construct taken at around the same time. For instance, if an individual's IQ is measured with two different instruments at around the same time, concurrent validity is assessed by comparing how closely the two sets of results agree. A good example of research employing concurrent validity is that of Tamanini et al. (2004) on the Portuguese version of the King's Health Questionnaire in women after stress urinary incontinence surgery; the researchers indicate how this test is applied and measured by using it as the primary instrument in their research.

Addressing the issues of reliability and validity

For most qualitative researchers, the nature of the data is more important than the other descriptive elements of the research. This, however, does not rule out the need for conciseness in the descriptive sections. Reliability in research concerns the stability and consistency of the data, as well as the repeatability of the results if several tests are done (LoBiondo-Wood & Haber, 2014). Validity, on the other hand, concerns the accuracy and integrity of the data or results collected from the various tests a researcher performs. Researchers address these issues of validity and reliability in different ways, based on the purpose and the kind of research they carry out.

Obioma and Salau (2007) research the effects of public examinations on students' future academic performance. Their focus is therefore on data validation: ensuring that their conclusions and results have the accuracy and integrity required to support their arguments. The two researchers apply the aspects of predictive and concurrent validity, and their research question is grounded in predictive validity.

They have made sure that the data and arguments they bring forth are substantially valid and convincing enough to achieve the objective of predicting the future academic performance of children who take the public examinations governed by the various bodies in the country. They have, however, not applied any reliability aspects in their research, or at least none that can be easily identified.

Drost (2011) touches on both aspects: validity and reliability. Her article is not presented as original research but as a review of both aspects for the social sciences. She covers the various types of validity and reliability, provides real-life examples, and outlines the methods that can be used to measure each type. She approaches the concepts from a general perspective, explaining why researchers, especially in education and the social sciences, should adopt a culture of ensuring validity and reliability in their results. She also explains the various types of reliability, provides formulas and tools that can be applied to measure them, and identifies the factors that can affect the validity and reliability of data or results in research.

In conclusion, the concepts of validity and reliability are important in research, and researchers in all fields should aim to achieve them in the results they obtain. As Drost argues, strong support for the validity and reliability of research not only makes the research more credible but also limits the critiques it may face, filling the gaps that might otherwise be identified. A researcher should understand the various types of both reliability and validity, and know when it is appropriate to apply each in research.

McHugh, M. L. (2012). Interrater reliability: the kappa statistic. Biochemia Medica, 22(3), 276-282.

Drost, E. A. (2011). Validity and reliability of social science research. Education Research and Perspectives, 38(1), 105.

Obioma, G., & Salau, M. (2007). The predictive validity of public examinations: A case study of Nigeria. Abuja: Nigerian Educational Research & Development Council (NERDC).

Tamanini, J. T., Dambros, M., D'Ancona, C. A., Palma, P. C., Botega, N. J., Rios, L. A., & Netto Jr, N. R. (2004). Concurrent validity, internal consistency and responsiveness of the Portuguese version of the King's Health Questionnaire (KHQ) in women after stress urinary incontinence surgery. International Braz J Urol, 30(6), 479-486.

LoBiondo-Wood, G., & Haber, J. (2014). Reliability and validity. In G. LoBiondo-Wood & J. Haber (Eds.), Nursing research: Methods and critical appraisal for evidence-based practice (pp. 289-309).



COMMENTS

  1. Reliability vs Validity in Research

    Where to write about reliability and validity in a thesis. It's appropriate to discuss reliability and validity in various sections of your thesis or dissertation or research paper. Showing that you have taken them into account in planning your research and interpreting the results makes your work more credible and trustworthy.

  2. Validity & Reliability In Research

    As with validity, reliability is an attribute of a measurement instrument - for example, a survey, a weight scale or even a blood pressure monitor. But while validity is concerned with whether the instrument is measuring the "thing" it's supposed to be measuring, reliability is concerned with consistency and stability.

  3. Guide: Understanding Reliability and Validity

    Stability reliability (sometimes called test, re-test reliability) is the agreement of measuring instruments over time. To determine stability, a measure or test is repeated on the same subjects at a future date. Results are compared and correlated with the initial test to give a measure of stability.

  4. Dissertation Methodology

    Reliability and Validity: Discuss how you've ensured the reliability and validity of your research. This might include steps you took to reduce bias or increase the accuracy of your measurements. ... How to Write Dissertation Methodology. Writing a dissertation methodology requires you to be clear and precise about the way you've carried ...

  5. Reliability and validity of your thesis

    In this document you can also write down for yourself an introduction that you give each interviewee at the start of an interview, and you also mention the confidentiality. ... Describing reliability and validity in your thesis. To summarize, discuss as many aspects as possible that have led to the highest possible reliability of your study ...

  6. Reliability and Validity

    Issues of research reliability and validity need to be addressed in methodology chapter in a concise manner.. Reliability refers to the extent to which the same answers can be obtained using the same instruments more than one time. In simple terms, if your research is associated with high levels of reliability, then other researchers need to be able to generate the same results, using the same ...

  7. Reliability in research

    Reliability, like validity, is a way of assessing the quality of the measurement procedure used to collect data in a dissertation. In order for the results from a study to be considered valid, the measurement procedure must first be reliable. In this article, we: (a) explain what reliability is, providing examples; (b) highlight some of the more common threats to ...

  8. Reliability vs. Validity in Research

    Understanding Validity in Research. In research, validity is a measure of accuracy that indicates how well a method or test measures what it is designed to assess. High validity is indicative of results that closely correspond to actual characteristics, behaviors, or phenomena in the physical or social world, making it a critical aspect of any credible research endeavor.

  9. What Is a Research Methodology?

    A key part of your thesis, dissertation, or research paper, the methodology chapter explains what you did and how you did it, allowing readers to evaluate the reliability and validity of your research. It should include: the type of research you conducted; and how you collected and analyzed your data.

  10. How to write a great Research Quality section

    Quite simply, the better the research quality of your dissertation: (a) the fewer problems you will experience when carrying out your dissertation research; (b) the less time you will need to write up the Research Limitations section of your Conclusions chapter (i.e., Chapter Five: Conclusions); and (c) the greater the likelihood of a high mark.

  11. How do you write the validity section of the research proposal for a

    For the instrument's validity, explain whether you intend to assess face, content, construct or other kinds of validity. Then, you will have to address the internal and external validity of your study ...

  12. Reliability and Validity

    Reliability refers to the consistency of the measurement. Reliability shows how trustworthy the score of the test is. If the collected data shows the same results after being tested using various methods and sample groups, the information is reliable. Note, however, that a reliable method does not automatically produce valid results: reliability is necessary for validity, but it does not guarantee it (see the internal-consistency sketch after this list). Example: If you weigh yourself on a ...

  13. Reliability, Validity and Ethics

    This chapter is about writing about the procedure of the research. This includes a discussion of reliability, validity and the ethics of research and writing. The level of detail about these issues varies across texts, but the reliability and validity of the study must feature in the text. Sometimes these issues are evident from the research ...

  14. Step 5: Issues of research quality for your dissertation

    STEP ONE: Understand the five factors through which research quality is assessed. In quantitative dissertations, research quality is assessed based on the internal validity, external validity, construct validity, reliability and objectivity of the research. Irrespective of the route that you are following, or the approach within that route, it is important that (a) your dissertation is as ...

  15. Validity, reliability, and generalizability in qualitative research

    Validity in qualitative research means "appropriateness" of the tools, processes, and data: whether the research question is valid for the desired outcome, the choice of methodology is appropriate for answering the research question, the design is valid for the methodology, the sampling and data analysis is appropriate, and finally the ...

  16. (PDF) Validity and Reliability in Quantitative Research

    Abstract and Figures. The validity and reliability of the scales used in research are important factors that enable the research to yield sound results. For this reason, it is useful to ...

  17. Verification Strategies for Establishing Reliability and Validity in

    The purpose of this article is to reestablish reliability and validity as appropriate to qualitative inquiry; to identify the problems created by post hoc assessments of qualitative research; to review general verification strategies in relation to qualitative research; and to discuss the implications of returning the responsibility for the ...

  18. Establishing survey validity: A practical guide

    The first and second questions are about instrument validity, and the third question is about instrument reliability. 2. Evidence supporting validity: What is this idea of validity? Here is an example to help illustrate the general idea of validity. If you give students a set of questions having to do with their interest in science and they ...

  19. Chapter 3: Validity and Reliability

    3.1 Introduction. In Chapter 2, the study's aim of exploring how objects can influence the level of construct validity of a Picture Vocabulary Test was discussed, and a review was conducted of the literature on the various factors that play a role in how the validity level can be influenced. In this chapter, validity and reliability are ...

  20. Is Your Dissertation Research Reliable?

    Reliability is a characteristic of the data collected by an instrument, not of the instrument itself. Therefore, instead of stating that the instrument is reliable, we should say "the data collected by the instrument is reliable" or that "reliable data has been collected using the instrument in the past." Example: The data collected by the ...

  21. Reliability and Validity Academic Research

    Reliability in research concerns the stability and consistency of the data, as well as the repeatability of the results if several tests are done (LoBiondo-Wood & Haber 2014). Validity, on the other hand, concerns the accuracy and integrity of the data or results collected from the various tests that a researcher performs. Various ...
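
To make the test-retest procedure from excerpt 3 concrete, here is a minimal sketch in Python. Everything in it is illustrative: the respondents and their scores are invented, and the helper function is hand-rolled rather than taken from any particular statistics library. It simply correlates two administrations of the same test using Pearson's r, the comparison the excerpt describes.

```python
# A minimal, illustrative sketch of test-retest (stability) reliability.
# All names and scores below are invented for the demonstration.

import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    ss_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    ss_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (ss_x * ss_y)

# The same ten respondents take the same test twice, a few weeks apart.
test_1 = [12, 15, 9, 20, 14, 18, 11, 16, 13, 17]
test_2 = [13, 14, 10, 19, 15, 17, 11, 17, 12, 18]

r = pearson_r(test_1, test_2)
print(f"Test-retest reliability (Pearson r): {r:.2f}")
# A value close to 1.0 suggests the instrument yields stable scores over time.
```

Here the two administrations track each other closely, so r lands near 1; a low r would signal that the instrument's scores drift over time, which is exactly the stability problem the excerpt describes.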
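
Excerpts 2 and 12 describe reliability as consistency but do not name a statistic. One widely used measure of a multi-item scale's internal consistency is Cronbach's alpha; the sketch below computes it from first principles on invented Likert-scale responses. It is offered purely as an illustration under those assumptions, not as anything prescribed by the sources quoted above.

```python
# An illustrative sketch of Cronbach's alpha (internal consistency).
# Hypothetical data: 5 respondents answering 4 Likert items (scored 1-5).

def variance(values):
    """Sample variance (n - 1 denominator)."""
    n = len(values)
    mean = sum(values) / n
    return sum((v - mean) ** 2 for v in values) / (n - 1)

responses = [
    [4, 5, 4, 4],   # respondent 1's answers to the 4 items
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 3, 4],
]

k = len(responses[0])                  # number of items on the scale
items = list(zip(*responses))          # transpose: all scores per item
sum_item_vars = sum(variance(item) for item in items)
total_var = variance([sum(row) for row in responses])

alpha = (k / (k - 1)) * (1 - sum_item_vars / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")
# Values above roughly 0.7 are often read as acceptable internal consistency.
```

The design choice here is deliberate: alpha rewards items that move together across respondents, so a scale whose items all tap the same construct scores high, while a scale mixing unrelated items scores low; this is the "consistency" the excerpts gesture at, made computable.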