11.2 Writing a Research Report in American Psychological Association (APA) Style

Learning Objectives

  • Identify the major sections of an APA-style research report and the basic contents of each section.
  • Plan and write an effective APA-style research report.

In this section, we look at how to write an APA-style empirical research report, an article that presents the results of one or more new studies. Recall that the standard sections of an empirical research report provide a kind of outline. Here we consider each of these sections in detail, including what information it contains, how that information is formatted and organized, and tips for writing each section. At the end of this section is a sample APA-style research report that illustrates many of these principles.

Sections of a Research Report

Title Page and Abstract

An APA-style research report begins with a title page. The title is centered in the upper half of the page, with each important word capitalized. The title should clearly and concisely (in about 12 words or fewer) communicate the primary variables and research questions. This sometimes requires a main title followed by a subtitle that elaborates on the main title, in which case the main title and subtitle are separated by a colon. Here are some titles from recent issues of professional journals published by the American Psychological Association.

  • Sex Differences in Coping Styles and Implications for Depressed Mood
  • Effects of Aging and Divided Attention on Memory for Items and Their Contexts
  • Computer-Assisted Cognitive Behavioral Therapy for Child Anxiety: Results of a Randomized Clinical Trial
  • Virtual Driving and Risk Taking: Do Racing Games Increase Risk-Taking Cognitions, Affect, and Behavior?

Below the title are the authors’ names and, on the next line, their institutional affiliation—the university or other institution where the authors worked when they conducted the research. As we have already seen, the authors are listed in an order that reflects their contribution to the research. When multiple authors have made equal contributions to the research, they often list their names alphabetically or in a randomly determined order.

It’s Soooo Cute! How Informal Should an Article Title Be?

In some areas of psychology, the titles of many empirical research reports are informal in a way that is perhaps best described as “cute.” They usually take the form of a play on words or a well-known expression that relates to the topic under study. Here are some examples from recent issues of the journal Psychological Science.

  • “Smells Like Clean Spirit: Nonconscious Effects of Scent on Cognition and Behavior”
  • “Time Crawls: The Temporal Resolution of Infants’ Visual Attention”
  • “Scent of a Woman: Men’s Testosterone Responses to Olfactory Ovulation Cues”
  • “Apocalypse Soon?: Dire Messages Reduce Belief in Global Warming by Contradicting Just-World Beliefs”
  • “Serial vs. Parallel Processing: Sometimes They Look Like Tweedledum and Tweedledee but They Can (and Should) Be Distinguished”
  • “How Do I Love Thee? Let Me Count the Words: The Social Effects of Expressive Writing”

Individual researchers differ quite a bit in their preference for such titles. Some use them regularly, while others never use them. What might be some of the pros and cons of using cute article titles?

For articles that are being submitted for publication, the title page also includes an author note that lists the authors’ full institutional affiliations, any acknowledgments the authors wish to make to agencies that funded the research or to colleagues who commented on it, and contact information for the authors. For student papers that are not being submitted for publication—including theses—author notes are generally not necessary.

The abstract is a summary of the study. It is the second page of the manuscript and is headed with the word Abstract. The first line is not indented. The abstract presents the research question, a summary of the method, the basic results, and the most important conclusions. Because the abstract is usually limited to about 200 words, it can be a challenge to write a good one.

Introduction

The introduction begins on the third page of the manuscript. The heading at the top of this page is the full title of the manuscript, with each important word capitalized as on the title page. The introduction includes three distinct subsections, although these are typically not identified by separate headings. The opening introduces the research question and explains why it is interesting, the literature review discusses relevant previous research, and the closing restates the research question and comments on the method used to answer it.

The Opening

The opening, which is usually a paragraph or two in length, introduces the research question and explains why it is interesting. To capture the reader’s attention, researcher Daryl Bem recommends starting with general observations about the topic under study, expressed in ordinary language (not technical jargon)—observations that are about people and their behavior (not about researchers or their research; Bem, 2003 [1]). Concrete examples are often very useful here. According to Bem, this would be a poor way to begin a research report:

Festinger’s theory of cognitive dissonance received a great deal of attention during the latter part of the 20th century. (p. 191)

The following would be much better:

The individual who holds two beliefs that are inconsistent with one another may feel uncomfortable. For example, the person who knows that he or she enjoys smoking but believes it to be unhealthy may experience discomfort arising from the inconsistency or disharmony between these two thoughts or cognitions. This feeling of discomfort was called cognitive dissonance by social psychologist Leon Festinger (1957), who suggested that individuals will be motivated to remove this dissonance in whatever way they can (p. 191).

After capturing the reader’s attention, the opening should go on to introduce the research question and explain why it is interesting. Will the answer fill a gap in the literature? Will it provide a test of an important theory? Does it have practical implications? Giving readers a clear sense of what the research is about and why they should care about it will motivate them to continue reading the literature review—and will help them make sense of it.

Breaking the Rules

Researcher Larry Jacoby reported several studies showing that a word that people see or hear repeatedly can seem more familiar even when they do not recall the repetitions—and that this tendency is especially pronounced among older adults. He opened his article with the following humorous anecdote:

A friend whose mother is suffering symptoms of Alzheimer’s disease (AD) tells the story of taking her mother to visit a nursing home, preliminary to her mother’s moving there. During an orientation meeting at the nursing home, the rules and regulations were explained, one of which regarded the dining room. The dining room was described as similar to a fine restaurant except that tipping was not required. The absence of tipping was a central theme in the orientation lecture, mentioned frequently to emphasize the quality of care along with the advantages of having paid in advance. At the end of the meeting, the friend’s mother was asked whether she had any questions. She replied that she only had one question: “Should I tip?” (Jacoby, 1999, p. 3)

Although both humor and personal anecdotes are generally discouraged in APA-style writing, this example is a highly effective way to start because it both engages the reader and provides an excellent real-world example of the topic under study.

The Literature Review

Immediately after the opening comes the literature review, which describes relevant previous research on the topic and can be anywhere from several paragraphs to several pages in length. However, the literature review is not simply a list of past studies. Instead, it constitutes a kind of argument for why the research question is worth addressing. By the end of the literature review, readers should be convinced that the research question makes sense and that the present study is a logical next step in the ongoing research process.

Like any effective argument, the literature review must have some kind of structure. For example, it might begin by describing a phenomenon in a general way along with several studies that demonstrate it, then describing two or more competing theories of the phenomenon, and finally presenting a hypothesis to test one or more of the theories. Or it might describe one phenomenon, then describe another phenomenon that seems inconsistent with the first one, then propose a theory that resolves the inconsistency, and finally present a hypothesis to test that theory. In applied research, it might describe a phenomenon or theory, then describe how that phenomenon or theory applies to some important real-world situation, and finally suggest a way to test whether it does, in fact, apply to that situation.

Looking at the literature review in this way emphasizes a few things. First, it is extremely important to start with an outline of the main points that you want to make, organized in the order that you want to make them. The basic structure of your argument, then, should be apparent from the outline itself. Second, it is important to emphasize the structure of your argument in your writing. One way to do this is to begin the literature review by summarizing your argument even before you begin to make it. “In this article, I will describe two apparently contradictory phenomena, present a new theory that has the potential to resolve the apparent contradiction, and finally present a novel hypothesis to test the theory.” Another way is to open each paragraph with a sentence that summarizes the main point of the paragraph and links it to the preceding points. These opening sentences provide the “transitions” that many beginning researchers have difficulty with. Instead of beginning a paragraph by launching into a description of a previous study, such as “Williams (2004) found that…,” it is better to start by indicating something about why you are describing this particular study. Here are some simple examples:

Another example of this phenomenon comes from the work of Williams (2004).

Williams (2004) offers one explanation of this phenomenon.

An alternative perspective has been provided by Williams (2004).

We used a method based on the one used by Williams (2004).

Finally, remember that your goal is to construct an argument for why your research question is interesting and worth addressing—not necessarily why your favorite answer to it is correct. In other words, your literature review must be balanced. If you want to emphasize the generality of a phenomenon, then of course you should discuss various studies that have demonstrated it. However, if there are other studies that have failed to demonstrate it, you should discuss them too. Or if you are proposing a new theory, then of course you should discuss findings that are consistent with that theory. However, if there are other findings that are inconsistent with it, again, you should discuss them too. It is acceptable to argue that the balance of the research supports the existence of a phenomenon or is consistent with a theory (and that is usually the best that researchers in psychology can hope for), but it is not acceptable to ignore contradictory evidence. Besides, a large part of what makes a research question interesting is uncertainty about its answer.

The Closing

The closing of the introduction—typically the final paragraph or two—usually includes two important elements. The first is a clear statement of the main research question and hypothesis. This statement tends to be more formal and precise than in the opening and is often expressed in terms of operational definitions of the key variables. The second is a brief overview of the method and some comment on its appropriateness. Here, for example, is how Darley and Latané (1968) [2] concluded the introduction to their classic article on the bystander effect:

These considerations lead to the hypothesis that the more bystanders to an emergency, the less likely, or the more slowly, any one bystander will intervene to provide aid. To test this proposition it would be necessary to create a situation in which a realistic “emergency” could plausibly occur. Each subject should also be blocked from communicating with others to prevent his getting information about their behavior during the emergency. Finally, the experimental situation should allow for the assessment of the speed and frequency of the subjects’ reaction to the emergency. The experiment reported below attempted to fulfill these conditions. (p. 378)

Thus the introduction leads smoothly into the next major section of the article—the method section.

Method

The method section is where you describe how you conducted your study. An important principle for writing a method section is that it should be clear and detailed enough that other researchers could replicate the study by following your “recipe.” This means that it must describe all the important elements of the study—basic demographic characteristics of the participants, how they were recruited, whether they were randomly assigned to conditions, how the variables were manipulated or measured, how counterbalancing was accomplished, and so on. At the same time, it should avoid irrelevant details such as the fact that the study was conducted in Classroom 37B of the Industrial Technology Building or that the questionnaire was double-sided and completed using pencils.

The method section begins immediately after the introduction ends with the heading “Method” (not “Methods”) centered on the page. Immediately after this is the subheading “Participants,” left justified and in italics. The participants subsection indicates how many participants there were, the number of women and men, some indication of their age, other demographics that may be relevant to the study, and how they were recruited, including any incentives given for participation.

Figure 11.1 Three Ways of Organizing an APA-Style Method


After the participants section, the structure can vary a bit. Figure 11.1 shows three common approaches. In the first, the participants section is followed by a design and procedure subsection, which describes the rest of the method. This works well for methods that are relatively simple and can be described adequately in a few paragraphs. In the second approach, the participants section is followed by separate design and procedure subsections. This works well when both the design and the procedure are relatively complicated and each requires multiple paragraphs.

What is the difference between design and procedure? The design of a study is its overall structure. What were the independent and dependent variables? Was the independent variable manipulated, and if so, was it manipulated between or within subjects? How were the variables operationally defined? The procedure is how the study was carried out. It often works well to describe the procedure in terms of what the participants did rather than what the researchers did. For example, the participants gave their informed consent, read a set of instructions, completed a block of four practice trials, completed a block of 20 test trials, completed two questionnaires, and were debriefed and excused.

In the third basic way to organize a method section, the participants subsection is followed by a materials subsection before the design and procedure subsections. This works well when there are complicated materials to describe. This might mean multiple questionnaires, written vignettes that participants read and respond to, perceptual stimuli, and so on. The heading of this subsection can be modified to reflect its content. Instead of “Materials,” it can be “Questionnaires,” “Stimuli,” and so on. The materials subsection is also a good place to refer to the reliability and/or validity of the measures. This is where you would present test-retest correlations, Cronbach’s α, or other statistics to show that the measures are consistent across time and across items and that they accurately measure what they are intended to measure.
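Where the text above mentions reporting Cronbach’s α in the materials subsection, the statistic itself is straightforward to compute. The sketch below (all item scores are hypothetical) uses the standard formula α = (k / (k − 1)) × (1 − Σ item variances / variance of total scores), where k is the number of items:

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a list of items, where each item is a list of
    one score per participant (participants in the same order for every item)."""
    k = len(item_scores)
    # Each participant's total score summed across the k items.
    totals = [sum(scores) for scores in zip(*item_scores)]
    item_var_sum = sum(variance(item) for item in item_scores)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Three hypothetical 5-point scale items answered by five participants.
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 1],
]
alpha = cronbach_alpha(items)  # high alpha: the items rank participants similarly
```

In a real report you would compute α from your actual item-level data and simply report the value (e.g., “internal consistency was acceptable, α = .92”).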

Results

The results section is where you present the main results of the study, including the results of the statistical analyses. Although it does not include the raw data—individual participants’ responses or scores—researchers should save their raw data and make them available to other researchers who request them. Several journals now encourage the open sharing of raw data online.

Although there are no standard subsections, it is still important for the results section to be logically organized. Typically it begins with certain preliminary issues. One is whether any participants or responses were excluded from the analyses and why. The rationale for excluding data should be described clearly so that other researchers can decide whether it is appropriate. A second preliminary issue is how multiple responses were combined to produce the primary variables in the analyses. For example, if participants rated the attractiveness of 20 stimulus people, you might have to explain that you began by computing the mean attractiveness rating for each participant. Or if they recalled as many items as they could from a study list of 20 words, did you count the number correctly recalled, compute the percentage correctly recalled, or perhaps compute the number correct minus the number incorrect? A final preliminary issue is whether the manipulation was successful. This is where you would report the results of any manipulation checks.
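The scoring decisions described above amount to a few lines of arithmetic. A minimal sketch, with an entirely hypothetical study list and one participant’s hypothetical responses:

```python
# 1. Combining multiple ratings into one primary variable:
#    a participant's mean attractiveness rating across stimuli.
ratings = [6, 4, 7, 5, 6]                      # one participant, five stimuli
mean_rating = sum(ratings) / len(ratings)

# 2. Scoring free recall against a study list (sets make the comparison easy).
study_list = {"apple", "chair", "river", "candle"}   # 20 words in a real study
recalled = {"apple", "river", "ocean"}               # this participant's responses

n_correct = len(recalled & study_list)         # items correctly recalled
n_incorrect = len(recalled - study_list)       # intrusions not on the list
pct_correct = 100 * n_correct / len(study_list)
corrected_score = n_correct - n_incorrect      # number correct minus number incorrect
```

Whichever scoring rule you choose, the results section should state it explicitly so that readers know exactly how the primary variables were constructed.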

The results section should then tackle the primary research questions, one at a time. Again, there should be a clear organization. One approach would be to answer the most general questions and then proceed to answer more specific ones. Another would be to answer the main question first and then to answer secondary ones. Regardless, Bem (2003) [3] suggests the following basic structure for discussing each new result:

  • Remind the reader of the research question.
  • Give the answer to the research question in words.
  • Present the relevant statistics.
  • Qualify the answer if necessary.
  • Summarize the result.

Notice that only Step 3 necessarily involves numbers. The rest of the steps involve presenting the research question and the answer to it in words. In fact, the basic results should be clear even to a reader who skips over the numbers.

Discussion

The discussion is the last major section of the research report. Discussions usually consist of some combination of the following elements:

  • Summary of the research
  • Theoretical implications
  • Practical implications
  • Limitations
  • Suggestions for future research

The discussion typically begins with a summary of the study that provides a clear answer to the research question. In a short report with a single study, this might require no more than a sentence. In a longer report with multiple studies, it might require a paragraph or even two. The summary is often followed by a discussion of the theoretical implications of the research. Do the results provide support for any existing theories? If not, how can they be explained? Although you do not have to provide a definitive explanation or detailed theory for your results, you at least need to outline one or more possible explanations. In applied research—and often in basic research—there is also some discussion of the practical implications of the research. How can the results be used, and by whom, to accomplish some real-world goal?

The theoretical and practical implications are often followed by a discussion of the study’s limitations. Perhaps there are problems with its internal or external validity. Perhaps the manipulation was not very effective or the measures not very reliable. Perhaps there is some evidence that participants did not fully understand their task or that they were suspicious of the intent of the researchers. Now is the time to discuss these issues and how they might have affected the results. But do not overdo it. All studies have limitations, and most readers will understand that a different sample or different measures might have produced different results. Unless there is good reason to think they would have, however, there is no reason to mention these routine issues. Instead, pick two or three limitations that seem like they could have influenced the results, explain how they could have influenced the results, and suggest ways to deal with them.

Most discussions end with some suggestions for future research. If the study did not satisfactorily answer the original research question, what will it take to do so? What new research questions has the study raised? This part of the discussion, however, is not just a list of new questions. It is a discussion of two or three of the most important unresolved issues. This means identifying and clarifying each question, suggesting some alternative answers, and even suggesting ways they could be studied.

Finally, some researchers are quite good at ending their articles with a sweeping or thought-provoking conclusion. Darley and Latané (1968) [4], for example, ended their article on the bystander effect by discussing the idea that whether people help others may depend more on the situation than on their personalities. Their final sentence is, “If people understand the situational forces that can make them hesitate to intervene, they may better overcome them” (p. 383). However, this kind of ending can be difficult to pull off. It can sound overreaching or just banal and end up detracting from the overall impact of the article. It is often better simply to end by returning to the problem or issue introduced in your opening paragraph and clearly stating how your research has addressed that issue or problem.

References

The references section begins on a new page with the heading “References” centered at the top of the page. All references cited in the text are then listed in the format presented earlier. They are listed alphabetically by the last name of the first author. If two sources have the same first author, they are listed alphabetically by the last name of the second author. If all the authors are the same, then they are listed chronologically by the year of publication. Everything in the reference list is double-spaced both within and between references.

Appendices, Tables, and Figures

Appendices, tables, and figures come after the references. An appendix is appropriate for supplemental material that would interrupt the flow of the research report if it were presented within any of the major sections. An appendix could be used to present lists of stimulus words, questionnaire items, detailed descriptions of special equipment or unusual statistical analyses, or references to the studies that are included in a meta-analysis. Each appendix begins on a new page. If there is only one, the heading is “Appendix,” centered at the top of the page. If there is more than one, the headings are “Appendix A,” “Appendix B,” and so on, and they appear in the order they were first mentioned in the text of the report.

After any appendices come tables and then figures. Tables and figures are both used to present results. Figures can also be used to display graphs, illustrate theories (e.g., in the form of a flowchart), display stimuli, outline procedures, and present many other kinds of information. Each table and figure appears on its own page. Tables are numbered in the order that they are first mentioned in the text (“Table 1,” “Table 2,” and so on). Figures are numbered the same way (“Figure 1,” “Figure 2,” and so on). A brief explanatory title, with the important words capitalized, appears above each table. Each figure is given a brief explanatory caption, where (aside from proper nouns or names) only the first word of each sentence is capitalized. More details on preparing APA-style tables and figures are presented later in the book.

Sample APA-Style Research Report

Figures 11.2, 11.3, 11.4, and 11.5 show some sample pages from an APA-style empirical research report originally written by undergraduate student Tomoe Suyama at California State University, Fresno. The main purpose of these figures is to illustrate the basic organization and formatting of an APA-style empirical research report, although many high-level and low-level style conventions can be seen here too.

Figure 11.2 Title Page and Abstract. This student paper does not include the author note on the title page. The abstract appears on its own page.


Figure 11.3 Introduction and Method. Note that the introduction is headed with the full title, and the method section begins immediately after the introduction ends.


Figure 11.4 Results and Discussion. The discussion begins immediately after the results section ends.


Figure 11.5 References and Figure. If there were appendices or tables, they would come before the figure.


Key Takeaways

  • An APA-style empirical research report consists of several standard sections. The main ones are the abstract, introduction, method, results, discussion, and references.
  • The introduction consists of an opening that presents the research question, a literature review that describes previous research on the topic, and a closing that restates the research question and comments on the method. The literature review constitutes an argument for why the current study is worth doing.
  • The method section describes the method in enough detail that another researcher could replicate the study. At a minimum, it consists of a participants subsection and a design and procedure subsection.
  • The results section describes the results in an organized fashion. Each primary result is presented in terms of statistical results but also explained in words.
  • The discussion typically summarizes the study, discusses theoretical and practical implications and limitations of the study, and offers suggestions for further research.
Exercises

  • Practice: Look through an issue of a general interest professional journal (e.g., Psychological Science). Read the opening of the first five articles and rate the effectiveness of each one from 1 (very ineffective) to 5 (very effective). Write a sentence or two explaining each rating.
  • Practice: Find a recent article in a professional journal and identify where the opening, literature review, and closing of the introduction begin and end.
  • Practice: Find a recent article in a professional journal and highlight in a different color each of the following elements in the discussion: summary, theoretical implications, practical implications, limitations, and suggestions for future research.

References

  • Bem, D. J. (2003). Writing the empirical journal article. In J. M. Darley, M. P. Zanna, & H. R. Roediger III (Eds.), The complete academic: A practical guide for the beginning social scientist (2nd ed.). Washington, DC: American Psychological Association.
  • Darley, J. M., & Latané, B. (1968). Bystander intervention in emergencies: Diffusion of responsibility. Journal of Personality and Social Psychology, 8, 377–383.


Writing Research Papers

Research Paper Structure

Whether you are writing a B.S. Degree Research Paper or completing a research report for a Psychology course, it is highly likely that you will need to organize your research paper in accordance with American Psychological Association (APA) guidelines.  Here we discuss the structure of research papers according to APA style.

Major Sections of a Research Paper in APA Style

A complete research paper in APA style that is reporting on experimental research will typically contain a Title page, Abstract, Introduction, Methods, Results, Discussion, and References sections.  Many will also contain Figures and Tables and some will have an Appendix or Appendices.  These sections are detailed as follows (for a more in-depth guide, please refer to “How to Write a Research Paper in APA Style,” a comprehensive guide developed by Prof. Emma Geller).

Title Page

What is this paper called and who wrote it? – the first page of the paper; this includes the name of the paper, a “running head”, authors, and institutional affiliation of the authors.  The institutional affiliation is usually listed in an Author Note that is placed towards the bottom of the title page.  In some cases, the Author Note also contains an acknowledgment of any funding support and of any individuals that assisted with the research project.

Abstract

One-paragraph summary of the entire study – typically no more than 250 words in length (and often considerably shorter), the Abstract provides an overview of the study.

Introduction

What is the topic and why is it worth studying? – the first major section of text in the paper, the Introduction commonly describes the topic under investigation, summarizes or discusses relevant prior research (for related details, please see the Writing Literature Reviews section of this website), identifies unresolved issues that the current research will address, and provides an overview of the research that is to be described in greater detail in the sections to follow.

Methods

What did you do? – a section which details how the research was performed.  It typically features a description of the participants/subjects that were involved, the study design, the materials that were used, and the study procedure.  If there were multiple experiments, then each experiment may require a separate Methods section.  A rule of thumb is that the Methods section should be sufficiently detailed for another researcher to duplicate your research.

Results

What did you find? – a section which describes the data that were collected and the results of any statistical tests that were performed.  It may also be prefaced by a description of the analysis procedure that was used.  If there were multiple experiments, then each experiment may require a separate Results section.

Discussion

What is the significance of your results? – the final major section of text in the paper.  The Discussion commonly features a summary of the results that were obtained in the study, describes how those results address the topic under investigation and/or the issues that the research was designed to address, and may expand upon the implications of those findings.  Limitations and directions for future research are also commonly addressed.

References

List of articles and any books cited – an alphabetized list of the sources that are cited in the paper (by last name of the first author of each source).  Each reference should follow specific APA guidelines regarding author names, dates, article titles, journal titles, journal volume numbers, page numbers, book publishers, publisher locations, websites, and so on (for more information, please see the Citing References in APA Style page of this website).

Tables and Figures

Graphs and data (optional in some cases) – depending on the type of research being performed, there may be Tables and/or Figures (however, in some cases, there may be neither).  In APA style, each Table and each Figure is placed on a separate page and all Tables and Figures are included after the References.  Tables are included first, followed by Figures.  However, for some journals and undergraduate research papers (such as the B.S. Research Paper or Honors Thesis), Tables and Figures may be embedded in the text (depending on the instructor’s or editor’s policies; for more details, see "Departures from APA Style" below).

Appendices

Supplementary information (optional) – in some cases, additional information that is not critical to understanding the research paper, such as a list of experiment stimuli, details of a secondary analysis, or programming code, is provided.  This is often placed in an Appendix.

Variations of Research Papers in APA Style

Although the major sections described above are common to most research papers written in APA style, there are variations on that pattern.  These variations include: 

  • Literature reviews – when a paper reviews prior published research rather than presenting new empirical research (as in a review article, and particularly a qualitative review), the authors may forgo the Methods and Results sections. Instead, the paper follows a different structure, such as an Introduction section followed by sections for each aspect of the body of research being reviewed, and then perhaps a Discussion section.
  • Multi-experiment papers – when there are multiple experiments, it is common to follow the Introduction with an Experiment 1 section, itself containing Methods, Results, and Discussion subsections. Then there is an Experiment 2 section with a similar structure, an Experiment 3 section with a similar structure, and so on until all experiments are covered.  Towards the end of the paper there is a General Discussion section followed by References.  Additionally, in multi-experiment papers, it is common for the Results and Discussion subsections for individual experiments to be combined into single “Results and Discussion” sections.

Departures from APA Style

In some cases, official APA style might not be followed (however, be sure to check with your editor, instructor, or other sources before deviating from standards of the Publication Manual of the American Psychological Association).  Such deviations may include:

  • Placement of Tables and Figures – in some cases, to make reading through the paper easier, Tables and/or Figures are embedded in the text (for example, having a bar graph placed in the relevant Results section). The embedding of Tables and/or Figures in the text is one of the most common deviations from APA style (and is commonly allowed in B.S. Degree Research Papers and Honors Theses; however, you should check with your instructor, supervisor, or editor first).
  • Incomplete research – sometimes a B.S. Degree Research Paper in this department is written about research that is currently being planned or is in progress. In those circumstances, sometimes only an Introduction and Methods section, followed by References, is included (that is, in cases where the research itself has not formally begun).  In other cases, preliminary results are presented and noted as such in the Results section (such as in cases where the study is underway but not complete), and the Discussion section includes caveats about the in-progress nature of the research.  Again, you should check with your instructor, supervisor, or editor first.
  • Class assignments – in some classes in this department, an assignment must be written in APA style but is not exactly a traditional research paper (for instance, a student may be asked to write an APA-style report about an article that they read). In that case, the structure of the paper might approximate the typical sections of a research paper in APA style, but not entirely.  You should check with your instructor for further guidelines.

Workshops

  • For in-person discussion of the process of writing research papers, please consider attending this department’s “Writing Research Papers” workshop (for dates and times, please check the undergraduate workshops calendar).

Downloadable Resources

  • How to Write APA Style Research Papers (a comprehensive guide) [ PDF ]
  • Tips for Writing APA Style Research Papers (a brief summary) [ PDF ]
  • Example APA Style Research Paper (for B.S. Degree – empirical research) [ PDF ]
  • Example APA Style Research Paper (for B.S. Degree – literature review) [ PDF ]

Further Resources

How-To Videos     

  • Writing Research Paper Videos

APA Journal Article Reporting Guidelines

  • Appelbaum, M., Cooper, H., Kline, R. B., Mayo-Wilson, E., Nezu, A. M., & Rao, S. M. (2018). Journal article reporting standards for quantitative research in psychology: The APA Publications and Communications Board task force report. American Psychologist, 73(1), 3–25.
  • Levitt, H. M., Bamberg, M., Creswell, J. W., Frost, D. M., Josselson, R., & Suárez-Orozco, C. (2018). Journal article reporting standards for qualitative primary, qualitative meta-analytic, and mixed methods research in psychology: The APA Publications and Communications Board task force report. American Psychologist, 73(1), 26–46.

External Resources

  • Formatting APA Style Papers in Microsoft Word
  • How to Write an APA Style Research Paper from Hamilton College
  • WikiHow Guide to Writing APA Research Papers
  • Sample APA Formatted Paper with Comments
  • Sample APA Formatted Paper
  • Tips for Writing a Paper in APA Style

1 VandenBos, G. R. (Ed.). (2010). Publication manual of the American Psychological Association (6th ed., pp. 41–60). Washington, DC: American Psychological Association.

2 Geller, E. (2018). How to write an APA-style research report [Instructional materials]. Prepared by S. C. Pan for UCSD Psychology.


The Professional Counselor

Guidelines and Recommendations for Writing a Rigorous Quantitative Methods Section in Counseling and Related Fields

Volume 12 - Issue 3

Michael T. Kalkbrenner

Conducting and publishing rigorous empirical research based on original data is essential for advancing and sustaining high-quality counseling practice. The purpose of this article is to provide a one-stop shop for writing a rigorous quantitative Methods section in counseling and related fields. The importance of judiciously planning, implementing, and writing quantitative research methods cannot be overstated, as methodological flaws can completely undermine the integrity of the results. This article includes an overview, considerations, guidelines, best practices, and recommendations for conducting and writing quantitative research designs. The author concludes with an exemplar Methods section to provide a sample of one way to apply the guidelines for writing or evaluating quantitative research methods that are detailed in this manuscript.

Keywords : empirical, quantitative, methods, counseling, writing

The findings of rigorous empirical research based on original data are crucial for promoting and maintaining high-quality counseling practice (American Counseling Association [ACA], 2014; Giordano et al., 2021; Lutz & Hill, 2009; Wester et al., 2013). Peer-reviewed publication outlets play a central role in ensuring the rigor of counseling research and distributing the findings to counseling practitioners. The four major sections of an original empirical study usually include: (a) Introduction/Literature Review, (b) Methods, (c) Results, and (d) Discussion (American Psychological Association [APA], 2020; Heppner et al., 2016). Although every section of a research study must be carefully planned, executed, and reported (Giordano et al., 2021), scholars have engaged in commentary about the importance of a rigorous and clearly written Methods section for decades (Korn & Bram, 1988; Lutz & Hill, 2009). The Methods section is the “conceptual epicenter of a manuscript” (Smagorinsky, 2008, p. 390) and should include clear and specific details about how the study was conducted (Heppner et al., 2016). It is essential that producers and consumers of research are aware of key methodological standards, as the quality of quantitative methods in published research can vary notably, which has serious implications for the merit of research findings (Lutz & Hill, 2009; Wester et al., 2013).

Careful planning prior to launching data collection is especially important for conducting and writing a rigorous quantitative Methods section, as it is rarely appropriate to alter quantitative methods after data collection is complete for both practical and ethical reasons (ACA, 2014; Creswell & Creswell, 2018). A well-written Methods section is also crucial for publishing research in a peer-reviewed journal; any serious methodological flaws tend to automatically trigger a decision of rejection without revisions. Accordingly, the purpose of this article is to provide both producers and consumers of quantitative research with guidelines and recommendations for writing or evaluating the rigor of a Methods section in counseling and related fields. Specifically, this manuscript includes a general overview of major quantitative methodological subsections as well as an exemplar Methods section. The recommended subsections and guidelines for writing a rigorous Methods section in this manuscript (see Appendix) are based on a synthesis of (a) the extant literature (e.g., Creswell & Creswell, 2018; Flinn & Kalkbrenner, 2021; Giordano et al., 2021); (b) the Standards for Educational and Psychological Testing (American Educational Research Association [AERA] et al., 2014); (c) the ACA Code of Ethics (ACA, 2014); and (d) the Journal Article Reporting Standards (JARS) in the APA 7 (2020) manual.

Quantitative Methods: An Overview of the Major Sections

The Methods section is typically the second major section in a research manuscript and can begin with an overview of the theoretical framework and research paradigm that ground the study (Creswell & Creswell, 2018; Leedy & Ormrod, 2019). Research paradigms and theoretical frameworks are more commonly reported in qualitative, conceptual, and dissertation studies than in quantitative studies. However, research paradigms and theoretical frameworks can be very applicable to quantitative research designs (see the exemplar Methods section below). Readers are encouraged to consult Creswell and Creswell (2018) for a clear and concise overview about the utility of a theoretical framework and a research paradigm in quantitative research.

Research Design

The research design should be clearly specified at the beginning of the Methods section. Commonly employed quantitative research designs in counseling include but are not limited to group comparisons (e.g., experimental, quasi-experimental, ex-post-facto), correlational/predictive, meta-analysis, descriptive, and single-subject designs (Creswell & Creswell, 2018; Flinn & Kalkbrenner, 2021; Leedy & Ormrod, 2019). A well-written literature review and strong research question(s) will dictate the most appropriate research design. Readers can refer to Flinn and Kalkbrenner (2021) for free (open access) commentary on and examples of conducting a literature review, formulating research questions, and selecting the most appropriate corresponding research design.

Researcher Bias and Reflexivity

Counseling researchers have an ethical responsibility to minimize their personal biases throughout the research process (ACA, 2014). A researcher’s personal beliefs, values, expectations, and attitudes create a lens or framework for how data will be collected and interpreted. Researcher reflexivity or positionality statements are well-established methodological standards in qualitative research (Hays & Singh, 2012; Heppner et al., 2016; Rovai et al., 2013). Researcher bias is rarely reported in quantitative research; however, researcher bias can be just as inherently present in quantitative as it is in qualitative studies. Being reflexive and transparent about one’s biases strengthens the rigor of the research design (Creswell & Creswell, 2018; Onwuegbuzie & Leech, 2005). Accordingly, quantitative researchers should consider reflecting on their biases in similar ways as qualitative researchers (Onwuegbuzie & Leech, 2005). For example, a researcher’s topical and methodological choices are, at least in part, based on their personal interests and experiences. To this end, quantitative researchers are encouraged to reflect on and consider reporting their beliefs, assumptions, and expectations throughout the research process.

Participants and Procedures

The major aim in the Participants and Procedures subsection of the Methods section is to provide a clear description of the study’s participants and procedures in enough detail for replication (ACA, 2014; APA, 2020; Giordano et al., 2021; Heppner et al., 2016). When working with human subjects, authors should briefly discuss research ethics including but not limited to receiving institutional review board (IRB) approval (Giordano et al., 2021; Korn & Bram, 1988). Additional considerations for the Participants and Procedures section include details about the authors’ sampling procedure, inclusion and/or exclusion criteria for participation, sample size, participant background information, location/site, and protocol for interventions (APA, 2020).

Sampling Procedure and Sample Size

Sampling procedures should be clearly stated in the Methods section. At a minimum, the description of the sampling procedure should include researcher access to prospective participants, recruitment procedures, data collection modality (e.g., online survey), and sample size considerations. Quantitative sampling approaches tend to be clustered into either probability or non-probability techniques (Creswell & Creswell, 2018; Leedy & Ormrod, 2019). The key distinguishing feature of probability sampling is random selection, in which all prospective participants in the population have an equal chance of being randomly selected to participate in the study (Leedy & Ormrod, 2019). Examples of probability sampling techniques include simple random sampling, systematic random sampling, stratified random sampling, and cluster sampling (Leedy & Ormrod, 2019).

Non-probability sampling techniques lack random selection and there is no way of determining if every member of the population had a chance of being selected to participate in the study (Leedy & Ormrod, 2019). Examples of non-probability sampling procedures include volunteer sampling, convenience sampling, purposive sampling, quota sampling, snowball sampling, and matched sampling. In quantitative research, probability sampling procedures are more rigorous in terms of generalizability (i.e., the extent to which research findings based on sample data extend or generalize to the larger population from which the sample was drawn). However, probability sampling is not always possible and non-probability sampling procedures are rigorous in their own right. Readers are encouraged to review Leedy and Ormrod’s (2019) commentary on probability and non-probability sampling procedures. Ultimately, the selection of a sampling technique should be made based on the population parameters, available resources, and the purpose and goals of the study.
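For readers who script their own recruitment lists, the distinction between two of the probability techniques above can be illustrated in a few lines of Python. This is a toy sketch: the roster of 100 students and the sample sizes are hypothetical.

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible
population = [f"student_{i}" for i in range(100)]  # hypothetical roster

# Simple random sampling: every member has an equal chance of selection
simple_sample = random.sample(population, k=10)

# Systematic random sampling: a random start, then every 10th member
start = random.randrange(10)
systematic_sample = population[start::10]

print(len(simple_sample), len(systematic_sample))  # 10 10
```

Stratified random sampling would instead call random.sample() within each stratum (e.g., each class year) in proportion to that stratum’s share of the population.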

A Priori Statistical Power Analysis. It is essential that quantitative researchers determine the minimum necessary sample size for computing statistical analyses before launching data collection (Balkin & Sheperis, 2011; Sink & Mvududu, 2010). An insufficient sample size substantially increases the probability of committing a Type II error, which occurs when the results of statistical testing reveal non–statistically significant findings when in reality (of which the researcher is unaware), significant findings do exist. Computing an a priori (computed before starting data collection) statistical power analysis reduces the chances of a Type II error by determining the smallest sample size that is necessary for finding statistical significance, if statistical significance exists (Balkin & Sheperis, 2011). Readers can consult Balkin and Sheperis (2011) as well as Sink and Mvududu (2010) for an overview of statistical significance, effect size, and statistical power. A number of statistical power analysis programs are available to researchers. For example, G*Power (Faul et al., 2009) is a free software program for computing a priori statistical power analyses.
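G*Power performs these computations exactly; for illustration only, the logic of an a priori power analysis can be sketched with the normal approximation to the two-sample t-test. The function name and the medium-effect example below are hypothetical, and exact t-based programs such as G*Power will return slightly larger sample sizes than this approximation.

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-tailed independent-samples t-test
    (normal approximation; d is Cohen's d)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value, e.g., 1.96 for alpha = .05
    z_beta = z.inv_cdf(power)           # e.g., 0.84 for 80% power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

# Medium effect (d = 0.5), alpha = .05, power = .80
print(n_per_group(0.5))  # 63 per group under this approximation
```

Note how sample size grows sharply as the expected effect shrinks, which is why an honest a priori effect-size estimate matters so much.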

Sampling Frame and Location

Counselors should report their sampling frame (total number of potential participants), response rate, raw sample (total number of participants who engaged with the study at any level, including missing and incomplete data), and the size of the final usable sample. It is also important to report the breakdown of the sample by demographic and other important participant background characteristics, for example, “XX.X% (n = XXX) of participants were first-generation college students, XX.X% (n = XXX) were second-generation . . .” The selection of demographic variables as well as inclusion and exclusion criteria should be justified in the literature review. Readers are encouraged to consult Creswell and Creswell (2018) for commentary on writing a strong literature review.
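The percentage-plus-count reporting format shown above can be generated directly from the raw demographic variable. A minimal sketch, using a hypothetical list of generation-status responses:

```python
from collections import Counter

# Hypothetical generation-status responses from a final usable sample
generation_status = ["first", "second", "first", "first", "continuing",
                     "second", "first", "continuing", "first", "second"]

counts = Counter(generation_status)
n_total = len(generation_status)
for group, n in counts.most_common():
    # e.g., "50.0% (n = 5) were first-generation students"
    print(f"{n / n_total:.1%} (n = {n}) were {group}-generation students")
```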

The timeframe, setting, and location during which data were collected are important methodological considerations (APA, 2020). Specific names of institutions and agencies should be masked to protect their privacy and confidentiality; however, authors can give descriptions of the setting and location (e.g., “Data were collected between April 2021 and February 2022 from clients seeking treatment for addictive disorders at an outpatient, integrated behavioral health care clinic located in the Northeastern United States.”). Authors should also report details about any interventions, curriculum, qualifications and background information for research assistants, experimental design protocol(s), and any other procedural design issues that would be necessary for replication. In instances in which describing a treatment or conditions becomes exorbitant (e.g., step-by-step manualized therapy, programs, or interventions), researchers can include footnotes, appendices, and/or references to refer the reader to more information about the intervention protocol.

Missing Data

Procedures for handling missing values (incomplete survey responses) are important considerations in quantitative data analysis. Perhaps the most straightforward option for handling missing data is to simply delete missing responses. However, depending on the percentage of data that are missing and how the data are missing (e.g., missing completely at random, missing at random, or not missing at random), data imputation techniques can be employed to recover missing values (Cook, 2021; Myers, 2011). Quantitative researchers should provide a clear rationale behind their decisions around the deletion of missing values or when using a data imputation method. Readers are encouraged to review Cook’s (2021) commentary on procedures for handling missing data in quantitative research.
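As a toy illustration of the two simplest options named above (deletion and imputation), consider a hypothetical seven-item response vector in which None marks a missing value. This is a sketch of the mechanics only; the choice between these options should follow the missingness mechanism, per Cook (2021).

```python
# Hypothetical Likert-type responses; None marks a missing value
responses = [4, 5, None, 3, None, 4, 2]

# Option 1 -- deletion: analyze the complete responses only
complete = [r for r in responses if r is not None]

# Option 2 -- mean imputation: replace missing values with the observed mean
# (defensible only under certain missingness mechanisms)
mean = sum(complete) / len(complete)
imputed = [r if r is not None else mean for r in responses]

print(complete)  # [4, 5, 3, 4, 2]
print(imputed)   # [4, 5, 3.6, 3, 3.6, 4, 2]
```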

Measures

Counseling and other social science researchers oftentimes use instruments and screening tools to appraise latent traits, which can be defined as variables that are inferred rather than observed (AERA et al., 2014). The purpose of the Measures (aka Instrumentation) section is to operationalize the construct(s) of measurement (Heppner et al., 2016). Specifically, the Measures subsection of the Methods in a quantitative manuscript tends to include a presentation of (a) the instrument and construct(s) of measurement, (b) reliability and validity evidence of test scores, and (c) cross-cultural fairness and norming. The Measures section might also include a Materials subsection for studies that employed data-gathering techniques or equipment besides or in addition to instruments (Heppner et al., 2016); for instance, if a research study involved the use of a biofeedback device to collect data on changes in participants’ body functions.

Instrument and Construct of Measurement

Begin the Measures section by introducing the questionnaire or screening tool, its construct(s) of measurement, number of test items, example test items, and scale points. If applicable, the Measures section can also include information on scoring procedures and cutoff criteria; for example, total score benchmarks for low, medium, and high levels of the trait. Authors might also include commentary about how test scores will be operationalized to constitute the variables in the upcoming Data Analysis section.

Reliability and Validity Evidence of Test Scores

Reliability evidence involves the degree to which test scores are stable or consistent and validity evidence refers to the extent to which scores on a test succeed in measuring what the test was designed to measure (AERA et al., 2014; Bardhoshi & Erford, 2017). Researchers should report both reliability and validity evidence of scores for each instrument they use (Wester et al., 2013). A number of forms of reliability evidence exist (e.g., internal consistency, test-retest, interrater, and alternate/parallel/equivalent forms) and the AERA standards (2014) outline five forms of validity evidence. For the purposes of this article, I will focus on internal consistency reliability, as it is the most popular and most commonly misused reliability estimate in social sciences research (Kalkbrenner, 2021a; McNeish, 2018), as well as construct validity. The psychometric properties of a test (including reliability and validity evidence) are contingent upon the scores from which they were derived. As such, no test is inherently valid or reliable; test scores are only reliable and valid for a certain purpose, at a particular time, for use with a specific sample. Accordingly, authors should discuss reliability and validity evidence in terms of scores, for example, “Stamm (2010) found reliability and validity evidence of scores on the Professional Quality of Life (ProQOL 5) with a sample of . . . ”

Internal Consistency Reliability Evidence. Internal consistency estimates are derived from associations between the test items based on one administration (Kalkbrenner, 2021a). Cronbach’s coefficient alpha (α) is indisputably the most popular internal consistency reliability estimate in counseling and throughout social sciences research in general (Kalkbrenner, 2021a; McNeish, 2018). The appropriate use of coefficient alpha is reliant on the data meeting the following statistical assumptions: (a) essential tau equivalence, (b) continuous level scale of measurement, (c) normally distributed data, (d) uncorrelated error, (e) unidimensional scale, and (f) unit-weighted scaling (Kalkbrenner, 2021a). For decades, coefficient alpha has been passed down in the instructional practice of counselor training programs. Coefficient alpha has appeared as the dominant reliability index in national counseling and psychology journals without most authors computing and reporting the necessary statistical assumption checking (Kalkbrenner, 2021a; McNeish, 2018). The psychometrically daunting practice of using alpha without assumption checking poses a threat to the veracity of counseling research, as the accuracy of coefficient alpha is threatened if the data violate one or more of the required assumptions.
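For concreteness, coefficient alpha itself is simple to compute from raw item scores. The sketch below (with hypothetical 3-item data) implements only the standard formula, alpha = k/(k−1) × (1 − Σ item variances / variance of total scores); it performs none of the assumption checks just described, which is precisely the misuse the passage above warns against.

```python
def cronbach_alpha(item_scores):
    """Coefficient alpha. item_scores: one list per item, same respondents
    in the same order across items."""
    k = len(item_scores)                 # number of items
    n = len(item_scores[0])              # number of respondents

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(item) for item in item_scores) / var(totals))

# Hypothetical scores: 3 items, 5 respondents
items = [[1, 2, 3, 4, 5],
         [2, 2, 3, 5, 4],
         [1, 3, 3, 4, 5]]
print(f"alpha = {cronbach_alpha(items):.3f}")
```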

Internal Consistency Reliability Indices and Their Appropriate Use. Composite reliability (CR) internal consistency estimates are derived in similar ways as coefficient alpha; however, the proper computation of CRs is not reliant on the data meeting many of alpha’s statistical assumptions (Kalkbrenner, 2021a; McNeish, 2018). For example, McDonald’s coefficient omega (ω or ωₜ) is a CR estimate that is not dependent on the data meeting most of alpha’s assumptions (Kalkbrenner, 2021a). In addition, omega hierarchical (ωₕ) and coefficient H are CR estimates that can be more advantageous than alpha. Despite the utility of CRs, their underuse in research practice is historically due, in part, to the complex nature of their computation. However, recent versions of SPSS include a breakthrough point-and-click feature for computing coefficient omega as easily as coefficient alpha. Readers can refer to the SPSS user guide for steps to compute omega.

Guidelines for Reporting Internal Consistency Reliability. In the Measures subsection of the Methods section, researchers should report existing reliability evidence of scores for their instruments. This can be done briefly by reporting the results of multiple studies in the same sentence, as in: “A number of past investigators found internal consistency reliability evidence for scores on the [name of test] with a number of different samples, including college students (α = .XX, ω = .XX; Authors et al., 20XX), clients living with chronic back pain (α = .XX, ω = .XX; Authors et al., 20XX), and adults in the United States (α = .XX, ω = .XX; Authors et al., 20XX) . . .”

Researchers should also compute and report reliability estimates of test scores with their data set in the Measures section. If a researcher is using coefficient alpha, they have a duty to complete and report assumption checking to demonstrate that the properties of their sample data were suitable for alpha (Kalkbrenner, 2021a; McNeish, 2018). Another option is to compute a CR (e.g., ω or H) instead of alpha. However, Kalkbrenner (2021a) recommended that researchers report both coefficient alpha (because of its popularity) and coefficient omega (because of the robustness of the estimate). The proper interpretation of reliability estimates of test scores is done on a case-by-case basis, as the meaning of reliability coefficients is contingent upon the construct of measurement and the stakes or consequences of the results for test takers (Kalkbrenner, 2021a). The following tentative interpretative guidelines for adults’ scores on attitudinal measures were offered by Kalkbrenner (2021b) for coefficient alpha: α < .70 = poor, α = .70 to .84 = acceptable, α ≥ .85 = strong; and for coefficient omega: ω < .65 = poor, ω = .65 to .80 = acceptable, ω > .80 = strong. It is important to note that these thresholds are for adults’ scores on attitudinal measures; acceptable internal consistency reliability estimates of scores should be much stronger for high-stakes testing.

     Construct Validity Evidence of Test Scores. Construct validity involves the test’s ability to accurately capture a theoretical or latent construct (AERA et al., 2014). Construct validity considerations are particularly important for counseling researchers who tend to investigate latent traits as outcome variables. At a minimum, counseling researchers should report construct validity evidence for both internal structure and relations with theoretically relevant constructs. Internal structure (aka factorial validity) is a source of construct validity that represents the degree to which “the relationships among test items and test components conform to the construct on which the proposed test score interpretations are based” (AERA et al., 2014, p. 16). Readers can refer to Kalkbrenner (2021b) for a free (open access publishing) overview of exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) that is written in layperson’s terms. Relations with theoretically relevant constructs (e.g., convergent and divergent validity) are another source of construct validity evidence that involves comparing scores on the test in question with scores on other reputable tests (AERA et al., 2014; Strauss & Smith, 2009).

Guidelines for Reporting Validity Evidence. Counseling researchers should report existing evidence of at least internal structure and relations with theoretically relevant constructs (e.g., convergent or divergent validity) for each instrument they use. EFA results alone are inadequate for demonstrating internal structure validity evidence of scores, as EFA is a much less rigorous test of internal structure than CFA (Kalkbrenner, 2021b). In addition, EFA results can reveal multiple retainable factor solutions, which need to be tested/confirmed via CFA before even initial internal structure validity evidence of scores can be established. Thus, both EFA and CFA are necessary for reporting/demonstrating initial evidence of internal structure of test scores. In an extension of internal structure, counselors should also report existing convergent and/or divergent validity of scores. High correlations (r > .50) demonstrate evidence of convergent validity and moderate-to-low correlations (r < .30, preferably r < .10) support divergent validity evidence of scores (Sink & Stroh, 2006; Swank & Mullen, 2017).
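The convergent/divergent benchmarks above amount to a Pearson correlation between scores on the two measures. A self-contained sketch; both score vectors are hypothetical.

```python
def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical total scores on two theoretically related anxiety measures
scale_a = [10, 14, 9, 18, 12, 16]
scale_b = [11, 15, 10, 17, 13, 18]

r = pearson_r(scale_a, scale_b)
print(f"r = {r:.2f}")  # an r > .50 here would support convergent validity
```

For divergent validity, the same computation against a theoretically unrelated measure should yield a low correlation (r < .30).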

In an ideal situation, a researcher will have the resources to test and report the internal structure (e.g., compute CFA firsthand) of scores on the instrumentation with their sample. However, CFA requires large sample sizes (Kalkbrenner, 2021b), which oftentimes is not feasible. It might be more practical for researchers to test and report relations with theoretically relevant constructs, though adding one or more questionnaire(s) to data collection efforts can come with the cost of increasing respondent fatigue. In these instances, researchers might consider reporting other forms of validity evidence (e.g., evidence based on test content, criterion validity, or response processes; AERA et al., 2014). In instances when computing firsthand validity evidence of scores is not logistically viable, researchers should be transparent about this limitation and pay especially careful attention to presenting evidence for cross-cultural fairness and norming.

Cross-Cultural Fairness and Norming

In a psychometric context, fairness (sometimes referred to as cross-cultural fairness) is a fundamental validity issue and a complex construct to define (AERA et al., 2014; Kane, 2010; Neukrug & Fawcett, 2015). I offer the following composite definition of cross-cultural fairness for the purposes of a quantitative Measures section: the degree to which test construction, administration procedures, interpretations, and uses of results are equitable and represent an accurate depiction of a diverse group of test takers’ abilities, achievement, attitudes, perceptions, values, and/or experiences (AERA et al., 2014; Educational Testing Service [ETS], 2016; Kane, 2010; Kane & Bridgeman, 2017). Counseling researchers should consider the following central fairness issues when selecting or developing instrumentation: measurement bias, accessibility, universal design, equivalent meaning (invariance), test content, opportunity to learn, test adaptations, and comparability (AERA et al., 2014; Kane & Bridgeman, 2017). Providing a comprehensive overview of fairness is beyond the scope of this article; however, readers are encouraged to read Chapter 3 in the AERA standards (2014) on Fairness in Testing.

In the Measures section, counseling researchers should include commentary on how and in what ways cross-cultural fairness guided their selection, administration, and interpretation of procedures and test results (AERA et al., 2014; Kalkbrenner, 2021b). Cross-cultural fairness and construct validity are related constructs (AERA et al., 2014). Accordingly, citing construct validity of test scores (see the previous section) with normative samples similar to the researcher’s target population is one way to provide evidence of cross-cultural fairness. However, construct validity evidence alone might not be a sufficient indication of cross-cultural fairness, as the latent meaning of test scores is a function of test takers’ cultural context (Kalkbrenner, 2021b). To this end, when selecting instrumentation, researchers should review original psychometric studies and consider the normative sample(s) from which test scores were derived.

Commentary on the Danger of Using Self-Developed and Untested Scales      Counseling researchers have an ethical duty to “carefully consider the validity, reliability, psychometric limitations, and appropriateness of instruments when selecting assessments” (ACA, 2014, p. 11). Quantitative researchers might encounter instances in which a scale is not available to measure their desired construct of measurement (latent/inferred variable). In these cases, the first step in the line of research is oftentimes to conduct an instrument development and score validation study (AERA et al., 2014; Kalkbrenner, 2021b). Detailing the protocol for conducting psychometric research is outside the scope of this article; however, readers can refer to the MEASURE Approach to Instrument Development (Kalkbrenner, 2021c) for a free (open-access) overview of the steps in an instrument development and score validation study. Adapting an existing scale can be an option in lieu of instrument development; however, according to the AERA standards (2014), “an index that is constructed by manipulating and combining test scores should be subjected to the same validity, reliability, and fairness investigations that are expected for the test scores that underlie the index” (p. 210). Although it is not necessary that all quantitative researchers become psychometricians and conduct full-fledged psychometric studies to validate scores on instrumentation, researchers do have a responsibility to report evidence of the reliability, validity, and cross-cultural fairness of test scores for each instrument they used. Without at least initial construct validity testing of scores (calibration), researchers cannot determine what, if anything at all, an untested instrument actually measures.

Data Analysis      Counseling researchers should report and explain the selection of their data analytic procedures (e.g., statistical analyses) in a Data Analysis (or Statistical Analysis) subsection of the Methods or Results section (Giordano et al., 2021; Leedy & Ormrod, 2019). The placement of the Data Analysis section in either the Methods or Results section can vary between publication outlets; however, this section tends to include commentary on variables, statistical models and analyses, and statistical assumption checking procedures.

Operationalizing Variables and Corresponding Statistical Analyses      Clearly outlining each variable is an important first step in selecting the most appropriate statistical analysis for answering each research question (Creswell & Creswell, 2018). Researchers should specify the independent variable(s) and corresponding levels as well as the dependent variable(s); for example, “The first independent variable, time, was composed of the following three levels: pre, middle, and post. The dependent variables were participants’ scores on the burnout and compassion satisfaction subscales of the ProQOL 5.” After articulating the variables, counseling researchers are tasked with identifying each variable’s scale of measurement (Creswell & Creswell, 2018; Field, 2018; Flinn & Kalkbrenner, 2021). Researchers can then select the most appropriate statistical test(s) for answering their research question(s) based on each variable’s scale of measurement by referring to Table 8.3 on page 159 in Creswell and Creswell (2018), Figure 1 in Flinn and Kalkbrenner (2021), or the chart on page 1072 in Field (2018).
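As a rough illustration of this decision logic, the pairing of variable scales with common test families might be sketched as follows. The `suggest_test` helper is hypothetical (not drawn from Creswell and Creswell, Flinn and Kalkbrenner, or Field) and covers only a few simple one-IV designs:

```python
# Hypothetical sketch: map one IV and one DV to a commonly used test family.
# This is illustrative only; the cited selection charts are far more complete.
def suggest_test(iv_scale: str, iv_levels: int, dv_scale: str) -> str:
    """Suggest a statistical test from variable scales of measurement."""
    if iv_scale == "categorical" and dv_scale == "continuous":
        # Two groups -> t-test; three or more -> one-way ANOVA
        if iv_levels == 2:
            return "independent-samples t-test"
        return "one-way ANOVA"
    if iv_scale == "continuous" and dv_scale == "continuous":
        return "Pearson correlation / simple regression"
    if iv_scale == "categorical" and dv_scale == "categorical":
        return "chi-square test of independence"
    return "consult a test-selection chart (e.g., Field, 2018)"

# Three-level categorical IV (program) with a continuous DV (GAD-7 score)
print(suggest_test("categorical", 3, "continuous"))  # one-way ANOVA
```

A full selection chart accounts for many more designs (covariates, repeated measures, multiple DVs), so the cited tables remain the authoritative guides.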

Assumption Checking      Statistical analyses used in quantitative research are derived based on a set of underlying assumptions (Field, 2018; Giordano et al., 2021). Accordingly, it is essential that quantitative researchers outline their protocol for testing their sample data for the appropriate statistical assumptions. Assumptions of common statistical tests in counseling research include normality, absence of outliers (multivariate and/or univariate), homogeneity of covariance, homogeneity of regression slopes, homoscedasticity, independence, linearity, and absence of multicollinearity (Flinn & Kalkbrenner, 2021; Giordano et al., 2021). Readers can refer to Figure 2 in Flinn and Kalkbrenner (2021) for an overview of statistical assumptions for the major statistical analyses in counseling research.
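Several of these assumption checks can be computed with standard statistical software. The sketch below uses Python’s SciPy on simulated data (all values are assumed, for illustration only) to screen three groups for univariate outliers, skewness and kurtosis, and homogeneity of variance:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Simulated scores for three groups of 40 (assumed data for illustration)
groups = [rng.normal(loc=10, scale=2, size=40) for _ in range(3)]

for i, g in enumerate(groups, start=1):
    z = np.abs(stats.zscore(g))  # standardized scores for outlier screening
    print(f"Group {i}: max |z| = {z.max():.2f}, "
          f"skew = {stats.skew(g):.2f}, kurtosis = {stats.kurtosis(g):.2f}")

# Homogeneity of variance across the three groups (Levene's test)
lev_stat, lev_p = stats.levene(*groups)
print(f"Levene's test: F = {lev_stat:.2f}, p = {lev_p:.3f}")
```

A nonsignificant Levene’s test (p above the alpha level) is consistent with the homogeneity of variance assumption; |z| values above 3.29 and skewness or kurtosis values well beyond ±1.0 would warrant closer inspection.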

Exemplar Quantitative Methods Section

The following section includes an exemplar quantitative Methods section based on a hypothetical example and a practice data set. Producers and consumers of quantitative research can refer to it as a model for writing their own Methods section or for evaluating the rigor of an existing one. As stated previously, a well-written literature review and research question(s) are essential for grounding the study and Methods section (Flinn & Kalkbrenner, 2021). The final piece of a literature review section is typically the research question(s). Accordingly, the exemplar Methods section below was guided by the following research question: To what extent are there differences in anxiety severity between college students who participate in deep breathing exercises with progressive muscle relaxation, a group exercise program, or both group exercise and deep breathing with progressive muscle relaxation?

——-Exemplar——-

A quantitative group comparison research design was employed based on a post-positivist philosophy of science (Creswell & Creswell, 2018). Specifically, I implemented a quasi-experimental, control group pretest/posttest design to answer the research question (Leedy & Ormrod, 2019). Consistent with a post-positivist philosophy of science, I pursued a probabilistic, objective answer situated within the context of imperfect and fallible evidence. The rationale for the present study was grounded in David Servan-Schreiber’s (2009) theory of lifestyle practices for integrated mental and physical health. According to Servan-Schreiber, simultaneously focusing on improving one’s mental and physical health is more effective than focusing on either physical health or mental wellness in isolation. Consistent with Servan-Schreiber’s theory, the aim of the present study was to compare the utility of three different approaches for anxiety reduction: a behavioral approach alone, a physiological approach alone, and a combined behavioral and physiological approach.

I am in my late 30s and identify as a White man. I have a PhD in counselor education as well as an MS in clinical mental health counseling. I have a deep belief in and an active line of research on the utility of total wellness (combined mental and physical health). My research and clinical experience have informed my passion and interest in studying the utility of integrated physical and psychological health services. More specifically, my personal beliefs, values, and interest in total wellness influenced my decision to conduct the present study. I carefully followed the procedures outlined below to reduce the chances that my personal values biased the research design.

Participants and Procedures      Data collection began following approval from the institutional review board (IRB). Data were collected during the fall 2022 semester from undergraduate students who were at least 18 years old and enrolled in at least one class at a land grant, research-intensive university located in the Southwestern United States. An a priori statistical power analysis was computed using G*Power (Faul et al., 2009). Results revealed that a sample size of at least 42 would provide an 80% power estimate, α = .05, with a moderate effect size, f = 0.25.
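The power analysis above was computed in G*Power. As a hedged illustration of what such an analysis does, power can also be approximated by Monte Carlo simulation. The sketch below (hypothetical `anova_power` helper, simulated normal data) estimates power for a simple one-way ANOVA at f = 0.25; note that required N depends heavily on the design, and G*Power’s repeated-measures computation follows a different model than this simplified sketch, so the resulting sample sizes will not match the 42 reported above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def anova_power(n_per_group, f=0.25, alpha=0.05, n_sims=2000):
    """Monte Carlo power estimate for a one-way ANOVA with three groups.

    Cohen's f is the SD of the standardized group means; group means of
    [-c, 0, c] with c = f * sqrt(3/2) reproduce that f when sigma = 1.
    """
    c = f * np.sqrt(3 / 2)
    means = np.array([-c, 0.0, c])
    hits = 0
    for _ in range(n_sims):
        samples = [rng.normal(m, 1.0, n_per_group) for m in means]
        _, p = stats.f_oneway(*samples)
        hits += p < alpha  # count significant replications
    return hits / n_sims

# Estimated power grows with per-group n (exact values vary by simulation seed)
for n in (14, 30, 53):
    print(f"n per group = {n}: estimated power ~ {anova_power(n):.2f}")
```

Simulation-based power analysis is a useful cross-check on analytic calculators because it forces the researcher to state the assumed effect size and design explicitly.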

I obtained an email list from the registrar’s office of all students enrolled in a section of a Career Excellence course, which was selected to recruit students in a variety of academic majors because all undergraduate students in the College of Education are required to take this course. The focus of this study (mental and physical wellness) was also consistent with the purpose of the course (success in college). A non-probability, convenience sampling procedure was employed by sending a recruitment message to students’ email addresses via the Qualtrics online survey platform. The response rate was approximately 15%, with a total of 222 prospective participants indicating their interest in the study by clicking on the electronic recruitment link, which automatically sent them an invitation to attend an information session about the study. One hundred forty-four students attended the information session, 129 of whom provided their voluntary informed consent to enroll in the study. Participants were given a confidential identification number to track their pretest/posttest responses, and then they completed the pretest (see the Measures section below). Respondents were randomly assigned, in equal numbers, to one of three conditions: (a) deep breathing with progressive muscle relaxation, (b) group exercise, or (c) both group exercise and deep breathing with progressive muscle relaxation.

A missing values analysis showed that less than 5% of values were missing across all cases. Expectation maximization was used to impute missing values, as Little’s Missing Completely at Random (MCAR) test revealed that the data could be treated as MCAR ( p = .367). Data from five participants who did not return to complete the posttest at the end of the semester were removed, yielding a final useable sample of N = 124. Participants ranged in age from 18 to 33 ( M = 21.64, SD = 3.70). In terms of gender identity, 64.5% ( n = 80) self-identified as female, 32.3% ( n = 40) as male, 0.8% ( n = 1) as transgender, and 2.4% ( n = 3) did not specify their gender identity. For ethnic identity, 50.0% ( n = 62) identified as White, 26.7% ( n = 33) as Latinx, 12.1% ( n = 15) as Asian, 9.6% ( n = 12) as Black, 0.8% ( n = 1) as Alaskan Native, and 0.8% ( n = 1) did not specify their ethnic identity. In terms of generational status, 36.3% ( n = 45) of participants were first-generation college students and 63.7% ( n = 79) were second-generation or beyond.

Group Exercise and Deep Breathing Programs      I was awarded a small grant to offer on-campus deep breathing with progressive muscle relaxation and group exercise programs. The structure of the group exercise program was based on Patterson et al. (2021), which consisted of more than 50 available exercise classes each week (e.g., cycling, yoga, swimming, dance). There was no limit to the number of classes that participants could attend; however, attending at least one class each week was required for participation in the study. Readers can refer to Patterson et al. for more information about the group exercise programming.

Neeru et al.’s (2015) deep breathing and progressive muscle relaxation programming was used in the present study. Participants completed daily deep breathing and Jacobson Progressive Muscle Relaxation (JPMR). JPMR was selected because of its documented success with treating anxiety disorders (Neeru et al., 2015). Specifically, the program consisted of four deep breathing steps completed five times and JPMR for approximately 25 minutes daily. Participants attended a weekly deep breathing and JPMR session facilitated by a licensed professional counselor. Participants also practiced deep breathing and JPMR on their own daily and kept a log to document their practice sessions. Readers can refer to Neeru et al. for more information about JPMR and the deep breathing exercises.

Measures      Prospective participants read an informed consent statement and indicated their voluntary informed consent by clicking on a checkbox. Next, participants confirmed that they met the following inclusion criteria: (a) at least 18 years old and (b) currently enrolled in at least one undergraduate college class. The instrumentation began with demographic items regarding participants’ gender identity, ethnic identity, age, and confidential identification number to track their pretest and posttest scores. Lastly, participants completed a convergent validity measure (Mental Health Inventory – 5) and the Generalized Anxiety Disorder (GAD)-7 to measure the outcome variable (anxiety severity).

Reliability and Validity Evidence of Test Scores      Tests of internal consistency were computed to test the reliability of scores on the screening tool for appraising anxiety severity with undergraduate students in the present sample. For internal consistency reliability of scores, coefficient alpha (α) and coefficient omega (ω) were computed with the following minimum thresholds for adults’ scores on attitudinal measures: α > .70 and ω > .65, based on the recommendations of Kalkbrenner (2021b).
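Coefficient alpha can be computed directly from an item-score matrix. The sketch below (hypothetical `cronbach_alpha` helper, simulated item responses driven by one latent trait; all data are assumed for illustration) shows the computation; coefficient omega additionally requires loadings from a fitted factor model (McNeish, 2018) and is therefore not shown:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for an (n_respondents x n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated 7-item scale with 0-3 responses (GAD-7-like format), where a
# single latent trait drives all items; assumed data for illustration only.
rng = np.random.default_rng(1)
trait = rng.normal(size=(300, 1))
items = np.clip(np.round(1.5 + trait + rng.normal(scale=0.8, size=(300, 7))), 0, 3)
print(f"alpha = {cronbach_alpha(items):.2f}")
# Coefficient omega would additionally require the factor loadings and
# error variances from a one-factor model fitted to these items.
```

Because the simulated items share a strong common factor, the resulting alpha lands well above the .70 threshold cited above; real scale data will vary.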

The Mental Health Inventory–5. Participants completed the Mental Health Inventory (MHI)-5 to test the convergent validity of the present sample’s scores on the GAD-7, which was used to measure the outcome variable in this study, anxiety severity. The MHI-5 is a 5-item measure for appraising overall mental health (Berwick et al., 1991). Higher MHI-5 scores reflect better mental health. Participants responded to test items (example: “How much of the time, during the past month, have you been a very nervous person?”) on the following Likert-type scale: 0 = none of the time , 1 = a little of the time , 2 = some of the time , 3 = a good bit of the time , 4 = most of the time , or 5 = all of the time . The MHI-5 has particular utility as a convergent validity measure because of its brief nature (5 items) coupled with the wealth of support for its psychometric properties (e.g., Berwick et al., 1991; Rivera-Riquelme et al., 2019; Thorsen et al., 2013). As just a few examples, Rivera-Riquelme et al. (2019) found acceptable internal consistency reliability evidence (α = .71, ω = .78) and internal structure validity evidence of MHI-5 scores. In addition, the findings of Thorsen et al. (2013) demonstrated convergent validity evidence of MHI-5 scores. Findings in the extant literature (e.g., Foster et al., 2016; Vijayan & Joseph, 2015) established an inverse relationship between anxiety and mental health. Thus, a strong negative correlation ( r > −.50; Sink & Stroh, 2006) between the MHI-5 and GAD-7 would support convergent validity evidence of scores.
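The convergent validity check described above reduces to a bivariate correlation between the two total scores. A minimal sketch on simulated totals (all values assumed for illustration; a shared anxiety trait drives GAD-7 scores up and MHI-5 scores down):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Simulated total scores (assumed data for illustration only)
anxiety = rng.normal(size=200)
gad7 = 10 + 4 * anxiety + rng.normal(scale=2, size=200)   # higher = more anxious
mhi5 = 15 - 3 * anxiety + rng.normal(scale=2, size=200)   # higher = better mental health

# Bivariate (Pearson) correlation between the two total scores
r, p = stats.pearsonr(gad7, mhi5)
print(f"r = {r:.2f}, p = {p:.3f}")
# A strong negative r (beyond -.50; Sink & Stroh, 2006) would support
# convergent validity evidence for the anxiety scores.
```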

     The Generalized Anxiety Disorder–7. The GAD-7 is a 7-item screening tool for appraising anxiety severity (Spitzer et al., 2006). Participants respond to test items based on the following prompt: “Over the last 2 weeks, how often have you been bothered by the following problems?” and anchor definitions: 0 = not at all , 1 = several days , 2 = more than half the days , or 3 = nearly every day (Spitzer et al., 2006, p. 1739). Sample test items include “being so restless that it’s hard to sit still” and “feeling afraid as if something awful might happen.” The GAD-7 items can be summed into an interval-level composite score, with higher scores indicating greater anxiety severity. GAD-7 scores can range from 0 to 21 and are classified as minimal (0–4), mild (5–9), moderate (10–14), or severe (15–21) anxiety (Spitzer et al., 2006).

In the initial score validation study, Spitzer et al. (2006) found evidence for internal consistency (α = .92) and test-retest reliability (intraclass correlation = .83) of GAD-7 scores among adults in the United States who were receiving services in primary care clinics. In more recent years, a number of additional investigators found internal consistency reliability evidence for GAD-7 scores, including samples of undergraduate college students in the southern United States (α = .91; Sriken et al., 2022), Black and Latinx adults in the United States (α = .93, ω = .93; Kalkbrenner, 2022), and English-speaking college students living in Ethiopia (ω = .77; Manzar et al., 2021). Similarly, the data set in the present study displayed acceptable internal consistency reliability evidence for GAD-7 scores (α = .82, ω = .81).

Spitzer et al. (2006) used factor analysis to establish internal structure validity, correlations with established screening tools for convergent validity, and criterion validity evidence by demonstrating the capacity of GAD-7 scores for detecting likely cases of generalized anxiety disorder. A number of subsequent investigators found internal structure validity evidence of GAD-7 scores via CFA and multiple-group CFA (Kalkbrenner, 2022; Sriken et al., 2022). In addition, the findings of Sriken et al. (2022) supported both the convergent and divergent validity of GAD-7 scores with other established tests. The data set in the present study ( N = 124) was not large enough for internal structure validity testing. However, a strong negative correlation ( r = −.78) between the GAD-7 and MHI-5 revealed convergent validity evidence of GAD-7 scores with the present sample of undergraduate students.

In terms of norming and cross-cultural fairness, there were qualitative differences between the normative GAD-7 sample in the original score validation study (adults in the United States receiving services in primary care clinics) and the non-clinical sample of young adult college students in the present study. However, the demographic profile of the present sample is consistent with Sriken et al. (2022), who validated GAD-7 scores with a large sample ( N = 414) of undergraduate college students. For example, the gender identity composition of the current sample closely resembled that of Sriken et al.’s sample, which included 66.7% women, 33.1% men, and 0.2% transgender individuals. In terms of ethnic identity, the demographic profile of the present sample was consistent with Sriken et al. for White and Black participants, although the present sample included a somewhat smaller proportion of Asian students (12.1% vs. 19.6% in Sriken et al.) and a greater proportion of Latinx students (26.7% vs. 5.3%).

Data Analysis and Assumption Checking      The present study included two categorical-level independent variables and one continuous-level dependent variable. The first independent variable, program, consisted of three levels: (a) deep breathing with progressive muscle relaxation, (b) group exercise, or (c) both exercise and deep breathing with progressive muscle relaxation. The second independent variable, time, consisted of two levels: the beginning of the semester and the end of the semester. The dependent variable was participants’ interval-level score on the GAD-7. Accordingly, a 3 (program) × 2 (time) mixed-design analysis of variance (ANOVA) was the most appropriate statistical test for answering the research question (Field, 2018).

The data were examined for the following statistical assumptions for a mixed-design ANOVA: absence of outliers, normality, homogeneity of variance, and sphericity of the covariance matrix, based on the recommendations of Field (2018). Standardized z -scores revealed an absence of univariate outliers (no | z | > 3.29). Skewness and kurtosis values were highly consistent with a normal distribution, with the majority of values within ± 1.0. The results of Levene’s test demonstrated that the data met the assumption of homogeneity of variance, F (2, 121) = 0.73, p = .486. Testing the data for sphericity was not applicable in this case, as the within-subjects independent variable (time) comprised only two levels.

——- End Exemplar ——-

The current article is a primer on guidelines, best practices, and recommendations for writing or evaluating the rigor of the Methods section of quantitative studies. Although the major elements of the Methods section summarized in this manuscript tend to be similar across the national peer-reviewed counseling journals, differences can exist between journals based on the content of the article and the editorial board members’ preferences. Accordingly, it can be advantageous for prospective authors to review recently published manuscripts in their target journal(s) to look for any similarities in the structure of the Methods (and other sections). For instance, in one journal, participants and procedures might be reported in a single subsection, whereas in other journals they might be reported separately. In addition, most journals post a list of guidelines for prospective authors on their websites, which can include instructions for writing the Methods section. The Methods section might be the most important section in a quantitative study, as in all likelihood methodological flaws cannot be resolved once data collection is complete, and serious methodological flaws will compromise the integrity of the entire study, rendering it unpublishable. It is also essential that consumers of quantitative research can proficiently evaluate the quality of a Methods section, as poor methods can make the results meaningless. Accordingly, the significance of carefully planning, executing, and writing a quantitative research Methods section cannot be overstated.

Conflict of Interest and Funding Disclosure The authors reported no conflict of interest or funding contributions for the development of this manuscript.

American Counseling Association. (2014). ACA code of ethics .

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). The standards for educational and psychological testing . https://www.aera.net/Publications/Books/Standards-for-Educational-Psychological-Testing-2014-Edition

American Psychological Association. (2020). Publication manual of the American Psychological Association: The official guide to APA style (7th ed.).

Balkin, R. S., & Sheperis, C. J. (2011). Evaluating and reporting statistical power in counseling research. Journal of Counseling & Development , 89 (3), 268–272. https://doi.org/10.1002/j.1556-6678.2011.tb00088.x

Bardhoshi, G., & Erford, B. T. (2017). Processes and procedures for estimating score reliability and precision. Measurement and Evaluation in Counseling and Development , 50 (4), 256–263. https://doi.org/10.1080/07481756.2017.1388680

Berwick, D. M., Murphy, J. M., Goldman, P. A., Ware, J. E., Jr., Barsky, A. J., & Weinstein, M. C. (1991). Performance of a five-item mental health screening test. Medical Care , 29 (2), 169–176. https://doi.org/10.1097/00005650-199102000-00008

Cook, R. M. (2021). Addressing missing data in quantitative counseling research. Counseling Outcome Research and Evaluation , 12 (1), 43–53. https://doi.org/10.1080/21501378.2019.171103

Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative, quantitative, and mixed methods approaches (5th ed.). SAGE.

Educational Testing Service. (2016). ETS international principles for fairness review of assessments: A manual for developing locally appropriate fairness review guidelines for various countries . https://www.ets.org/content/dam/ets-org/pdfs/about/fairness-review-international.pdf

Faul, F., Erdfelder, E., Buchner, A., & Lang, A.-G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods , 41 (4), 1149–1160. https://doi.org/10.3758/BRM.41.4.1149

Field, A. (2018). Discovering statistics using IBM SPSS Statistics (5th ed.). SAGE.

Flinn, R. E., & Kalkbrenner, M. T. (2021). Matching variables with the appropriate statistical tests in counseling research. Teaching and Supervision in Counseling , 3 (3), Article 4. https://doi.org/10.7290/tsc030304

Foster, T., Steen, L., O’Ryan, L., & Nelson, J. (2016). Examining how the Adlerian life tasks predict anxiety in first-year counseling students. The Journal of Individual Psychology , 72 (2), 104–120. https://doi.org/10.1353/jip.2016.0009

Giordano, A. L., Schmit, M. K., & Schmit, E. L. (2021). Best practice guidelines for publishing rigorous research in counseling. Journal of Counseling & Development , 99 (2), 123–133. https://doi.org/10.1002/jcad.12360

Hays, D. G., & Singh, A. A. (2012). Qualitative inquiry in clinical and educational settings . Guilford.

Heppner, P. P., Wampold, B. E., Owen, J., Wang, K. T., & Thompson, M. N. (2016). Research design in counseling (4th ed.). Cengage.

Kalkbrenner, M. T. (2021a). Alpha, omega, and H internal consistency reliability estimates: Reviewing these options and when to use them. Counseling Outcome Research and Evaluation . https://doi.org/10.1080/21501378.2021.1940118

Kalkbrenner, M. T. (2021b). Enhancing assessment literacy in professional counseling: A practical overview of factor analysis. The Professional Counselor , 11 (3), 267–284. https://doi.org/10.15241/mtk.11.3.267

Kalkbrenner, M. T. (2021c). A practical guide to instrument development and score validation in the social sciences: The MEASURE Approach. Practical Assessment, Research & Evaluation , 26 (1), Article 1. https://doi.org/10.7275/svg4-e671

Kalkbrenner, M. T. (2022). Validation of scores on the Lifestyle Practices and Health Consciousness Inventory with Black and Latinx adults in the United States: A three-dimensional model. Measurement and Evaluation in Counseling and Development , 55 (2), 84–97. https://doi.org/10.1080/07481756.2021.1955214

Kane, M. (2010). Validity and fairness. Language Testing , 27 (2), 177–182. https://doi.org/10.1177/0265532209349467

Kane, M., & Bridgeman, B. (2017). Research on validity theory and practice at ETS . In R. E. Bennett & M. von Davier (Eds.), Advancing human assessment: The methodological, psychological and policy contributions of ETS (pp. 489–552). Springer. https://doi.org/10.1007/978-3-319-58689-2_16

Korn, J. H., & Bram, D. R. (1988). What is missing in the Method section of APA journal articles? American Psychologist , 43 (12), 1091–1092. https://doi.org/10.1037/0003-066X.43.12.1091

Leedy, P. D., & Ormrod, J. E. (2019). Practical research: Planning and design (12th ed.). Pearson.

Lutz, W., & Hill, C. E. (2009). Quantitative and qualitative methods for psychotherapy research: Introduction to special section. Psychotherapy Research , 19 (4–5), 369–373. https://doi.org/10.1080/10503300902948053

Manzar, M. D., Alghadir, A. H., Anwer, S., Alqahtani, M., Salahuddin, M., Addo, H. A., Jifar, W. W., & Alasmee, N. A. (2021). Psychometric properties of the General Anxiety Disorders-7 Scale using categorical data methods: A study in a sample of university attending Ethiopian young adults. Neuropsychiatric Disease and Treatment , 17 (1), 893–903. https://doi.org/10.2147/NDT.S295912

McNeish, D. (2018). Thanks coefficient alpha, we’ll take it from here. Psychological Methods , 23 (3), 412–433. https://doi.org/10.1037/met0000144

Myers, T. A. (2011). Goodbye, listwise deletion: Presenting hot deck imputation as an easy and effective tool for handling missing data. Communication Methods and Measures , 5 (4), 297–310. https://doi.org/10.1080/19312458.2011.624490

Neeru, Khakha, D. C., Satapathy, S., & Dey, A. B. (2015). Impact of Jacobson Progressive Muscle Relaxation (JPMR) and deep breathing exercises on anxiety, psychological distress and quality of sleep of hospitalized older adults. Journal of Psychosocial Research , 10 (2), 211–223.

Neukrug, E. S., & Fawcett, R. C. (2015). Essentials of testing and assessment: A practical guide for counselors, social workers, and psychologists (3rd ed.). Cengage.

Onwuegbuzie, A. J., & Leech, N. L. (2005). On becoming a pragmatic researcher: The importance of combining quantitative and qualitative research methodologies. International Journal of Social Research Methodology , 8 (5), 375–387. https://doi.org/10.1080/13645570500402447

Patterson, M. S., Gagnon, L. R., Vukelich, A., Brown, S. E., Nelon, J. L., & Prochnow, T. (2021). Social networks, group exercise, and anxiety among college students. Journal of American College Health , 69 (4), 361–369. https://doi.org/10.1080/07448481.2019.1679150

Rivera-Riquelme, M., Piqueras, J. A., & Cuijpers, P. (2019). The Revised Mental Health Inventory-5 (MHI-5) as an ultra-brief screening measure of bidimensional mental health in children and adolescents. Psychiatry Research , 247 (1), 247–253. https://doi.org/10.1016/j.psychres.2019.02.045

Rovai, A. P., Baker, J. D., & Ponton, M. K. (2013). Social science research design and statistics: A practitioner’s guide to research methods and SPSS analysis . Watertree Press.

Servan-Schreiber, D. (2009). Anticancer: A new way of life (3rd ed.). Viking Publishing.

Sink, C. A., & Mvududu, N. H. (2010). Statistical power, sampling, and effect sizes: Three keys to research relevancy. Counseling Outcome Research and Evaluation , 1 (2), 1–18. https://doi.org/10.1177/2150137810373613

Sink, C. A., & Stroh, H. R. (2006). Practical significance: The use of effect sizes in school counseling research. Professional School Counseling , 9 (5), 401–411. https://doi.org/10.1177/2156759X0500900406

Smagorinsky, P. (2008). The method section as conceptual epicenter in constructing social science research reports. Written Communication , 25 (3), 389–411. https://doi.org/10.1177/0741088308317815

Spitzer, R. L., Kroenke, K., Williams, J. B. W., & Löwe, B. (2006). A brief measure for assessing Generalized Anxiety Disorder: The GAD-7. Archives of Internal Medicine , 166 (10), 1092–1097. https://doi.org/10.1001/archinte.166.10.1092

Sriken, J., Johnsen, S. T., Smith, H., Sherman, M. F., & Erford, B. T. (2022). Testing the factorial validity and measurement invariance of college student scores on the Generalized Anxiety Disorder (GAD-7) Scale across gender and race. Measurement and Evaluation in Counseling and Development , 55 (1), 1–16. https://doi.org/10.1080/07481756.2021.1902239

Stamm, B. H. (2010). The Concise ProQOL Manual (2nd ed.). bit.ly/StammProQOL

Strauss, M. E., & Smith, G. T. (2009). Construct validity: Advances in theory and methodology. Annual Review of Clinical Psychology , 5 , 1–25. https://doi.org/10.1146/annurev.clinpsy.032408.153639

Swank, J. M., & Mullen, P. R. (2017). Evaluating evidence for conceptually related constructs using bivariate correlations. Measurement and Evaluation in Counseling and Development , 50 (4), 270–274. https://doi.org/10.1080/07481756.2017.1339562

Thorsen, S. V., Rugulies, R., Hjarsbech, P. U., & Bjorner, J. B. (2013). The predictive value of mental health for long-term sickness absence: The Major Depression Inventory (MDI) and the Mental Health Inventory (MHI-5) compared. BMC Medical Research Methodology , 13 (1), Article 115. https://doi.org/10.1186/1471-2288-13-115

Vijayan, P., & Joseph, M. I. (2015). Wellness and social interaction anxiety among adolescents. Indian Journal of Health and Wellbeing , 6 (6), 637–639.

Wester, K. L., Borders, L. D., Boul, S., & Horton, E. (2013). Research quality: Critique of quantitative articles in the Journal of Counseling & Development . Journal of Counseling & Development , 91 (3), 280–290. https://doi.org/10.1002/j.1556-6676.2013.00096.x

Appendix Outline and Brief Overview of a Quantitative Methods Section

  • Research design (e.g., group comparison [experimental, quasi-experimental, ex-post-facto], correlational/predictive) and conceptual framework
  • Researcher bias and reflexivity statement

Participants and Procedures

  • Recruitment procedures for data collection in enough detail for replication
  • Research ethics including but not limited to receiving institutional review board (IRB) approval
  • Sampling procedure: Researcher access to prospective participants, recruitment procedures, and data collection modality (e.g., online survey)
  • Sampling technique: Probability sampling (e.g., simple random sampling, systematic random sampling, stratified random sampling, cluster sampling) or non-probability sampling (e.g., volunteer sampling, convenience sampling, purposive sampling, quota sampling, snowball sampling, matched sampling)
  • A priori statistical power analysis
  • Sampling frame, response rate, raw sample, missing data, and the size of the final useable sample
  • Demographic breakdown for participants
  • Timeframe, setting, and location where data were collected
  • Introduction of the instrument and construct(s) of measurement (include sample test items)
  • *Note: At a minimum, internal structure validity evidence of scores should include both exploratory factor analysis (EFA) and confirmatory factor analysis (CFA).
  • *Note: Reporting coefficient alpha alone, without statistical assumption checking, is insufficient. Compute both coefficient omega and alpha, or alpha with proper assumption checking.
  • Review and citations of original psychometric studies and normative samples
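The notes above warn against reporting coefficient alpha without further evidence. As a reminder of what alpha actually computes, here is a minimal sketch in plain Python (the data layout, one list of respondent scores per item, is our own illustration, not part of the outline):

```python
def coefficient_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).

    items: one inner list per test item, each holding that item's scores
    across the same respondents (hypothetical layout for illustration).
    """
    def var(xs):  # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    k = len(items)
    n_respondents = len(items[0])
    item_variance_sum = sum(var(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n_respondents)]
    return (k / (k - 1)) * (1 - item_variance_sum / var(totals))
```

In practice one would also report coefficient omega, which is derived from a fitted factor model and which this sketch does not attempt.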

Data Analysis

  • Operationalized variables and scales of measurement
  • Procedures for matching variables with appropriate statistical analyses
  • Assumption checking procedures
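As one illustration of assumption checking, a rough moments-based normality screen can be computed by hand. This is only a rule-of-thumb sketch (the ±2 cutoff is a common convention, not a requirement from this outline); real analyses would use formal tests such as Shapiro–Wilk from a statistics package:

```python
def skewness(xs):
    # third standardized moment (population form)
    n = len(xs)
    m = sum(xs) / n
    s = (sum((x - m) ** 2 for x in xs) / n) ** 0.5
    return sum(((x - m) / s) ** 3 for x in xs) / n

def excess_kurtosis(xs):
    # fourth standardized moment minus 3 (0 for a normal distribution)
    n = len(xs)
    m = sum(xs) / n
    s = (sum((x - m) ** 2 for x in xs) / n) ** 0.5
    return sum(((x - m) / s) ** 4 for x in xs) / n - 3

def roughly_normal(xs, limit=2.0):
    # common rule-of-thumb screen, not a formal statistical test
    return abs(skewness(xs)) < limit and abs(excess_kurtosis(xs)) < limit
```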

Note. This appendix is a brief summary and not a substitute for the narrative in the text of this article.

Michael T. Kalkbrenner, PhD, NCC, is an associate professor at New Mexico State University. Correspondence may be addressed to Michael T. Kalkbrenner, 1780 E. University Ave., Las Cruces, NM 88003, [email protected].



This type of report is likely to be discarded well before the end. If it contained important new knowledge, that information never reaches its intended destination. The journal would likely not publish it, and had it been published, it would have frustrated readers to the point of confusion and disregard. We therefore follow a specific writing style to avoid this kind of mess. And while following a style may seem time consuming and frustrating in itself, it helps ensure that your newfound knowledge makes its way into the world.

The American Psychological Association (APA) has developed the most widely known and most used manual of publication style in the social sciences. The current edition is the seventh, published in 2019. While the text is somewhat daunting at first glance, the style ensures that your knowledge will be disseminated in an organized and understandable fashion.

Most research reports follow a specific list of sections, as recommended by this manual. These sections include: Title Page, Abstract, Introduction, Methods, Results, Discussion, References, Appendices, and Author Note. Each of these areas is summarized below, but for any serious researcher, understanding the specifics of the APA manual is imperative.

Title Page.

The title page of a research report serves two important functions. First, it provides a quick summary of the research, including the title of the article, the authors’ names, and their affiliation. Second, it provides a means for blind evaluation. When submitted to a professional journal, a short title (the running head) is placed on the title page and carried throughout the remainder of the paper. Since the authors’ names and affiliation appear only on the title page, removing this page prior to review reduces the chance of bias by the journal reviewers. Once the reviews are complete, the title page is reattached and the recommendations of the reviewers can be returned to the authors.

Abstract.

The abstract is the second page of the research report. Consider the abstract a short summary of the article. It is typically between 100 and 150 words and summarizes the major areas of the paper. An abstract often includes the problem or original theory, a one- or two-sentence explanation of previous research in the area, the characteristics of the present study, the results, and a brief discussion statement. The abstract allows readers to quickly understand what the article is about and helps them decide whether further reading will be helpful.

Introduction.

The main body of the paper has four sections, with the introduction being the first.  The purpose of the introduction is to introduce the reader to the topic and discuss the background of the issue at hand.  For instance, in our article on work experience, the introduction would likely include a statement of the problem, for example: “prior work experience may play an important role in student achievement in college.”

The introduction also includes a literature review, which typically follows the introduction of the topic. All of the research you completed while developing your study goes here. It is important to bring readers up to date and lead them into why you decided to conduct this study. You might cite research related to motivation and success after college and argue that gaining prior work experience may delay college graduation but also improves the college experience and may ultimately further an individual’s career. You might also review research that argues against your theory. The goal of the introduction is to lead readers into your study so that they have a solid background in the material and an understanding of your rationale.

Methods.

The methods section is the second part of the body of the article. Methods refers to the actual procedures used to perform the research. Areas discussed usually include subject recruitment and assignment to groups, subject attributes, and possibly pretest findings. Any surveys or treatments are also discussed in this section. The main point of the methods section is to allow others to critique your research and replicate it if desired. It is often the most systematic section in that small details are typically included in order to help others critique, evaluate, and/or replicate the research process.

Results.

Most experimental studies include a statistical analysis of the results, which is the major focus of the results section. Included here are the procedures and statistical analyses performed, the rationale for choosing specific procedures, and ultimately the results. Charts, tables, and graphs are often included to better explain the treatment effects or the differences and similarities between groups. Ultimately, the end of the results section reports the acceptance or rejection of the null hypothesis; for example, is there a difference between the grades of students with prior work experience and students without it?

Discussion.

While the first three sections of the body are specific in terms of what is included, the discussion section can be less formal. It gives the authors the opportunity to critique the research in a less formal manner, to discuss how the results apply to real life, or even how they fail to support the original theory. The discussion is also often used to suggest needs for additional research in areas related to the current study.

References.

Throughout the paper, and especially in the introduction, articles from other authors are cited. The references section lists every article cited in the paper. You may also see a section of recommended readings, referring to important articles related to the topic that were not cited in the actual paper.

Appendices.

Appendices are always included at the end of the paper.  Graphs, charts, and tables are also included at the end, in part due to changes that may take place when the paper is formatted for publication.  Appendices should include only material that is relevant and assists the reader in understanding the current study.  Actual raw data is rarely included in a research paper.

Author Note.

Finally, the authors are permitted to include a short note at the end of the paper.  This note is often personal and may be used to thank colleagues who assisted in the research but not to the degree of warranting co-authorship.  This section can also be used to inform the reader that the current study is part of a larger study or represents the results of a dissertation.  The author note is very short, usually no more than a few sentences.

Journal of Counseling Psychology

Journal scope statement

The Journal of Counseling Psychology® publishes empirical research in the areas of

  • counseling activities (including assessment, interventions, consultation, supervision, training, prevention, psychological education, and advocacy)
  • career and educational development and vocational psychology
  • diversity and underrepresented populations in relation to counseling activities
  • the development of new measures to be used in counseling activities
  • professional issues in counseling psychology

In addition, the Journal of Counseling Psychology considers reviews or theoretical contributions that have the potential for stimulating further research in counseling psychology, and conceptual or empirical contributions about methodological issues in counseling psychology research.

The Journal of Counseling Psychology considers manuscripts that deal with clients who are not severely disturbed, who have problems with living, or who are experiencing developmental crises. Manuscripts that deal with the strengths or healthy aspects of more severely disturbed clients also are considered. The Journal of Counseling Psychology also considers manuscripts that focus on optimizing the potentials, accelerating the development, or enhancing the well-being of non-client populations.

Both quantitative and qualitative methods are appropriate. Extensions of previous studies, implications for public policy or social action, and counseling research and applications are encouraged.

Disclaimer: APA and the editors of Journal of Counseling Psychology assume no responsibility for statements and opinions advanced by the authors of its articles.

Equity, diversity, and inclusion

Journal of Counseling Psychology supports equity, diversity, and inclusion (EDI) in its practices. More information on these initiatives is available under EDI Efforts.

Open science

The APA Journals Program is committed to publishing transparent, rigorous research; improving reproducibility in science; and aiding research discovery. Open science practices vary at the editor's discretion. View the initiatives implemented by this journal.

Editor’s Choice

Each issue of the Journal of Counseling Psychology honors one article as the “Editor’s Choice.” Selection is based on nominations by the associate editors. The criteria are a large potential impact on the field of counseling psychology specifically and psychology generally, and/or elevating an important future direction for scientific inquiry.

Author and editor spotlights

Explore journal highlights: free article summaries, editor interviews and editorials, journal awards, mentorship opportunities, and more.

Prior to submission, please carefully read and follow the submission guidelines detailed below. Manuscripts that do not conform to the submission guidelines may be returned without review.

The completion of a Manuscript Submission Checklist (PDF, 42KB) that signifies that authors have read this material and agree to adhere to the guidelines is now required. The checklist should follow the cover letter as part of the submission.

To submit to the editorial office of William Ming Liu, PhD, please submit manuscripts electronically through the Manuscript Submission Portal in Microsoft Word (.docx) or OpenOffice format, or in LaTeX (.tex) as a zip file accompanied by a PDF of the manuscript file.

Prepare manuscripts according to the Publication Manual of the American Psychological Association, 7th edition. Manuscripts may be copyedited for bias-free language (see Chapter 5 of the Publication Manual). APA Style and Grammar Guidelines for the 7th edition are available.


General correspondence may be directed to:

William Ming Liu, PhD
Department of Counseling, Higher Education & Special Education
University of Maryland
3214 Benjamin Building
College Park, MD 20742
United States of America

General correspondence may be directed to the editorial office via email.

In addition to addresses, phone numbers, and the names of all coauthors, please supply electronic mail addresses and fax numbers of the corresponding author for potential use by the editorial office and later by the production office.

The Journal of Counseling Psychology ® is now using a software system to screen submitted content for similarity with other published content. The system compares the initial version of each submitted manuscript against a database of 40+ million scholarly documents, as well as content appearing on the open web. This allows APA to check submissions for potential overlap with material previously published in scholarly journals (e.g., lifted or republished material).

Manuscript details

The Journal of Counseling Psychology publishes theoretical, empirical, and methodological articles on multicultural aspects of counseling, counseling interventions, assessment, consultation, prevention, career development, and vocational psychology and features studies on the supervision and training of counselors.

Particular attention is given to empirical studies on the evaluation and application of counseling interventions and the applications of counseling with diverse and underrepresented populations.

  • View Guidelines for Reviewing Manuscripts

Manuscripts should be concisely written in simple, unambiguous language, using bias-free language. Present material in logical order, starting with a statement of purpose and progressing through an analysis of evidence to conclusions and implications. The conclusions should be clearly related to the evidence presented.

Manuscript title

The manuscript title should be accurate, fully explanatory, and preferably no longer than 12 words.

Abstract

Manuscripts must be accompanied by an abstract of no more than 250 words. The abstract should clearly and concisely describe the hypotheses or research questions, research participants, and procedure. The abstract should not be used to present the rationale for the study, but instead should provide a summary of key research findings.

All results described in the abstract should accurately reflect findings reported in the body of the paper and should not characterize findings in stronger terms than the article does. For example, hypotheses described in the body of the paper as having received mixed support should be summarized similarly in the abstract.

One double spaced line below the abstract, please provide up to five keywords as an aid to indexing.
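The abstract and keyword limits above are easy to check mechanically before submission. The following is a convenience sketch (the function name and message wording are our own, not an official tool):

```python
def check_abstract_page(abstract: str, keywords: list) -> list:
    """Flag violations of the limits stated above: an abstract of no more
    than 250 words and up to five keywords."""
    problems = []
    word_count = len(abstract.split())
    if word_count > 250:
        problems.append(f"abstract is {word_count} words (max 250)")
    if len(keywords) > 5:
        problems.append(f"{len(keywords)} keywords (max 5)")
    return problems
```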

Public significance statement

Authors submitting manuscripts to the Journal of Counseling Psychology are required to provide a short statement of one to two sentences to summarize the article's findings and significance to the educated public (e.g., understanding human thought, feeling, and behavior and/or assisting with solutions to psychological or societal problems). This description should be included within the manuscript on the abstract/keywords page.

  • View Guidance for Translational Abstracts and Public Significance Statements

Equity, Diversity, and Inclusion in Journal of Counseling Psychology

Journal of Counseling Psychology  is committed to improving equity, diversity, and inclusion (EDI) in scientific research, in line with the APA Publishing EDI framework and APA’s trio of 2021 resolutions to address systemic racism in psychology.

The journal encourages submissions which extend beyond Western, educated, industrialized, rich, and democratic (WEIRD) samples (Henrich et al., 2010). The journal welcomes submissions which feature Black, Indigenous, and People of Color (BIPOC) and other marginalized communities. The journal particularly welcomes submissions which feature collaborative research models (e.g., community-based participatory research [CBPR]; see Collins et al., 2018) and study designs that address heterogeneity within diverse samples.

The Journal of Counseling Psychology encourages authors to consider the ways in which power, justice, equity, marginalization, liberation, and healing are intertwined for people from diverse and thriving communities. People from communities that have been (and continue to be) marginalized because of racism, anti-Blackness, sexism, classism, immigration status, ageism, homophobia, ableism, heterosexism, transphobia, and so on have often had their ways of knowing and living minimized, erased, and not considered science or worthy of scholarship (epistemic exclusion; Settles et al., 2021). As a journal, we encourage authors from these communities to submit studies, critical reviews, conceptualizations, and theorizations that challenge our foundational assumptions and advance our research. Additionally, we encourage authors to use theories like intersectionality to ground their use and interpretation of concepts and to examine systems and processes (i.e., racism, not just race).

To promote a more equitable research and publication process, Journal of Counseling Psychology has adopted the following standards for inclusive research reporting.

Author contribution statements using CRediT

The APA Publication Manual (7th ed.) stipulates that “authorship encompasses…not only persons who do the writing but also those who have made substantial scientific contributions to a study.” In the spirit of transparency and openness, Journal of Counseling Psychology has adopted the Contributor Roles Taxonomy (CRediT) to describe each author's individual contributions to the work. CRediT offers authors the opportunity to share an accurate and detailed description of their diverse contributions to a manuscript.

Submitting authors will be asked to identify the contributions of all authors at initial submission according to the CRediT taxonomy. If the manuscript is accepted for publication, the CRediT designations will be published as an author contributions statement in the author note of the final article. All authors should have reviewed and agreed to their individual contribution(s) before submission.

CRediT includes 14 contributor roles, as described below:

  • Conceptualization : Ideas; formulation or evolution of overarching research goals and aims.
  • Data curation : Management activities to annotate (produce metadata), scrub data and maintain research data (including software code, where it is necessary for interpreting the data itself) for initial use and later re-use.
  • Formal analysis : Application of statistical, mathematical, computational, or other formal techniques to analyze or synthesize study data.
  • Funding acquisition : Acquisition of the financial support for the project leading to this publication.
  • Investigation : Conducting a research and investigation process, specifically performing the experiments, or data/evidence collection.
  • Methodology : Development or design of methodology; creation of models.
  • Project administration : Management and coordination responsibility for the research activity planning and execution.
  • Resources : Provision of study materials, reagents, materials, patients, laboratory samples, animals, instrumentation, computing resources, or other analysis tools.
  • Software : Programming, software development; designing computer programs; implementation of the computer code and supporting algorithms; testing of existing code components.
  • Supervision : Oversight and leadership responsibility for the research activity planning and execution, including mentorship external to the core team.
  • Validation : Verification, whether as a part of the activity or separate, of the overall replication/reproducibility of results/experiments and other research outputs.
  • Visualization : Preparation, creation and/or presentation of the published work, specifically visualization/data presentation.
  • Writing—original draft : Preparation, creation and/or presentation of the published work, specifically writing the initial draft (including substantive translation).
  • Writing—review & editing : Preparation, creation and/or presentation of the published work by those from the original research group, specifically critical review, commentary or revision—including pre- or post-publication stages.

Authors can claim credit for more than one contributor role, and the same role can be attributed to more than one author. Not all roles will be applicable to a particular scholarly work.
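A CRediT mapping is naturally a many-to-many structure between authors and roles. The sketch below renders a contributions statement from such a mapping; the author names and the statement format are illustrative only, not the journal's required wording:

```python
# Hypothetical author-role mapping; role names follow the CRediT list above.
contributions = {
    "First Author": ["Conceptualization", "Methodology", "Writing—original draft"],
    "Second Author": ["Formal analysis", "Writing—review & editing"],
}

def author_contribution_note(contribs: dict) -> str:
    """Render an author contributions statement from an author-to-roles
    mapping (format is illustrative, not the journal's template)."""
    return " ".join(
        f"{author}: {', '.join(roles)}." for author, roles in contribs.items()
    )
```

Because the same role can be attributed to more than one author, the mapping can also be inverted (role to authors) without loss of information.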

Masked review policy

This journal has adopted a policy of masked review for all submissions.

The cover letter should include all authors' names and institutional affiliations. Author notes providing this information should also appear at the bottom of the title page, which will be removed before the manuscript is sent for masked review.

Make every effort to see that the manuscript itself contains no clues to the authors' identity, including grant numbers, names of institutions providing IRB approval, self-citations, and links to online repositories for data, materials, code, or preregistrations (e.g., Create a View-only Link for a Project ).

Cover letter

The cover letter accompanying the manuscript submission must include all authors' names and affiliations to avoid potential conflicts of interest in the review process. Provide addresses and phone numbers, as well as electronic mail addresses and fax numbers, if available, for all authors for use by the editorial office and later by the production office.

The cover letter must clearly state the order of authorship and confirm that this order corresponds to the authors' relative contributions to the research effort reported in the manuscript.

Fragmented (or piecemeal) publication involves dividing the report of a research project into multiple articles. In some circumstances, it may be appropriate to publish more than one report based on overlapping data. However, the authors of such manuscripts must inform the editor in the cover letter about any other previous publication or manuscript currently in review that is based—even in part—on data reported in the present manuscript.

Authors are obligated to inform the editor about the existence of other reports from the same research project in the cover letter accompanying the current submission. Manuscripts found to have violated this policy may be returned without review.

Length and style of manuscripts

Full-length manuscripts reporting results of a single quantitative study generally should not exceed 35 pages total (including cover page, abstract, text, references, tables, and figures), with margins of at least 1 inch on all sides and a standard font (e.g., Times New Roman) of 12 points (no smaller). The entire paper (text, references, tables, etc.) must be double spaced.

Reports of qualitative studies generally should not exceed 45 pages. For papers that exceed these page limits, authors must provide a rationale to justify the extended length in their cover letter (e.g., multiple studies are reported). Papers that do not conform to these guidelines may be returned with instructions to revise before a peer review is invited.

The Journal of Counseling Psychology encourages direct replications, preferably with an extension. Submissions should include “A Replication of XX Study” in the subtitle of the manuscript as well as in the abstract.

Brief reports

In addition to full-length manuscripts, the journal will consider brief reports. The brief reports format may be appropriate for empirically sound studies that are limited in scope, reports of preliminary findings that need further replication, or replications and extensions of prior published work.

Authors should indicate in the cover letter that they wish to have their manuscript considered as a brief report, and they must agree not to submit the full report to another journal.

The brief report should give a clear, condensed summary of the procedure of the study and as full an account of the results as space permits.

Brief reports are generally 20–25 pages in total length (including cover page, abstract, text, references, tables, and figures) and must follow the same format requirements as full-length manuscripts. Brief reports that exceed 25 pages will not be considered.
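The page limits above (35 pages for quantitative studies, 45 for qualitative studies, 25 for brief reports, all totals including cover page, abstract, text, references, tables, and figures) can be summarized as a small triage rule. This is our own sketch of the policy, not an official checker:

```python
# Page limits stated in the guidelines above (total pages).
PAGE_LIMITS = {"quantitative": 35, "qualitative": 45, "brief": 25}

def length_check(manuscript_type: str, total_pages: int) -> str:
    """Within limits, needs a rationale in the cover letter, or
    (for brief reports over 25 pages) not considered at all."""
    limit = PAGE_LIMITS[manuscript_type]
    if total_pages <= limit:
        return "within limit"
    if manuscript_type == "brief":
        return "not considered"  # brief reports that exceed 25 pages
    return "rationale required in cover letter"
```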

Manuscript preparation

Prepare manuscripts according to the Publication Manual of the American Psychological Association (6th or 7th edition). Manuscripts may be copyedited for bias-free language (see Chapter 3 of the 6th edition or Chapter 5 of the 7th edition).

Double-space all copy. Other formatting instructions, as well as instructions on preparing tables, figures, references, metrics, and abstracts, appear in the Manual . Additional guidance on APA Style is available on the APA Style website .

Below are additional instructions regarding the preparation of display equations, computer code, and tables.

Display equations

We strongly encourage you to use MathType (third-party software) or Equation Editor 3.0 (built into pre-2007 versions of Word) to construct your equations, rather than the equation support that is built into Word 2007 and Word 2010. Equations composed with the built-in Word 2007/Word 2010 equation support are converted to low-resolution graphics when they enter the production process and must be rekeyed by the typesetter, which may introduce errors.

To construct your equations with MathType or Equation Editor 3.0:

  • Go to the Text section of the Insert tab and select Object.
  • Select MathType or Equation Editor 3.0 in the drop-down menu.

If you have an equation that has already been produced using Microsoft Word 2007 or 2010 and you have access to the full version of MathType 6.5 or later, you can convert this equation to MathType by clicking on MathType Insert Equation. Copy the equation from Microsoft Word and paste it into the MathType box. Verify that your equation is correct, click File, and then click Update. Your equation has now been inserted into your Word file as a MathType Equation.

Use Equation Editor 3.0 or MathType only for equations or for formulas that cannot be produced as Word text using the Times or Symbol font.

Computer code

Because altering computer code in any way (e.g., indents, line spacing, line breaks, page breaks) during the typesetting process could alter its meaning, we treat computer code differently from the rest of your article in our production process. To that end, we request separate files for computer code.

In online supplemental material

We request that runnable source code be included as supplemental material to the article. For more information, visit Supplementing Your Article With Online Material .

In the text of the article

If you would like to include code in the text of your published manuscript, please submit a separate file with your code exactly as you want it to appear, using Courier New font with a type size of 8 points. We will make an image of each segment of code in your article that exceeds 40 characters in length. (Shorter snippets of code that appear in text will be typeset in Courier New and run in with the rest of the text.) If an appendix contains a mix of code and explanatory text, please submit a file that contains the entire appendix, with the code keyed in 8-point Courier New.
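The 40-character threshold above determines how an in-text code snippet is typeset. As a trivial illustration (function name ours):

```python
def code_display_mode(snippet: str) -> str:
    """Per the guideline above: in-text code segments over 40 characters
    are set as images; shorter snippets run in with the text in Courier New."""
    return "image" if len(snippet) > 40 else "inline (Courier New)"
```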

Tables

Use Word's insert table function when you create tables. Using spaces or tabs in your table will create problems when the table is typeset and may result in errors.

Academic writing and English language editing services

Authors who feel that their manuscript may benefit from additional academic writing or language editing support prior to submission are encouraged to seek out such services at their host institutions, engage with colleagues and subject matter experts, and/or consider several vendors that offer discounts to APA authors .

Please note that APA does not endorse or take responsibility for the service providers listed. It is strictly a referral service.

Use of such service is not mandatory for publication in an APA journal. Use of one or more of these services does not guarantee selection for peer review, manuscript acceptance, or preference for publication in any APA journal.

Submitting supplemental materials

APA can place supplemental materials online, available via the published article in the PsycArticles ® database. Please see Supplementing Your Article With Online Material for more details.

References

List references in alphabetical order. Each listed reference should be cited in text, and each text citation should be listed in the references section.
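The citation-to-reference correspondence can be spot-checked mechanically. The sketch below handles only simple parenthetical citations such as "(Smith, 2019)" or "(Smith et al., 2019)" matched by first-author surname and year; it is a toy illustration, not a full APA citation parser:

```python
import re

# "(Surname, 2019)" or "(Surname et al., 2019)" in the body text
CITATION = re.compile(r"\(([A-Za-z'’-]+)(?: et al\.)?, (\d{4})\)")
# "Surname, X. Y., ... (2019). Title..." at the start of a reference entry
REFERENCE = re.compile(r"^([A-Za-z'’-]+),.*?\((\d{4})\)")

def cross_check(body_text: str, reference_entries: list) -> dict:
    """Report citations with no reference entry, and entries never cited."""
    cited = set(CITATION.findall(body_text))
    listed = set()
    for entry in reference_entries:
        m = REFERENCE.match(entry)
        if m:
            listed.add(m.groups())
    return {"cited_but_not_listed": cited - listed,
            "listed_but_never_cited": listed - cited}
```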

Examples of basic reference formats:

Journal article

McCauley, S. M., & Christiansen, M. H. (2019). Language learning as language use: A cross-linguistic model of child language development. Psychological Review, 126(1), 1–51. https://doi.org/10.1037/rev0000126

Authored book

Brown, L. S. (2018). Feminist therapy (2nd ed.). American Psychological Association. https://doi.org/10.1037/0000092-000

Chapter in an edited book

Balsam, K. F., Martell, C. R., Jones, K. P., & Safren, S. A. (2019). Affirmative cognitive behavior therapy with sexual and gender minority people. In G. Y. Iwamasa & P. A. Hays (Eds.), Culturally responsive cognitive behavior therapy: Practice and supervision (2nd ed., pp. 287–314). American Psychological Association. https://doi.org/10.1037/0000119-012

Data set citation

Alegria, M., Jackson, J. S., Kessler, R. C., & Takeuchi, D. (2016). Collaborative Psychiatric Epidemiology Surveys (CPES), 2001–2003 [Data set]. Inter-university Consortium for Political and Social Research. https://doi.org/10.3886/ICPSR20240.v8

Software/Code citation

Viechtbauer, W. (2010). Conducting meta-analyses in R with the metafor package. Journal of Statistical Software, 36(3), 1–48. https://www.jstatsoft.org/v36/i03/

Wickham, H., et al. (2019). Welcome to the tidyverse. Journal of Open Source Software, 4(43), 1686. https://doi.org/10.21105/joss.01686

All data, program code and other methods must be appropriately cited in the text and listed in the references section.

Figures

Preferred formats for graphics files are TIFF and JPG, and the preferred format for vector-based files is EPS. Graphics downloaded or saved from web pages are not acceptable for publication. Multipanel figures (i.e., figures with parts labeled a, b, c, d, etc.) should be assembled into one file. When possible, please place symbol legends below the figure instead of to the side.

Resolution

  • All color line art and halftones: 300 DPI
  • Black and white line tone and gray halftone images: 600 DPI

Line weights

  • Color (RGB, CMYK) images: 2 pixels
  • Grayscale images: 4 pixels
  • Stroke weight: 0.5 points
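Taken together, the resolution targets above determine the minimum pixel dimensions a figure file needs at its intended print size. A minimal sketch, assuming hypothetical names for the two resolution classes (the function and its `kind` labels are illustrative, not APA terminology):

```python
def min_pixel_width(print_width_inches, kind="color_line_art"):
    """Minimum pixel width to meet the stated resolution targets.

    kind: "color_line_art" (300 DPI, color line art and halftones) or
          "bw_line_tone" (600 DPI, black and white line tone / gray halftone).
    """
    dpi = {"color_line_art": 300, "bw_line_tone": 600}
    return print_width_inches * dpi[kind]

# A color figure intended to print 5 inches wide needs 5 * 300 = 1500 pixels
```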

APA offers authors the option to publish their figures online in color without the costs associated with print publication of color figures.

The same caption will appear on both the online (color) and print (black and white) versions. To ensure that the figure can be understood in both formats, authors should add alternative wording (e.g., “the red (dark gray) bars represent”) as needed.

For authors who prefer their figures to be published in color both in print and online, original color figures can be printed in color at the editor's and publisher's discretion provided the author agrees to pay:

  • $900 for one figure
  • An additional $600 for the second figure
  • An additional $450 for each subsequent figure
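The schedule above is cumulative, so the total fee for any number of color figures is easy to compute. A small sketch (the function name is illustrative):

```python
def color_figure_fee(n_figures):
    """Total print color-figure fee: $900 for the first figure,
    $600 for the second, and $450 for each subsequent figure."""
    if n_figures <= 0:
        return 0
    if n_figures == 1:
        return 900
    return 900 + 600 + 450 * (n_figures - 2)

# color_figure_fee(3) totals 900 + 600 + 450 = 1950
```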

Journal Article Reporting Standards

Authors should review the APA Style Journal Article Reporting Standards (JARS) for quantitative, qualitative, and mixed methods research. The standards offer ways to improve transparency in reporting, ensuring that readers have the information necessary to evaluate the quality of the research and facilitating collaboration and replication. The standards:

  • Recommend the division of hypotheses, analyses, and conclusions into primary, secondary, and exploratory groupings to allow for a full understanding of quantitative analyses presented in a manuscript and to enhance reproducibility;
  • Offer modules for authors reporting on replications, clinical trials, longitudinal studies, and observational studies, as well as the analytic methods of structural equation modeling and Bayesian analysis;
  • Include guidelines on the reporting of study preregistration (including making protocols public); participant characteristics, including demographic characteristics; inclusion and exclusion criteria; psychometric characteristics of outcome measures and other variables; and planned data diagnostics and analytic strategy.

The guidelines focus on transparency in methods reporting, recommending descriptions of how the researcher’s own perspective affected the study, as well as the contexts in which the research and analysis took place.

Transparency and openness

APA endorses the Transparency and Openness Promotion (TOP) Guidelines, developed by a community working group in conjunction with the Center for Open Science (Nosek et al., 2015). Effective July 1, 2021, empirical research, including meta-analyses, submitted to the Journal of Counseling Psychology must meet the “disclosure” level for all eight aspects of research planning and reporting. Authors should include a subsection in the Method section titled “Transparency and Openness” that details the efforts the authors have made to comply with the TOP Guidelines. For example:

  • We report how we determined our sample size, all data exclusions (if any), all manipulations, and all measures in the study, and we follow JARS (Kazak, 2018). All data, analysis code, and research materials are available at [stable link to repository]. Data were analyzed using R, version 4.0.0 (R Core Team, 2020) and the package ggplot, version 3.2.1 (Wickham, 2016). This study’s design and its analysis were not pre-registered.

Links to preregistrations and data, code, and materials should also be included in the author note.

Data, materials, and code

Authors must state whether data and study materials are available and, if so, where to access them. Recommended repositories include APA’s repository on the Open Science Framework (OSF); a full list of other recommended repositories is also available.

In the author note and at the end of the method section, specify whether and where the data and materials will be available, or include a statement noting that they are not available. For submissions with quantitative or simulation analytic methods, state whether the study analysis code is available and, if so, where to access it.

For example:

  • All data have been made publicly available at the [repository name] and can be accessed at [persistent URL or DOI].
  • Materials and analysis code for this study are available by emailing the corresponding author.
  • Materials and analysis code for this study are not available.
  • The code behind this analysis/simulation has been made publicly available at the [repository name] and can be accessed at [persistent URL or DOI].

Preregistration of studies and analysis plans

Preregistration of studies and specific hypotheses can be a useful tool for making strong theoretical claims. Likewise, preregistration of analysis plans can be useful for distinguishing confirmatory from exploratory analyses. Investigators are encouraged to preregister their studies and analysis plans prior to conducting the research, using a template (e.g., the Preregistration for Quantitative Research in Psychology template) and a publicly accessible registry system (e.g., OSF, ClinicalTrials.gov, or other trial registries in the WHO Registry Network).

Articles must state whether or not any work was preregistered and, if so, where to access the preregistration. If any aspect of the study is preregistered, include the registry link in the method section and the author note. For example:

  • This study’s design was preregistered; see [STABLE LINK OR DOI].
  • This study’s design and hypotheses were preregistered; see [STABLE LINK OR DOI].
  • This study’s analysis plan was preregistered; see [STABLE LINK OR DOI].
  • This study was not preregistered.

Permissions

Authors of accepted papers must obtain and provide to the editor on final acceptance all necessary permissions to reproduce in print and electronic form any copyrighted work, including test materials (or portions thereof), photographs, and other graphic images (including those used as stimuli in experiments).

On advice of counsel, APA may decline to publish any image whose copyright status is unknown.

  • Download Permissions Alert Form (PDF, 13KB)

Publication policies

For full details on publication policies, including the use of artificial intelligence tools, please see APA Publishing Policies.

APA policy prohibits an author from submitting the same manuscript for concurrent consideration by two or more publications.

See also the APA Journals® Internet Posting Guidelines.

APA requires authors to reveal any possible conflict of interest in the conduct and reporting of research (e.g., financial interests in a test or procedure, funding by pharmaceutical companies for drug research).

  • Download Full Disclosure of Interests Form (PDF, 41KB)

In light of changing patterns of scientific knowledge dissemination, APA requires authors to provide information on prior dissemination of the data and narrative interpretations of the data/research appearing in the manuscript (e.g., if some or all were presented at a conference or meeting, posted on a listserv, shared on a website, including academic social networks like ResearchGate, etc.). This information (2–4 sentences) must be provided as part of the author note.

Ethical Principles

It is a violation of APA Ethical Principles to publish "as original data, data that have been previously published" (Standard 8.13).

In addition, APA Ethical Principles specify that "after research results are published, psychologists do not withhold the data on which their conclusions are based from other competent professionals who seek to verify the substantive claims through reanalysis and who intend to use such data only for that purpose, provided that the confidentiality of the participants can be protected and unless legal rights concerning proprietary data preclude their release" (Standard 8.14).

APA expects authors to adhere to these standards. Specifically, APA expects authors to have their data available throughout the editorial review process and for at least 5 years after the date of publication.

Authors are required to state in writing that they have complied with APA ethical standards in the treatment of their sample, human or animal, or to describe the details of treatment.

  • Download Certification of Compliance With APA Ethical Principles Form (PDF, 26KB)

The APA Ethics Office provides the full Ethical Principles of Psychologists and Code of Conduct electronically on its website in HTML, PDF, and Word format. You may also request a copy by emailing or calling the APA Ethics Office (202-336-5930). You may also read “Ethical Principles,” December 1992, American Psychologist, Vol. 47, pp. 1597–1611.

Other information

See APA’s Publishing Policies page for more information on publication policies, including information on author contributorship and responsibilities of authors, author name changes after publication, the use of generative artificial intelligence, funder information and conflict-of-interest disclosures, duplicate publication, data publication and reuse, and preprints.

Visit the Journals Publishing Resource Center for more resources for writing, reviewing, and editing articles for publishing in APA journals.

Editor

William Ming Liu, PhD University of Maryland, College Park, United States

Associate editors

Germán A. Cadenas, PhD Rutgers University New Brunswick, United States

Cirleen DeBlaere, PhD Georgia State University, United States

Lisa Y. Flores, PhD University of Missouri, Columbia, United States

Candice Nicole Hargons, PhD Emory University, Atlanta, United States

Matthew J. Miller, PhD Loyola University Chicago, United States

Brandon L. Velez, PhD Teachers College, Columbia University, United States

Editorial fellows

Nuha Alshabani, PhD Boston University Chobanian & Avedisian School of Medicine, United States

Whitney J. Erby, PhD Teachers College, Columbia University, United States

Kiet D. Huynh, PhD University of North Texas, United States

Vivian L. Tamkin, PhD Santa Clara University, United States

Consulting editors

Roberto L. Abreu, PhD University of Florida, United States

Hector Y. Adames, PsyD The Chicago School of Professional Psychology, United States

Alexis V. Arczynski, PhD University of La Verne, United States

Dana Atzil-Slonim, PhD Bar-Ilan University, Ramat-Gan, Israel

Kim A. Baranowski, PhD, ABPP Icahn School of Medicine, Mount Sinai, United States

Eran Bar-Kalifa, PhD Ben Gurion University of the Negev, Israel

Theodore T. Bartholomew, PhD Scripps College, United States

Samuel T. Beasley, PhD Western Michigan University, United States

Margit I. Berman, PhD, LP Augsburg University, Minneapolis, United States

Klaus E. Cavalhieri, PhD University at Albany, State University of New York, United States

Norah Chapman, PhD Spalding University, United States

Collette Chapman-Hillard, PhD University of Georgia, United States

Na-Yeun Choi, PhD Dankook University, United States

Tsz-yeung Harold Chui, PhD Chinese University of Hong Kong, China

Ayşe Çiftçi, PhD Arizona State University, United States

Noah M. Collins, PhD University of Maryland, College Park, United States

Andres Consoli University of California, Santa Barbara, United States

Marilyn A. Cornish, PhD Auburn University, United States

Maria Teresa Coutinho, PhD Boston University, United States

Alice E. Coyne Case Western Reserve University, United States

Don E. Davis, PhD Georgia State University, United States

Joanna M. Drinane, PhD University of Utah, United States

Melissa M. Ertl, PhD University of Minnesota, United States

Anna Kawennison Fetter, PhD The University of North Carolina at Chapel Hill, United States

Jillian Fish, PhD Macalester College, United States

Keri A. Frantell, PhD University of North Dakota, United States

Kirsten A. Gonzalez, PhD University of Tennessee, Knoxville, United States

Carlton E. Green Green Psychological Services, United States

Joseph H. Hammer, PhD University of Kentucky, United States

Erin E. Hardin, PhD University of Tennessee, Knoxville, United States

Joshua N. Hook, PhD University of North Texas, United States

Evelyn A. Hunter, PhD Auburn University, United States

Neeta Kantamneni, PhD University of Nebraska, Lincoln, United States

Brian T.H. Keum, PhD Boston College, United States

Bryan S. K. Kim, PhD University of Hawai‘i, Hilo, United States

Dennis Martin Kivlighan III, PhD University of Iowa, United States

Debbiesiu Lee, PhD University of Miami, United States

Tyler Lefevor, PhD Utah State University, United States

Robert W. Lent, PhD University of Maryland, United States

Jioni A. Lewis, PhD University of Maryland, College Park, United States

Xu Li, PhD University of Wisconsin, Milwaukee, United States

Yun Lu, PhD Zhejiang University, Hangzhou, China

P. Priscilla Lui, PhD University of Washington, United States

Em Matsuno, PhD Arizona State University, United States

Laurie Lali Dawn McCubbin, PhD University of Kentucky, United States

Caitlin M. Mercier, PhD Illinois State University, United States

Joseph R. Miles, PhD University of Tennessee, Knoxville, United States

Della Mosely WELLS Healing Center, United States

Rachel L. Navarro, PhD University of North Dakota, United States

Viann Nguyen-Feng, PhD, MPH University of Minnesota, Duluth, United States

Rhea L. Owens, PhD University of Minnesota, Duluth, United States

Jill D. Paquin, PhD Chatham University, United States

Mike C. Parent, PhD University of Texas, Austin, United States

Andrés E. Pérez-Rojas, PhD Indiana University, United States

Kristin M. Perrone, PhD Ball State University, United States

Trisha L. Raque, PhD University of Denver, United States

Delida Sanchez, PhD University of Maryland, College Park, United States

Francisco J. Sánchez, PhD Arizona State University, United States

Hung-Bin Sheu, PhD University at Albany, State University of New York, United States

Richard Q. Shin, PhD University of Maryland, College Park, United States

Steven Stone-Sabali, PhD Ohio State University, United States

Han Na Suh, PhD Georgia State University, United States

Karen W. Tao, PhD University of Utah, United States

Alexander K. Tatum, PhD Ball State University, United States

Elliot A. Tebbe, PhD University of Wisconsin-Madison, United States

Femina P. Varghese, PhD University of Central Arkansas, United States

Laurel B. Watson, PhD University of Missouri, Kansas City, United States

Melanie M. Wilcox, PhD, ABPP Augusta University, United States

Joel Wong, PhD Indiana University, Bloomington, United States

Peer review coordinator

Lorie Van Olst American Psychological Association, United States

Abstracting and indexing services providing coverage of Journal of Counseling Psychology®

  • ABI/INFORM Complete
  • ABI/INFORM Global
  • ABI/INFORM Professional Advanced
  • ABI/INFORM Professional Standard
  • ABI/INFORM Research
  • Academic OneFile
  • Academic Search Alumni Edition
  • Academic Search Complete
  • Academic Search Elite
  • Academic Search Index
  • Academic Search Premier
  • Advanced Placement Psychology Collection
  • ASSIA: Applied Social Sciences Index & Abstracts
  • Business Source Alumni Edition
  • Business Source Complete
  • Business Source Corporate
  • Business Source Corporate Plus
  • Business Source Elite
  • Business Source Index
  • Business Source Premier
  • Cabell's Directory of Publishing Opportunities in Psychology
  • CINAHL Complete
  • CINAHL Plus
  • Communication & Mass Media Complete
  • Communication Source
  • Current Abstracts
  • Current Contents: Social & Behavioral Sciences
  • EBSCO MegaFILE
  • Education Abstracts
  • Education Full Text
  • Education Research Complete
  • Education Source
  • Educational Research Abstracts Online
  • Educator's Reference Complete
  • ERIH (European Reference Index for the Humanities and Social Sciences)
  • Expanded Academic ASAP
  • General OneFile
  • Higher Education Abstracts
  • Humanities and Social Sciences Index Retrospective
  • IBZ / IBR (Internationale Bibliographie der Rezensionen Geistes- und Sozialwissenschaftlicher Literatur)
  • InfoTrac Custom
  • Journal Citations Report: Social Sciences Edition
  • Mosby's Nursing Consult
  • NSA Collection
  • OmniFile Full Text Mega
  • Professional Collection
  • Professional ProQuest Central
  • ProQuest Central
  • ProQuest Discovery
  • ProQuest Education Journals
  • ProQuest Platinum Periodicals
  • ProQuest Psychology Journals
  • ProQuest Research Library
  • ProQuest Social Science Journals
  • Psychology Collection
  • Social Sciences Abstracts
  • Social Sciences Citation Index
  • Social Sciences Full Text
  • Social Sciences Index Retrospective
  • Social Work Abstracts
  • Studies on Women and Gender Abstracts
  • TOC Premier
  • Women's Studies International

Special issue of the APA's Journal of Counseling Psychology, Vol. 67, No. 4, July 2020. This special issue seeks to serve as a sourcebook for implementing a given approach in counseling research, in such areas as the assessment of coregulation processes, language processing, physiology, motion synchrony, event-related potentials, hormonal measures, and sociometric signals captured by a badge.

Transparency and Openness Promotion

APA endorses the Transparency and Openness Promotion (TOP) Guidelines, developed by a community working group in conjunction with the Center for Open Science (Nosek et al., 2015). The TOP Guidelines cover eight fundamental aspects of research planning and reporting that journals and authors can follow at three levels of compliance.

  • Level 1: Disclosure—The article must disclose whether or not the materials are available.
  • Level 2: Requirement—The article must share materials when legally and ethically permitted (or disclose the legal and/or ethical restriction when not permitted).
  • Level 3: Verification—A third party must verify that the standard is met.

As of July 1, 2021, empirical research, including meta-analyses, submitted to the  Journal of Counseling Psychology  must meet the “disclosure” level (Level 1) for all eight aspects of research planning and reporting. Authors should include a subsection in the method section titled “Transparency and openness.” This subsection should detail the efforts the authors have made to comply with the TOP Guidelines.

A list of participating journals is also available from APA.

The following list presents the eight fundamental aspects of research planning and reporting, the TOP level required by the  Journal of Counseling Psychology , and a brief description of the journal's policy.

  • Citation: Level 1, Disclosure—All data, program code, and other methods developed by others should be appropriately cited in the text and listed in the references section.
  • Data Transparency: Level 1, Disclosure—Article states whether the raw and/or processed data on which study conclusions are based are available and, if so, where to access them.
  • Analytic Methods (Code) Transparency: Level 1, Disclosure—Article states whether computer code or syntax needed to reproduce analyses in an article is available and, if so, where to access it.
  • Research Materials Transparency: Level 1, Disclosure—Article states whether materials described in the method section are available and, if so, where to access them.
  • Design and Analysis Transparency (Reporting Standards): Level 1, Disclosure—The journal strongly encourages the use of APA Style Journal Article Reporting Standards ([JARS-Quant, JARS-Qual, and/or MARS]).
  • Study Preregistration: Level 1, Disclosure—Article states whether the study design and (if applicable) hypotheses of any of the work reported was preregistered and, if so, where to access it. Authors may submit a masked copy via stable link or supplemental material or may provide a link after acceptance.
  • Analysis Plan Preregistration: Level 1, Disclosure—Article states whether any of the work reported preregistered an analysis plan and, if so, where to access it. Authors may submit a masked copy via stable link or supplemental material or may provide a link after acceptance.
  • Replication: Level 1, Disclosure—The journal publishes replications (see “Other open science initiatives” below).

Other open science initiatives

  • Open Science badges: Not offered
  • Public significance statements: Offered
  • Author contribution statements using CRediT: Not required
  • Registered Reports: Not published
  • Replications: Published

Explore open science at APA.

Journal equity, diversity, and inclusion statement

Journal of Counseling Psychology is committed to improving equity, diversity, and inclusion (EDI) in scientific research, in line with the APA Publishing EDI framework and APA’s trio of 2021 resolutions to address systemic racism in psychology.

Inclusive study designs

  • Diverse samples

Definitions and further details on inclusive study designs are available on the Journals EDI homepage.

Inclusive reporting standards

  • Bias-free language and community-driven language guidelines (required)
  • Author contribution roles using CRediT (required)
  • Reflexivity (recommended)
  • Positionality statements (recommended)
  • Data sharing and data availability statements (recommended)
  • Impact statements (required)
  • Sample justifications (recommended)
  • Constraints on Generality (COG) statements (recommended)
  • Inclusive reference lists (recommended)

More information on this journal’s reporting standards is listed under the submission guidelines tab.

Pathways to authorship and editorship

Editorial fellowships.

Editorial fellowships help early-career psychologists gain firsthand experience in scholarly publishing and editorial leadership roles. This journal offers an editorial fellowship program for early-career psychologists from historically excluded communities.

Reviewer mentorship program

This journal encourages reviewers to submit co-reviews with their students and trainees. The journal likewise offers a formal reviewer mentorship program where graduate students and postdoctoral fellows from historically excluded groups are matched with a senior reviewer to produce an integrated review.

Other EDI offerings

ORCID reviewer recognition.

Open Research and Contributor ID (ORCID) Reviewer Recognition provides a visible and verifiable way for journals to publicly credit reviewers without compromising the confidentiality of the peer-review process. This journal has implemented the ORCID Reviewer Recognition feature in Editorial Manager, meaning that reviewers can be recognized for their contributions to the peer-review process.

Masked peer review

This journal offers masked peer review (neither the authors’ nor the reviewers’ identities are known to the other party). Research has shown that masked peer review can help reduce implicit bias against traditionally female names or early-career scientists with smaller publication records (Budden et al., 2008; Darling, 2015).

Announcements

  • Guidelines for reviewing manuscripts

Editor Spotlight

  • Read an interview with William Ming Liu, PhD
  • “Opening up the scholarly publishing process in counseling psychology: From inception to publication” , a webinar featuring past and present associate editors of Journal of Counseling Psychology , was held November 17.


What are research reports in counseling?


A research report explains the investigation and results of a single research question (or a small set of closely related questions). Research reports follow a familiar format, IMRaD (Introduction, Methods, Results, Discussion), which maps neatly onto an idealized version of the scientific method. The results of a research investigation can also be presented in other forms, such as a technical report, popular report, article, monograph, or oral presentation. A complete APA-style paper reporting experimental research will typically contain Title page, Abstract, Introduction, Method, Results, Discussion, and References sections; longer papers may add a literature review and sections on managerial implications, limitations, and future scope.

What do you mean by research report?

In finance, a research report is a document prepared by an analyst or strategist on the investment research team of a stock brokerage or investment bank; it may focus on a specific stock or industry sector, a currency, commodity, or fixed-income instrument, or on a geographic region or country. More generally, the purpose of a research report is to convey sufficient detail about completed research or project work: it not only persuades readers but informs them of the findings and the purpose of the work. Closely related are research objectives, which describe what a research project intends to accomplish and should guide every step of the research process, including how you collect data, build your argument, and develop your conclusions. Research itself is a dynamic process that can be organized into four stages: exploring, investigating, processing, and creating.

What is research report and its types?

Research reports are recorded data prepared by researchers or statisticians after analyzing information gathered through organized research, typically surveys or qualitative methods; a research report is a reliable source for recounting the details of a completed study. It is the medium through which research work is communicated to relevant people and a good way of preserving that work for future reference. Many research findings go unused simply because they are presented poorly; preparing a research report is not an easy task but an art. Reports may rest on either of the two main categories of research methods: qualitative methods or quantitative methods, which use numbers to measure data and statistical analysis to find connections and meaning. Research methodology, the specific procedures used to identify, select, process, and analyze information about a topic, is described in the methodology section, which allows the reader to critically evaluate a study’s overall validity and reliability. Finally, both informal and formal reports fall into two major categories: informational and analytical.

What is meant by research report?

As noted above, the term has a narrower meaning in finance (an analyst’s report on a stock, sector, currency, commodity, fixed-income instrument, or region) and a broader one in the sciences, engineering, and psychology, where a research report aims to describe a study clearly and concisely so that the reader can easily understand its purpose and results. Whatever the field, good research is replicable, reproducible, and transparent; replicability matters because it allows other researchers to test a study’s findings.

What are the methods of research report?

The methods section should describe what was done to answer the research question, describe how it was done, justify the experimental design, and explain how the results were analyzed; scientific writing is direct and orderly. Psychologists use descriptive, correlational, and experimental research designs to understand behavior, and quantitative research spans four main types: descriptive, correlational, causal-comparative/quasi-experimental, and experimental. The research process itself has five stages: choosing a topic, identifying a problem, formulating a research question, creating a research design, and writing a research proposal.

What is the structure of a research report?

The basic structure of a typical research paper is the sequence of Introduction, Methods, Results, and Discussion (sometimes abbreviated IMRAD), with each section addressing a different objective; a complete APA-style paper adds a Title page, Abstract, and References. The first step in writing is to identify and develop your topic, often the most challenging part of a research assignment, and since it is the very first step, it is vital that it be done correctly. Beyond structure, the main types of research include exploratory, descriptive, explanatory, correlational, and causal research, pursued through two main methodologies: quantitative and qualitative.


What is a Literature Review?

The scholarly conversation.

A literature review provides an overview of previous research on a topic that critically evaluates, classifies, and compares what has already been published on a particular topic. It allows the author to synthesize and place into context the research and scholarly literature relevant to the topic. It helps map the different approaches to a given question and reveals patterns. It forms the foundation for the author’s subsequent research and justifies the significance of the new investigation.

A literature review can be a short introductory section of a research article or a report or policy paper that focuses on recent research. Or, in the case of dissertations, theses, and review articles, it can be an extensive review of all relevant research.

  • The format is usually a bibliographic essay; sources are briefly cited within the body of the essay, with full bibliographic citations at the end.
  • The introduction should define the topic and set the context for the literature review. It will include the author's perspective or point of view on the topic, how they have defined the scope of the topic (including what's not included), and how the review will be organized. It can point out overall trends, conflicts in methodology or conclusions, and gaps in the research.
  • In the body of the review, the author should organize the research into major topics and subtopics. These groupings may be by subject, (e.g., globalization of clothing manufacturing), type of research (e.g., case studies), methodology (e.g., qualitative), genre, chronology, or other common characteristics. Within these groups, the author can then discuss the merits of each article and analyze and compare the importance of each article to similar ones.
  • The conclusion will summarize the main findings, make clear how this review of the literature supports (or not) the research to follow, and may point the direction for further research.
  • The list of references will include full citations for all of the items mentioned in the literature review.

Key Questions for a Literature Review

A literature review should try to answer questions such as

  • Who are the key researchers on this topic?
  • What has been the focus of the research efforts so far and what is the current status?
  • How have certain studies built on prior studies? Where are the connections? Are there new interpretations of the research?
  • Have there been any controversies or debate about the research? Is there consensus? Are there any contradictions?
  • Which areas have been identified as needing further research? Have any pathways been suggested?
  • How will your topic uniquely contribute to this body of knowledge?
  • Which methodologies have researchers used and which appear to be the most productive?
  • What sources of information or data were identified that might be useful to you?
  • How does your particular topic fit into the larger context of what has already been done?
  • How does the research that has already been done help frame your current investigation?

Examples of Literature Reviews

Example of a literature review at the beginning of an article:

Forbes, C. C., Blanchard, C. M., Mummery, W. K., & Courneya, K. S. (2015, March). Prevalence and correlates of strength exercise among breast, prostate, and colorectal cancer survivors. Oncology Nursing Forum, 42(2), 118+. Retrieved from http://go.galegroup.com.sonoma.idm.oclc.org/ps/i.do?p=HRCA&sw=w&u=sonomacsu&v=2.1&it=r&id=GALE%7CA422059606&asid=27e45873fddc413ac1bebbc129f7649c

Example of a comprehensive review of the literature:

Wilson, J. L. (2016). An exploration of bullying behaviours in nursing: A review of the literature. British Journal of Nursing, 25(6), 303-306.

For additional examples, see:

Galvan, J., & Galvan, M. (2017). Writing literature reviews: A guide for students of the social and behavioral sciences (7th ed.). [Electronic book]

Pan, M., & Lopez, M. (2008). Preparing literature reviews: Qualitative and quantitative approaches (3rd ed.). Glendale, CA: Pyrczak Pub. [ Q180.55.E9 P36 2008]

Useful Links

  • Write a Literature Review (UCSC)
  • Literature Reviews (Purdue)
  • Literature Reviews: overview (UNC)
  • Review of Literature (UW-Madison)

Evidence Matrix for Literature Reviews

The Evidence Matrix can help you organize your research before writing your lit review. Use it to identify patterns and commonalities in the articles you have found: similar methodologies? Common theoretical frameworks? It helps you make sure that all your major concepts are covered, and it helps you see how your research fits into the context of the overall topic.

Special thanks to Dr. Cindy Stearns, SSU Sociology Dept., for permission to use this Evidence Matrix as an example.

Writing and Record Keeping in Counseling

Linda Seligman

The growth and professionalism of the counseling field, as well as the increasing emphasis on accountability, have led to an expanding need for counselors to document and substantiate the value of their work. A survey of administrators of mental health agencies indicated that “ability to write clearly, concisely, and in a professional style” was one of the three most important skills they sought in masters-level counselors (Cook, Berman, Genco, Repka, & Shrider, 1986, p. 150).



Author information

Linda Seligman, Center for Counseling and Consultation, George Mason University, Fairfax, Virginia, USA


© 1996 Plenum Press, New York

About this chapter

Seligman, L. (1996). Writing and Record Keeping in Counseling. In: Diagnosis and Treatment Planning in Counseling. Springer, Boston, MA. https://doi.org/10.1007/978-1-4684-0013-7_10


Print ISBN: 978-0-306-45352-6

Online ISBN: 978-1-4684-0013-7




How should we evaluate research on counselling and the treatment of depression? A case study on how the National Institute for Health and Care Excellence's draft 2018 guideline for depression considered what counts as best evidence

Michael Barkham

1 Centre for Psychological Services Research, University of Sheffield, Sheffield, UK

Naomi P. Moller

2 Open University, Milton Keynes, UK

3 British Association for Counselling and Psychotherapy, Lutterworth, UK

Joanne Pybis

Health guidelines are developed to improve patient care by ensuring the most recent and ‘best available evidence’ is used to guide treatment recommendations. The National Institute for Health and Care Excellence's (NICE's) guideline development methodology acknowledges that evidence needed to answer one question (treatment efficacy) may be different from evidence needed to answer another (cost‐effectiveness, treatment acceptability to patients). This review uses counselling in the treatment of depression as a case study, and interrogates the constructs of ‘best’ evidence and ‘best’ guideline methodologies.

The review comprises six sections: (i) implications of diverse definitions of counselling in research; (ii) research findings from meta‐analyses and randomised controlled trials (RCTs); (iii) limitations to trials‐based evidence; (iv) findings from large routine outcome datasets; (v) the inclusion of qualitative research that emphasises service‐user voices; and (vi) conclusions and recommendations.

Research from the meta‐analyses and RCTs contained in the draft 2018 NICE Guideline is limited but positive in relation to the effectiveness of counselling in the treatment of depression. The weight of evidence suggests little, if any, advantage to cognitive behaviour therapy (CBT) over counselling once risk of bias and researcher allegiance are taken into account. A growing body of evidence from large NHS data sets also indicates that, for depression, counselling is as effective as CBT and is cost‐effective when delivered in NHS settings.

Specifications in NICE's updated guideline procedures allow for data other than RCTs and meta‐analyses to be included. Accordingly, there is a need to include large, standardised data sets collected in routine practice, as well as the voice of patients via high‐quality qualitative research.

Introduction

English health guidelines are created and regularly updated with the aim of improving patient care by ensuring that the most recent and ‘best available evidence’ is used to guide treatment (National Institute for Health and Care Excellence Guidance, 2017a). As stated on its website: ‘National Institute for Health and Care Excellence (NICE) guidelines are evidence‐based recommendations for health and care in England’ (NICE Guidelines, 2017b). Although some NICE guidance is also adopted by Wales, Scotland and Northern Ireland, a separate UK‐based body equivalent to NICE exists, namely the Scottish Intercollegiate Guidelines Network (2017). Mental health treatment guidelines are also developed by other international organisations, such as the World Health Organization (2017), by professional/scientific bodies such as the American Psychiatric Association (2017), and by European and other countries (Vlayen, Aertgeerts, Hannes, Sermeus & Ramaekers, 2005).

This article focuses on: (i) NICE guidelines, because of the organisation's impact in shaping mental health care, not only in the UK but internationally (Hernandez‐Villafuerte, Garau & Devlin, 2014); (ii) depression, as NICE is currently updating its depression guideline (NICE, 2017d); and (iii) counselling as the intervention, as different guidelines have drawn different conclusions (Moriana, Gálvez‐Lara & Corpas, 2017). Specifically, we focus on the selection and use of evidence. In terms of overall methodology, NICE's procedural manual states: ‘Guidance is based on the best available evidence of what works, and what it costs’ (NICE, 2014/2017, p. 14). Although the procedural manual states that randomised controlled trials (RCTs) are often the most appropriate design, it also states: ‘However, other study designs (including observational, experimental or qualitative) may also be used to assess effectiveness, or aspects of effectiveness’ (NICE, 2014/2017, p. 15). Accordingly, we assess the extent to which NICE has adhered to its own methods manual in drawing up the draft guideline. While NICE's depression guideline is used as the example, the arguments in this article are intended to have broad relevance for any organisation developing guidelines across mental health treatments.

The new revision of the NICE Guideline for Depression in Adults: Recognition and Management is scheduled to be published in January 2018 and was available as a consultation document at the time of writing (NICE, 2017d). The previous 2009 NICE Guideline stated: ‘For people with depression who decline an antidepressant, CBT [cognitive behaviour therapy], IPT [interpersonal psychotherapy], behavioural activation and behavioural couples therapy, consider: counselling for people with persistent subthreshold depressive symptoms or mild to moderate depression’ (NICE, 2009, p. 23). Counselling was included in the 2009 Guidelines but only for those who declined other recommended treatments; the guidelines were accordingly critiqued on the basis of limiting patient choice (British Association for Counselling and Psychotherapy, 2009). In addition, practitioners offering counselling to adults with depression were recommended to: ‘Discuss with the person the uncertainty of the effectiveness of counselling and psychodynamic psychotherapy in treating depression’ (p. 24). This recommendation was criticised because research suggests that both patient hope and a good therapeutic relationship are important in creating good patient outcomes (Barber, Connolly, Crits‐Christoph, Gladis & Siqueland, 2000). Accordingly, had practitioners implemented this guidance, the recommendation would likely have negatively affected early engagement in counselling as well as counselling outcomes.

The consultation document for the 2018 proposed guideline states: ‘Consider counselling if a person with less severe depression would like help for significant psychosocial, relationship or employment problems and has had group CBT, exercise or facilitated self‐help, antidepressant medication, individual CBT or BA for a previous episode of depression, but this did not work well for them, or does not want group CBT, exercise or facilitated self‐help, antidepressant medication, individual CBT or BA’ (NICE, 2017d; Recommendation 64, p. 252). It also recommends that the counselling ‘is based on a model developed specifically for depression, consists of up to 16 individual sessions each lasting up to an hour, and takes place over 12–16 weeks, including follow‐up’ (NICE, 2017d; Recommendation 65, p. 252). Importantly, the ‘uncertainty’ directive has been removed. Hence, the proposed guideline is arguably an improvement on its predecessor, as it moves towards a principle of matching counselling with specific issues (i.e., psychosocial, relationship and employment problems), together with a crucial note about the specificity of the counselling model to be adopted.

Historically, the NICE Guideline for Depression has been highly influential in shaping healthcare provision for those experiencing depression. As described by Clark ( 2011 ), the NICE recommendations for depression from 2004 onwards contributed to the development and roll‐out of the Improving Access to Psychological Therapies (IAPT) programme, which in England now provides the bulk of treatment for depression in primary care (Gyani, Pumphrey, Parker, Shafran & Rose, 2012 ). One example of the impact of the revised 2009 Guideline appears to have been the cutting of counselling jobs in the NHS, with IAPT workforce census data suggesting a 35% decline in the number of qualified counsellors working as high‐intensity therapists between 2012 and 2015, in a period where the total IAPT workforce grew by almost 18% (IAPT Programme, 2013 ; NHS England & Health Education England, 2016 ). Workforce shifts that apparently follow revised NICE guidelines (e.g., counselling not being recommended as a first‐line treatment for depression) underline the importance of scrutinising guideline recommendations since a core assumption is that using ‘best’ evidence and guideline methodologies will lead to NICE recommendations that improve patient care. An implicit question in the remainder of this article is whether the positioning of counselling as a second‐tier treatment for mild‐to‐moderate depression (only) through NICE recommendations is likely to lead to improved outcomes for clients with depression.

Defining counselling as a psychological intervention

The NICE depression guidelines (2009, 2017d) have included recommendations for ‘counselling’, but the definition of ‘counselling’ is unclear. The British Association for Counselling and Psychotherapy (BACP) adopts a generic definition for both counselling and psychotherapy as umbrella terms for ‘a range of talking therapies’ (BACP, 2017 ). Equivalent professional organisations, such as the American Counseling Association (ACA) and the European Association for Counselling (EAC) define counselling in terms of a professional relationship that seeks to aid patients (ACA, 2017 ; EAC, 2017 ). What these definitions have in common is that they are nonspecific: counselling is a broad family of interventions that includes subtypes of counselling such as person‐centred therapy (PCT) or cognitive behaviour therapy (CBT). However – and problematically – the 2009 NICE Guideline for Depression directly compared ‘counselling’ with subtypes of counselling.

The 2009 NICE Guideline for Depression did not specify a definition of counselling; however, various definitions for counselling are provided in the empirical literature. For example, King, Marston and Bower ( 2014 ) reported on a reanalysis of the Health Technology Assessment‐funded trial (Ward et al., 2000 ), comprising a head‐to‐head RCT comparing ‘nondirective counselling’ and cognitive behaviour therapy (CBT), and defined the counselling used in their study as ‘a nondirective, inter‐personal approach’ (p. 1836) derived from the work of Carl Rogers. In this context, the therapy ‘counselling’ has clear theoretical and empirical roots and is a synonym for a type of talking therapy.

In contrast, a 2012 meta‐analytic study by Cuijpers et al. examined the efficacy of ‘nondirective supportive therapy’ (NDST) – which they stated is ‘commonly described in the literature as counselling’ (p. 281). They defined NDST as an approach that utilises the shared attributes (or common factors) of all talking therapies ‘without (utilizing) specific psychological techniques…’ (p. 281), which characterise particular types of therapy. Cuijpers et al. ( 2012 ) point out that many RCTs that include counselling do so as a nonspecific control group and suggest researchers appear to treat counselling as not being a bona fide active treatment. In this context ‘counselling’ is neither a category nor an example of a category, but a shared nonspecific attribute of psychological therapies in general.

The outcome of the 2009 NICE guidance recommendations spurred the development of a model of counselling for the treatment of depression designed to be effective as a high‐intensity intervention within IAPT that took the form of a person‐centred experiential therapy named Counselling for Depression (CfD; Sanders & Hill, 2014 ). The aim was to develop a bona fide psychological therapy using an established methodology that involved defining a range of basic, generic, specific and meta‐competencies for this model of therapy (Roth, Hill & Pilling, 2009 ). The CfD (person‐centred experiential) model, which is now available to IAPT patients (NHS England, 2017 ), also meets the recommendations in the 2018 draft guidelines for a model of counselling developed for depression.

The reviewed definitions suggest there are potentially two distinct forms of counselling: a nonspecific counselling that utilises generic and basic competences common to all forms of therapy, and a model‐specific form of counselling, such as person‐centred experiential counselling, which includes CfD. This distinction between generic counselling and a bona fide/active intervention potentially implies critical differences in the level of training and competencies of a practitioner (comparable to the differences between low‐ and high‐intensity treatment in IAPT) and in the specificity of the model of intervention used. The 2018 proposed guideline does not draw such distinctions, however; the only recommendation in the draft guidelines is that the counselling intervention should be one developed specifically for depression (yet CfD is not named). This suggests that guideline developers need to make a concerted effort to use definitions that specify the theoretical approach and, potentially, the level of professional training or competencies required.

The current evidence for the clinical efficacy and effectiveness of counselling in the treatment of depression

NICE guidelines for depression draw on two main classes of data to arrive at clinical recommendations, namely meta‐analyses and RCTs. NICE's methodological procedures state: ‘NICE prefers data from head‐to‐head RCTs to compare the effectiveness of interventions’ (NICE, 2014/2017, p. 103). Further, the procedures require the detailing of the methods and results of individual trials. If direct evidence from treatment comparisons is not available, then indirect comparisons can be made using network meta‐analysis (see Mills, Thorlund & Ioannidis, 2013). This procedure, which combines direct and indirect treatment comparisons, focuses on classes of interventions (i.e., broader headings of approaches rather than specific therapy brands) to arrive at recommendations when comparing multiple interventions. The interventions are judged against an appropriate comparator, that is, a common standard. The draft 2018 Guideline uses a pill placebo condition as the appropriate comparator. The Guideline also considers the cost‐effectiveness of interventions. In this section, we provide an overview of the current status of evidence regarding counselling as derived from meta‐analyses and RCTs.
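The logic of an indirect comparison, the basic operation that a network meta‐analysis generalises, can be sketched numerically. This is an illustrative sketch only: the function name `indirect_comparison` and all effect sizes below are hypothetical, not values from the NICE analysis. If counselling and CBT have each been compared against a common comparator (e.g., pill placebo), their relative effect can be estimated without a head‐to‐head trial, at the cost of a larger standard error.

```python
# Bucher-style indirect comparison: estimate A vs B from A-vs-C and
# B-vs-C trials. All numbers below are hypothetical, for illustration.
from math import sqrt

def indirect_comparison(d_ac, se_ac, d_bc, se_bc):
    """Effect of A vs B via common comparator C, with its standard error."""
    d_ab = d_ac - d_bc
    # Variances of independent estimates add, so the indirect SE is
    # larger than either direct SE -- one reason head-to-head RCTs
    # are preferred when they exist.
    se_ab = sqrt(se_ac ** 2 + se_bc ** 2)
    return d_ab, se_ab

# Hypothetical: counselling vs placebo d = -0.30 (SE 0.10);
#               CBT vs placebo         d = -0.35 (SE 0.08)
d, se = indirect_comparison(-0.30, 0.10, -0.35, 0.08)
print(f"counselling vs CBT (indirect): d = {d:.2f}, SE = {se:.2f}")
# -> d = 0.05, SE = 0.13
```

Note how the indirect standard error (0.13) exceeds both direct standard errors, which is why commentators caution that a network meta‐analysis is not a substitute for a well‐conducted head‐to‐head trial.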

Meta‐analyses of counselling in the treatment of depression

In terms of meta‐analyses, the aim is to combine data from multiple studies and to statistically synthesise the results to create conclusions that are more robust. There are three meta‐analyses of direct relevance.

First, Cape, Whittington, Buszewicz, Wallace and Underwood (2010) carried out a meta‐analysis and meta‐regression of 34 studies focusing on brief psychological interventions for anxiety and depression, involving 3962 patients. Most interventions were brief cognitive behaviour therapy (CBT; n = 13), counselling (n = 8) or problem solving therapy (PST; n = 12). Results showed effectiveness for all three types of therapy: studies of CBT for depression (d = −.33, 95% CI: −.60 to −.06) and studies of CBT for mixed anxiety and depression (d = −.26, 95% CI: −.44 to −.08); counselling in the treatment of depression alone as well as mixed anxiety and depression (d = −.32, 95% CI: −.52 to −.11); and PST for depression and mixed anxiety and depression (d = −.21, 95% CI: −.37 to −.05). Controlling for diagnosis, meta‐regression found no difference between CBT, counselling and PST. The authors concluded that brief CBT, counselling and PST are all effective treatments in primary care, but that effect sizes are low compared to longer length treatments. Nonetheless, it should be pointed out that for the analysis of the four studies of counselling for the treatment of depression only, the results were not statistically significant. However, four studies are not sufficient to yield reliable results.

Second, Cuijpers et al. (2012) found that studies comparing NDST with CBT yielded a small and nonsignificant difference between the two. The authors commented that NDST has been treated as a proxy for counselling, although it specifically excludes active elements that may be present in bona fide counselling interventions. However, they found that studies with researcher allegiance in favour of the alternative psychotherapy resulted in a considerably larger effect size than studies without researcher allegiance. Moreover, in studies without an indication of researcher allegiance, the difference between NDST and other therapies was virtually zero. The authors argued that such results suggest that NDST is effective and deserves more respect from the research community.

Third, the most recent relevant study, by Barth et al. (2013), adopted a network meta‐analysis – the same method used by the NICE Guideline Development Group – using 198 trials comparing seven forms of psychotherapeutic intervention, one of which was ‘supportive counselling’. The analysis found significant effects for supportive counselling compared with waitlist, and that the evidence base for supportive counselling was broad. However, when the analysis focused only on the network of large trials, significant effects were no longer found for four of the interventions, including supportive counselling. Barth et al. (2013) themselves invoked the results of the Cuijpers et al. (2012) meta‐analysis that found no difference between NDST and other treatments. They stated it was ‘unjustified’ to dismiss supportive counselling as a suboptimal treatment because, although the evidence for this intervention was less strong, the size of the differences between the interventions studied was small. They concluded that different psychotherapeutic interventions for depression have comparable, moderate‐to‐large effects.

In summary, when studies with low researcher allegiance against counselling are considered together with evidence from bona fide counselling interventions, the meta‐analytic studies comparing counselling with CBT for depression suggest either broad equivalence of patient outcomes or, where differences do exist, that those differences are small.
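The core computation behind the meta‐analyses summarised above is the pooling of study‐level effect sizes weighted by their precision. The following is a minimal fixed‐effect sketch; the function name `pool_fixed_effect` and every effect size and standard error in it are hypothetical illustrations, not data from Cape et al. (2010) or Cuijpers et al. (2012).

```python
# Inverse-variance (fixed-effect) pooling of standardized mean
# differences (SMDs). Study inputs below are ILLUSTRATIVE only.
from math import sqrt
from statistics import NormalDist

def pool_fixed_effect(effects, ses):
    """Return the pooled effect and its 95% confidence interval."""
    weights = [1 / se ** 2 for se in ses]          # precision weights
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    se_pooled = sqrt(1 / sum(weights))
    z = NormalDist().inv_cdf(0.975)                # ~1.96
    return pooled, pooled - z * se_pooled, pooled + z * se_pooled

# Four hypothetical counselling-vs-control studies (negative = improvement)
effects = [-0.40, -0.25, -0.32, -0.18]
ses = [0.15, 0.12, 0.20, 0.10]
d, lo, hi = pool_fixed_effect(effects, ses)
print(f"pooled d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
# -> pooled d = -0.26, 95% CI [-0.38, -0.13]
```

A random‐effects model, which the cited meta‐analyses would more plausibly use given between‐study heterogeneity, adds a between‐study variance term to each weight but follows the same weighted‐average logic.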

RCTs of counselling in the treatment of depression

As a tradition, counselling in the UK is often associated with Humanistic/Experiential therapies, and there are a few RCTs which report evidence for the efficacy of these therapies with depressed patients (Goldman, Greenberg & Angus, 2006), including one that compared process‐experiential therapy (now referred to as emotion‐focused therapy) with CBT and found comparable outcomes (Watson, Gordon, Stermac, Kalogerakos & Steckley, 2003). However, only one recent report directly compared counselling (defined as nondirective person‐centred counselling) with CBT in the treatment of depression. The original study reported comparisons between nondirective counselling and CBT for mixed anxiety and depression and found no significant difference in outcomes for the two therapies (Ward et al., 2000). A subsequent reanalysis of the subsample of patients meeting a diagnosis of depression only found similar results, with both therapies being equally effective and both being superior to usual General Practice care at 4 months but not at 12 months (King et al., 2014).

The findings from this study are important because of the lack of RCT research that might provide direct head‐to‐head trial evidence for the efficacy of counselling. The 2009 NICE Guideline for Depression development process identified six relevant studies for consideration. One was excluded due to its mixed‐diagnosis sample (Ward et al., 2000), although, as stated, a subanalysis focusing on patients reporting depression only was considered (and subsequently published as King et al., 2014). Data from five other trials were also used (Bedi et al., 2000; Goldman et al., 2006; Greenberg & Watson, 1998; Simpson, Corney, Fitzgerald & Beecham, 2000; Watson et al., 2003). However, these trials were all either underpowered in terms of patient numbers, drew their samples from the mild‐to‐moderate range of depression only (some including subthreshold patients), or compared outcomes for similar (Humanistic/Experiential) therapies. The 2009 guideline recommendation was that counselling should not be considered as a first‐line intervention, as it had more limited evidence, and should only be considered for patients experiencing subthreshold, mild or moderate depression who declined the other treatments available. As stated, the guideline also added the qualification about the uncertainty of the evidence for counselling, and suggested patients should be advised on this matter.

In summary, while there is minimal recent RCT evidence comparing counselling as a bona fide intervention with CBT, the evidence that does exist supports the general efficacy of counselling. However, apart from the Ward/King reports, RCT studies are generally small‐scale and lack a standard comparator such as CBT. The lack of new data may explain why the recommendations for counselling in the 2009 published and 2018 draft guidelines are broadly similar. However, unlike the 2009 Guideline, the draft 2018 Guideline is based on network meta‐analyses. As some commentators have noted: ‘Nonetheless, a network meta‐analysis is not a substitute for a well conducted randomized controlled trial’ (Kanters et al., 2016, p. 783). More immediately, perhaps, there needs to be a debate about whether pill placebo is an appropriate comparator for decision‐making. To use a nonclinically viable intervention as the comparator – something a patient experiencing depression would never be offered – does not appear to be the most useful benchmark for informing decisions about differing interventions (see Dias, 2013).

Yet, beyond meta‐analyses and RCTs, other potentially valuable sources of evidence exist that NICE defines as within the scope of evidence that could be considered but that, unfortunately, have not been considered in the 2018 draft recommendations. In the next section, we argue that there has been an overreliance on the RCT design, before presenting a case for including relevant non‐RCT data.

The limitations of currently considered evidence in guideline development

An overreliance on RCTs

Within the counselling and psychotherapy outcomes literature, there has been a long‐standing debate regarding what counts as evidence (Kazdin, 2008). Evidence from RCTs has traditionally been favoured due to specific features that control for systematic biases, leading them to be judged as providing the most stringent form of evidence. In short, randomisation protects against any systematic biases in the assignment of patients to treatments. The component of randomisation is probably the hallmark most often cited as underpinning the superiority of trials data in the field of the psychological therapies. However, the other central element of RCTs – double‐blinding of participants – can only be utilised in drug trials, where the content of the drug can be hidden from patients and from the professional providing the medication. Hence, while trial designs in the psychological therapies are not the strongest form that the RCT design allows, the RCT has long been held as the design that yields the most reliable and valid findings (Wessely, 2007).

While the strengths of RCT designs are well accepted, no research method is immune from criticism, and one of the abiding criticisms of RCTs concerns their lack of generalisability (Kennedy‐Martin, Curtis, Faries, Robinson & Johnston, 2015). While statistical work is under way to develop procedures that attempt to address this issue (Stuart, Bradshaw & Leaf, 2015), by design RCTs involve the careful screening of patients to ensure that all trial participants fully meet diagnostic criteria for the presenting condition under study. Typically, this involves screening out patients presenting with any comorbidities, which leads to the criticism that RCT participants are atypical of patients in actual practice since, for example, depression is highly comorbid with anxiety (Kaufman & Charney, 2000). In addition, by their very nature RCTs draw on a specific subgroup of the population of patients, namely those who are willing to be trial participants. A major reason patients decline to participate in trials is that they do not wish to be research subjects (Barnes et al., 2013). There has also been long‐term concern about the lack or underrepresentation of minorities in research studies (Hussain‐Gambles, Atkin & Leese, 2004; Stronks, Wieringa & Hardon, 2013). Hence, while a well‐conducted RCT can establish that the intention to offer treatment X (from an intent‐to‐treat analysis) or receipt of treatment X (from a per‐protocol analysis) is better than treatment Y in a specific setting, it will not address the question a commissioner asks, namely: will it work for us? (Cartwright & Munro, 2010).

Jadad and Enkin (2007), the authors of the standard guide to designing RCTs, state: ‘… randomized trials are not divine revelations, they are human constructs, and like all human constructs, are fallible. They are valuable, useful tools that should be used wisely and well’ (p. 44). Indeed, Jadad and Enkin list over 50 specific biases that are possible when carrying out a trial and go on to provide a strong warning that unless their weaknesses are acknowledged, there is a ‘risk of fundamentalism and intolerance of criticism, or alternative views (that) can discourage innovation’ (p. 45).

Despite such criticisms, trials have become the dominant source for informing clinical guidelines. Yet, as the previous Chairman of NICE, Sir Mike Rawlins, stated: ‘Awarding such prominence to the results of RCTs, however, is unreasonable’ (2008, p. 2159). Rawlins further argued, in relation to the NICE hierarchy of evidence that privileges trials data, that ‘Hierarchies of evidence should be replaced by accepting a diversity of approaches’ (p. 2159). And indeed, the word hierarchy does not appear at all in the NICE methods manual (NICE, 2014/2017). Rawlins’ argument was not to abandon RCTs in favour of observational studies; rather, he sought for researchers to improve their methods and for decision makers to avoid adopting entrenched positions about the nature of evidence. However, given the dominance of RCT evidence and the absence of relevant and available observational data in the draft 2018 guidelines, it would appear that Rawlins’ call has not been heeded.

Considering statistical power and nonindependence of patients in RCTs

A separate but major issue concerning trials, as identified earlier, is the extent to which they are appropriately powered to detect any hypothesised differences. To have confidence in the findings from RCTs that test the superiority, noninferiority or equivalence of one treatment condition against another, studies must have the required statistical power (sufficient numbers of patients in the trial) to detect such a difference if one exists. The standard criterion that defines sufficient power for a superiority trial requires that a study will have at least an 80% chance of detecting a difference at p < .05 if one exists.

Cuijpers (2016) reviewed the statistical power needed both for individual RCTs and for meta‐analytic studies focused on adult depression. His analysis should be considered alongside the three classes of between‐group effect sizes traditionally postulated by Cohen (1992): small (d = .2), medium (d = .5), and large (d = .8). He identified that a sample size of 90 trial patients (i.e., 45 patients per arm) was required to find a differential effect size of d = .6 (i.e., a medium effect size). Having established in an earlier article that an effect size of d = .24 could be considered as a ‘minimally important difference’ from the patient's perspective (Cuijpers, Turner, Koole, van Dijke & Smit, 2014), he calculated that for a trial to determine such a minimally important difference between two active treatments for depression would require 548 patients – that is, 274 patients in each arm of the trial.

Yet in Cuijpers’ (2016) analysis, the mean number of patients included in RCT comparisons between CBT and another psychotherapy for depression was 52, with a range from 13 to 178. The effect size that can be detected with the average trial of 52 patients is d = .79, similar to the effect size comparing CBT with untreated control groups (i.e., d = .71). For nondirective counselling, the analysis found that even the largest study had sufficient power to detect a differential effect size of only d = .34. The largest comparative trial found in three comprehensive meta‐analyses of major types of psychotherapy comprised 221 patients – about 40% of the 548 patients needed to detect a clinically relevant effect size of d = .24. Taking these statistics together, it is uncertain whether there can be sufficient confidence in the results of RCTs for adult depression conducted to date that compare CBT with another therapy, because they likely lack sufficient statistical power (Cuijpers, 2016).
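The sample-size figures above can be approximated with the standard normal-approximation formula for a two-arm comparison, n per arm = 2(z₁₋α/₂ + z₁₋β)²/d². The sketch below uses only this approximation; it reproduces figures close to those Cuijpers cites (small discrepancies arise because exact calculations use the t distribution rather than the normal).

```python
from math import ceil, sqrt
from statistics import NormalDist

# Standard superiority-trial criterion: two-sided alpha = .05, power = .80
z_alpha = NormalDist().inv_cdf(1 - 0.05 / 2)  # approx. 1.96
z_beta = NormalDist().inv_cdf(0.80)           # approx. 0.84

def n_per_arm(d):
    """Patients needed per arm to detect effect size d (normal approximation)."""
    return ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

def detectable_d(n_arm):
    """Smallest effect size detectable with n_arm patients per arm."""
    return (z_alpha + z_beta) * sqrt(2 / n_arm)

print(n_per_arm(0.24))             # 273 per arm, close to the 274 cited
print(round(detectable_d(26), 2))  # average 52-patient trial: d of about 0.78
```

Doubling the per-arm figure for d = .24 gives roughly the 548 total patients quoted in the text, and the detectable effect for a 52-patient trial lands near the d = .79 reported.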

Meta‐analyses are, like single RCTs, subject to considerations of power. For meta‐analyses of RCTs focused on treatment of depression, Cuijpers (2016) suggests that for CBT (based on a mean of 52 patients per study), 18 trials would be needed to detect a significant effect of d = .24 with a power of .8, or 24 trials with a power of .9. According to his analysis, the actual number of trials was 46, which was sufficient to detect a clinically relevant effect. However, he concluded that only 13 of these trials had a low risk of bias. This is important, as ‘bias’ is an agreed index of factors that reduce confidence in the results of RCTs. For example, a potential source of bias is the degree to which assessors or data analysts have prior knowledge of the specific intervention any individual study participant received. Hence, meta‐analyses are also vulnerable to low power once only studies with a low risk of bias are considered.

For nondirective supportive counselling (based on a slightly higher mean of 59 patients per trial), 16 trials would be needed to detect an effect of d = .24 with a power of .8, or 21 trials with a power of .9. The 32 trials comparing counselling with other therapies therefore had sufficient power to detect a clinically relevant effect. However, only 14 of these trials had a low risk of bias, yielding the same conclusion as for CBT: once only trials with a low risk of bias are considered, there were not enough to detect such an effect.

In addition to issues of bias and low power, the statistical analysis applied to trial data assumes that the data – that is, the patients – are independent of each other. However, patients are not independent because they are nested within therapists: outcomes for patients seen by the same therapist will be correlated with one another and will differ from the outcomes of patients seen by other therapists. This variability between the outcomes of different therapists is known as therapist effects (Barkham, Lutz, Lambert & Saxon, 2017). Failure to take account of therapist effects results in this variability being attributed to the treatment, thereby inflating the apparent treatment effect (or deflating it if the therapists are not effective).
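One common way to quantify the cost of this nonindependence is the design effect, deff = 1 + (m − 1) × ICC, where m is the average caseload per therapist and ICC is the intraclass correlation (the share of outcome variance attributable to therapists). The figures below are purely illustrative assumptions, not values from the article:

```python
# Design effect for patients clustered within therapists.
# m = average caseload per therapist; icc = intraclass correlation.
# The caseload and ICC values used here are illustrative assumptions.

def design_effect(m, icc):
    return 1 + (m - 1) * icc

def effective_n(n, m, icc):
    """Nominal sample size deflated for therapist clustering."""
    return n / design_effect(m, icc)

# e.g. 520 patients seen by therapists with caseloads of 20, ICC = .05:
print(design_effect(20, 0.05))            # 1.95
print(round(effective_n(520, 20, 0.05)))  # about 267 'independent' patients
```

Even a modest ICC thus roughly halves the effective sample size in this example, which is why ignoring therapist effects can make an underpowered comparison look adequately powered.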

In summary, despite numerous comparative trials having been conducted, it is unclear from these data whether one therapy for adult depression is more effective than another to an extent that is clinically relevant. Trials are underpowered and require much greater statistical power and less bias to determine differential effectiveness. In the light of this position, we now consider arguments for including very large data sets from routine practice.

Incorporating very large routine practice‐based data sets in guideline development for depression

As stated earlier, the NICE methods manual states that while RCTs may often be the most appropriate design, ‘other study designs (including observational, experimental or qualitative) may also be used to assess effectiveness, or aspects of effectiveness’ (NICE, 2014/2017, p. 15). And in terms of the development work in network meta‐analysis, the aim is to move towards ‘the inclusion of studies of various designs, including observational studies, within one analysis’ (Kanters et al., 2016, p. 783). Accordingly, there appears to be little reason, if any, for NICE not to consider high‐quality and relevant observational data.

One key development over the past decade or more has been the growth in the availability of very large data sets. For the psychological therapies, this is best exemplified by the implementation of the IAPT programme in England (London School of Economics and Political Science, 2006). The IAPT programme comprises a stepped care approach in which patients are initially referred for low‐intensity interventions, such as psychoeducational interventions delivered by psychological wellbeing practitioners (PWPs). Patients who do not benefit are ‘stepped up’ to high‐intensity interventions comprising CBT and several non‐CBT therapies, including CfD (person‐centred experiential therapy), a standardised model of intervention focused on depression with defined standards of training and supervision. Some patients, based on their presenting issues, are assigned directly to high‐intensity interventions. The IAPT programme, which was piloted in 2006 and independently evaluated (Parry et al., 2011), has been rolled out nationally; it has focused largely on patients experiencing depression and anxiety but is being expanded to other patient groups.

A key feature of the IAPT programme is the administration of a common set of outcome measures – a minimum data set (MDS) – at each attended session. The MDS comprises the following: the Patient Health Questionnaire‐9 (PHQ‐9; Kroenke, Spitzer & Williams, 2001), which acts as a proxy measure for depression; the General Anxiety Disorder‐7 (GAD‐7; Spitzer, Kroenke, Williams & Löwe, 2006); and the Work and Social Adjustment Scale (WSAS; Mundt, Marks, Shear & Greist, 2002). The per‐session administration of the PHQ‐9, GAD‐7 and WSAS in IAPT has yielded standardised data sets from routine practice of unprecedented size. In 2015–2016 (the last year for which data are currently available), almost a million people entered IAPT treatment, with over half a million completing a course of treatment (NHS Digital, 2016).

The number of IAPT patients for whom systematic data have been collected makes this potentially one of the largest standardised data sets on the psychological therapies in the world. Kazdin (2008), observing that data from practice settings generally go unused, stated: ‘we are letting knowledge from practice drip through the holes of a colander’ (p. 155). Indeed, the collection and use of such large‐scale routinely collected standardised data are a hallmark of the research paradigm termed practice‐based evidence (Barkham & Margison, 2007; Barkham, Stiles, Lambert & Mellor‐Clark, 2010). While the privileging of trials data ahead of observational data may have been appropriate when the latter comprised small‐scale and unsystematic studies, this is no longer the case. In the same way that narrative reviews have developed a clear and systematic methodological underpinning to yield systematic reviews, the methods of collection and analysis of ‘routine data’ have developed a level of sophistication that can arguably no longer be dismissed (or labelled) as simply observational data.

Consistent with this practice‐based paradigm, the proposed 2018 Guideline states: ‘For all interventions for people with depression: use sessional outcome measures; review how well the treatment is working with the person; and monitor and evaluate treatment adherence’ (NICE, 2017d; Recommendation 37, p. 248). In addition, healthcare professionals delivering interventions for people with depression should ‘receive regular high‐quality supervision; and have their competence monitored and evaluated, for example, by using video and audio tapes, and external audit’ (NICE, 2017d; Recommendation 38, p. 248). These recommendations provide the underpinning not only for enhancing the quality of clinical practice but also for ensuring the collection of high‐quality standardised data that would complement trials‐based data. However, despite the potential size and quality of the IAPT data set, the data are not currently considered in NICE guideline development. Given that the IAPT initiative was shaped by successive iterations of the NICE guidelines for depression, the IAPT data themselves may contribute to a better linkage between practice in routine settings, the yield from RCTs, and guideline development. They also enable practitioners in routine practice to contribute directly, via their standardised data, to informing the very guidelines that they will have to implement.

The IAPT data set: effectiveness of counselling in the treatment of depression in the NHS

The potential value of the IAPT data set in contributing to the evidence base on effective treatment for depression in adults is illustrated by examining reports and studies derived from IAPT data. Since 2013–14, IAPT has published annual reports comparing the number of referrals, average number of sessions and recovery rates between the available psychological therapies (NHS Digital, 2014, 2015, 2016). As demonstrated in Table 1, whilst a greater proportion of referrals (approximately 60–65%) received CBT as compared with counselling, patient outcomes (i.e., recovery rates) have been virtually equivalent between the two interventions.

Table 1. Data extracted from successive NHS Digital reports on comparisons between cognitive behaviour therapy (CBT) and counselling/counselling for depression (CfD)

Year      Intervention         Referrals for depressive disorder   Average sessions   Recovery rate (%)
2013–14   CBT                  21,622                              5.7                45.1
2013–14   Counselling          13,369                              5.4
2014–15   CBT                  28,350                              5.1                44.1
2014–15   Counselling          14,994                              4.4                45.2
2015–16   CBT                  35,589                              5.8                45.9
2015–16   Counselling (CfD)    20,011                              5.3                47.6

Research studies carried out by different academic groups that have accessed different portions of the IAPT data set to undertake more detailed analyses have also reported comparable outcomes between CBT and counselling in relation to the treatment of depression (Gyani, Shafran, Layard & Clark, 2013; Pybis, Saxon, Hill & Barkham, 2017). In more sophisticated studies using multilevel modelling to account for patient case mix and the nested nature of the data, where differences have been observed these have been small and not clinically significant (Pybis et al., 2017; Saxon, Firth & Barkham, 2017). These data demonstrate that for patients accessing psychological therapy throughout the NHS, counselling is, to all intents and purposes, as effective as CBT in the treatment of both moderate and severe levels of depression. These studies, as well as the publicly available evidence from NHS Digital, confirm the findings of earlier studies using the Clinical Outcomes in Routine Evaluation measure (CORE‐OM; Evans et al., 2002). Those studies used routinely collected CORE‐OM data from naturalistic settings before the implementation of IAPT and yielded comparable patient outcomes between counselling and CBT (Stiles, Barkham, Mellor‐Clark & Connell, 2008; Stiles, Barkham, Twigg, Mellor‐Clark & Cooper, 2006).

In summary, the evidence from the IAPT data set is that counselling is as effective as CBT as an intervention for depression. This evidence of effectiveness in NHS practice settings across England accords with the conclusions of Cuijpers (2017), who reviewed over 500 depression RCTs from four decades of research and concluded that there were no significant differences between the main interventions once biases and allegiances were considered. The consistency of the trials‐based and practice‐based findings is important in supporting the value of counselling as an intervention for depression offered in the NHS in England. However, we argue that the key conclusion for guideline development from these findings is that the focus of research attention should not be on repeatedly re‐evaluating the evidence for different interventions. Instead, the focus should move to other factors, such as therapist effects or site effects, where there appear to be noticeable differences in patients’ outcomes (e.g., Saxon & Barkham, 2012). This refocusing away from treatment differences and towards other factors is a position endorsed by the American Psychological Association (2012).

The IAPT data set: efficiency and cost‐effectiveness of counselling in the treatment of depression

A 2010 report calculated the annual cost of depression in England to be almost £11 billion in lost earnings, demands on the health service and the cost of prescribing drugs to treat the depression (Cost of Depression in England, 2010). In this context, the cost‐effectiveness of treatment is important to consider. Determining cost‐effectiveness with acceptable degrees of certainty requires large samples, which the IAPT data set offers in a way that trials do not. Given that the NICE procedural manual states that observational data can be used for ‘aspects of effectiveness’, the potential contribution of the IAPT data set to considerations of cost‐effectiveness is significant.

Improving Access to Psychological Therapies data suggest that patients accessing counselling attend fewer sessions on average than those accessing CBT (NHS Digital, 2014, 2015, 2016; Pybis et al., 2017; Saxon, Firth et al., 2017). Because counselling achieves comparable patient outcomes, this suggests it may well be cheaper and therefore more cost‐efficient than CBT. To consider this in more detail, a study exploring the cost‐effectiveness of IAPT as a service reported data collected from five Primary Care Trusts and found the cost of a high‐intensity session to be £177 (Radhakrishnan et al., 2013). Combining this estimate with figures from the latest IAPT report – patients receiving counselling typically attend 5.9 sessions, whereas those receiving CBT attend 7.1 sessions (NHS Digital, 2016) – suggests that counselling costs approximately £1044 per patient and CBT approximately £1257 per patient. In 2015–16, 152,452 patients completed a course of CBT at an estimated cost of £191 million. If those same patients had received counselling, the cost saving could have been over £30 million.
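The arithmetic behind these estimates can be checked with a short script. The inputs are exactly the figures quoted in the paragraph above (the £177 per-session cost is itself an estimate from a five-site study, so the totals are indicative rather than precise):

```python
# Reproducing the cost comparison in the text using the quoted figures:
# £177 per high-intensity session (Radhakrishnan et al., 2013),
# 5.9 vs 7.1 average sessions, 152,452 patients completing CBT in 2015-16.
COST_PER_SESSION = 177  # pounds
SESSIONS_COUNSELLING = 5.9
SESSIONS_CBT = 7.1
PATIENTS_CBT = 152_452

cost_counselling = COST_PER_SESSION * SESSIONS_COUNSELLING  # about £1,044/patient
cost_cbt = COST_PER_SESSION * SESSIONS_CBT                  # about £1,257/patient

total_cbt = cost_cbt * PATIENTS_CBT
saving = (cost_cbt - cost_counselling) * PATIENTS_CBT

print(f"£{total_cbt / 1e6:.1f}m total CBT cost")  # £191.6m
print(f"£{saving / 1e6:.1f}m potential saving")   # £32.4m
```

The computed total matches the ‘estimated £191 million’ in the text, and the roughly £32m difference is the ‘over £30 million’ potential saving.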

The potential saving of £30 million is calculated only from the fewer sessions (on average) received by counselling patients in IAPT. However, given that counsellors in IAPT are often paid a grade lower than ‘IAPT‐qualified’ therapists (Perren, 2009), this figure may underestimate the potential saving. Moreover, while counselling training is typically self‐funded, IAPT CBT trainings have been government funded, initially centrally and more recently locally. This illustrates the potential financial implications of how research evidence is weighed up and then synthesised into guideline recommendations for the treatment of depression.

In summary, the vast data set derived from the IAPT programme needs to be used to complement data from RCTs. This is particularly true for questions concerning cost‐effectiveness, which cannot be adequately addressed by RCTs alone. Within a few years, IAPT services will hold data on millions of patients. The inclusion of these data in the scope of NICE guideline reviews would be wholly consistent with the NICE guidelines procedural manual.

Considering the role of service users’ voices via qualitative research in guideline development

The previous section has argued for guideline developers to consider very large patient data sets. In this section, we argue for guideline developers to incorporate qualitative evidence that gives voice to service users. Doing so would be in accordance with NHS England's business plan for 2016/2017, which sets out a commitment ‘to make a genuine shift to place patients at the centre, shaping services around their preferences and involving them at all stages’ (NHS England, 2016, p. 49). NICE has a similar commitment (NICE Patient and Public Involvement Policy, 2017c). Currently, while qualitative research is included in guideline development, NICE processes do not allow such data to be included in the final summative analyses that shape key recommendations. Yet a number of researchers (Hill, Chui & Baumann, 2013; Midgley, Ansaldo & Target, 2014) argue that qualitative outcome studies are important to consider because they ‘offer a significant challenge to assumptions about outcome that derive from mainstream quantitative research on this topic, in relation to two questions: how the outcome is conceptualised, and the overall effectiveness of therapy’ (McLeod, 2013, p. 65). Reviewing the existing literature, McLeod suggested that patients themselves conceptualise outcome much more broadly than in terms of symptom or behavioural change (Binder, Holgersen & Nielsen, 2010). Typically, patients acknowledge ways in which therapy has been helpful but also where it has failed, suggesting that quantitative outcome research may overstate therapeutic effectiveness. Qualitative studies can also help answer questions about patients’ experiences and expectations of NHS services, including whether treatments are credible and acceptable to them, both of which have an impact on outcomes.

Turning to qualitative research focused on depression, there is a growing literature on understanding the experiences of patient populations such as minority ethnic groups (e.g., Lawrence et al., 2006a), women (e.g., Stoppard & McMullen, 1999), men (e.g., Emslie, Ridge, Ziebland & Hunt, 2006) and older adults (e.g., Lawrence et al., 2006b). Such studies elucidate population‐specific experiences of depression that can be useful in understanding why certain populations benefit less from treatment. There is also a literature that seeks to describe the experience of aspects of depression, such as recovery (e.g., Ridge & Ziebland, 2006), or types of depression, such as postnatal depression (e.g., Beck, 2002). However, relatively little research currently focuses on patients’ experiences of depression treatment. There is some research on depressed patients’ experiences of computer‐mediated depression treatment (e.g., Beattie, Shaw, Kaur & Kessler, 2009; Lillevoll et al., 2013) and mindfulness (e.g., Mason & Hargreaves, 2001; Smith, Graham & Senthinathan, 2007), but even less on the major modalities such as CBT (e.g., Barnes et al., 2013), psychodynamic (e.g., Valkonen, Hänninen & Lindfors, 2011) and process‐experiential therapies (e.g., Timulak & Elliott, 2003). This lack matters because qualitative research focusing on treatment experiences provides a method by which theoretical assumptions about how a therapy ‘works’ can be evaluated against the patient perspective.

Even more rare are comparative qualitative outcome studies (e.g., Nilsson, Svensson, Sandell & Clinton, 2007). Such studies focusing on depression are valuable because they can foster understanding of whether patients experience outcomes differently in different therapies. One example is Straarup and Poulsen's (2015) study, which compared patients’ experiences of CBT and metacognitive therapy and found evidence of different understandings of the causes of depression and what had changed as a result of therapy.

In summary, qualitative research has considerable value in terms of capturing patients’ experiences of psychotherapy that can inform practice (see Levitt, Pomerville & Surace, 2016). This suggests the need (1) to consider qualitative outcome studies in guideline development and recommendations, and (2) to encourage further research focused on guideline‐recommended treatments and differential patient experiences.

Towards a broader spectrum of best evidence

Whatever the potential pool of data, guideline organisations need to establish and implement procedures for making recommendations. A recent review considered how different national organisations produce clinical guidelines: Moriana et al. (2017) analysed and compiled lists of evidence‐based psychological treatments by disorder using data provided by RCTs, meta‐analyses, guidelines and systematic reviews from NICE, Cochrane, Division 12 of the American Psychological Association and the Australian Psychological Society. For depression, they found poor agreement, with no single intervention obtaining positive consensus from all four organisations. The authors suggested that one possible cause for the lack of agreement might be subtle biases in committee procedures, while the evidence considered by both NICE and Cochrane may be overinfluenced by the key meta‐analyses that both organisations commission to support their decision‐making. Whilst any one organisation might favour its own procedures in this way, the overall process lacks standardisation across the different bodies and leads to discrepancies in guidance.

The finding that guideline processes have led to different treatment recommendations for the same condition underlines the criticisms of an approach to synthesising evidence that rigidly prioritises RCTs. We argue that a rigorous and relevant knowledge base for the psychological therapies cannot be built on one research paradigm or type of data alone but should incorporate both evidence‐based practice (i.e., trials) and practice‐based evidence (i.e., routine practice data; Barkham & Margison, 2007). In this conceptualisation, trials provide evidence from a top‐down model (RCT evidence generating national guidelines that are implemented in practice settings), while practice‐based evidence builds upwards, using data from routine practice settings to guide interventions and inform guideline development. The two paradigms are complementary and, most importantly, the results from one paradigm can be tested out in the other. Further, a synthesis of evidence from both paradigms ensures that the data from trials remain directly connected and relevant to routine practice, creating a continual cycle between practice and research and between practitioners and researchers.

Given the points made here, there is little justification for relying solely on trials data and dismissing evidence from large standardised data sets collected routinely in services delivering NICE‐recommended and IAPT‐approved psychological therapies. There are issues and vulnerabilities with both paradigms and the evidence they provide, but it is no longer credible to suggest that the term ‘best’ applies only to trials data. To abide by the advice of Rawlins (2008) as well as Jadad and Enkin (2007), views concerning nontrial data need to become more accommodating. Overall, a collective move towards considering the weight of evidence from a wider bandwidth or spectrum provides a more rounded and inclusive view of available high‐quality data. By applying the concept of teleoanalysis – that is, the synthesis of different categories of evidence to obtain a quantitative summary – it is possible to arrive at more robust and relevant conclusions (Clarke & Barkham, 2009; Wald & Morris, 2003). This, we suggest, is an approach that would yield both better and more relevant evidence. Accordingly, IAPT data now need to be considered alongside evidence from trials to form a more complete and accurate picture of the comparative effectiveness of psychological therapies. Further, high‐quality qualitative data require inclusion in arriving at recommendations, particularly as they are a primary source for patients’ perspectives and experiences.

Conclusions and recommendations

We have argued for greater precision in defining the profession and practice of counselling, provided an overview of research on counselling for the treatment of depression from meta‐analyses and RCTs, raised issues arising from a sole reliance on trials, and put the case for broadening the bandwidth of high‐quality evidence through large routine standardised data sets and the consideration of high‐quality qualitative studies. Overall, with regard to depression, counselling is effective. Some analyses suggest it is somewhat less effective than other therapies for depression (e.g., CBT), but when research findings are adjusted for researcher allegiance and restricted to trials with a low risk of bias, such differences are minimal and not clinically relevant (Cuijpers, 2017). Results from (very) large standardised data sets in routine practice show counselling to be as effective as CBT in the treatment of patient‐reported depression, with a suggestion that it may be more cost‐efficient. However, such data are not considered by NICE even though they are consistent with the scope of data defined in its guideline development procedural manual (NICE, 2014/2017).

One clear observation concerning RCTs in the field of depression is the paucity of high‐quality head‐to‐head trials relating to counselling. In addition, there are calls from advocates of RCTs for trials to be larger and pragmatic (Wessley, 2007). In response to such calls, a large pragmatic noninferiority RCT comparing CfD (person‐centred experiential therapy) with CBT as the benchmark treatment will yield initial results late in 2018 (Saxon, Ashley et al., 2017). Particularly significant is the trial's focus on patients diagnosed as experiencing moderate or severe depression. The results regarding any differential effectiveness of counselling between moderate and severe depression will address a key issue as to whether CfD could be considered as a front‐line intervention. Funders should call for other therapeutic approaches to be evaluated using CBT as a benchmark – to determine whether another therapy is, in any clinically meaningful way, noninferior to CBT. In this way, a robust and relevant knowledge base will be constructed that aims to ensure the quality and standards of psychological interventions for the treatment of depression while providing choice to patients. This is important given the mounting empirical evidence that improving patient treatment choice improves therapy outcomes (Lindhiem, Bennett, Trentacosta & McLear, 2014; Williams et al., 2016).

Finally, in this article, we have sought to make an argument about re‐evaluating the definition of best evidence for guideline development. Using the evidence base for counselling in the treatment of depression as an example, we have argued that guideline developers should move towards integrating differing forms of high‐quality evidence rather than relying on trials alone. But this requires change for all stakeholders: for individual researchers in counselling to be strategic and ensure their work builds cumulatively on the work of others; for researchers in organisations to yield larger and more substantive studies; for service providers to collaborate in collating common data through, for example, building practice research networks; for counselling bodies to devise, fund and implement research strategies that will deliver a robust evidence base for practice; and for guideline developers to accept a diversity of substantive research approaches that, combined, will yield best evidence. In doing so, not only will it be possible to draw more robust conclusions about the cost‐effectiveness of depression treatment in the NHS and the clinical efficacy and effectiveness of different interventions, but also potentially the community, service, therapist, and patient variables that significantly impact on patient outcomes.

Acknowledgements

We would like to thank the anonymous reviewers for their helpful comments on an earlier draft.

Biographies

Michael Barkham is Professor of Clinical Psychology and Director of the Centre for Psychological Services Research at the University of Sheffield.

Naomi P. Moller is Joint Head of Research for the British Association for Counselling and Psychotherapy and Senior Lecturer in the School of Psychology at the Open University.

Joanne Pybis is Senior Research Fellow for the British Association for Counselling and Psychotherapy.

The views expressed in this article are our own and do not necessarily reflect the views of our respective organisations.


In some areas of psychology, the titles of many empirical research reports are informal in a way that is perhaps best described as “cute.” They usually take the form of a play on words or a well-known expression that relates to the topic under study. Here are some examples from recent issues of the journal Psychological Science.

  • “Smells Like Clean Spirit: Nonconscious Effects of Scent on Cognition and Behavior”
  • “Time Crawls: The Temporal Resolution of Infants’ Visual Attention”
  • “Scent of a Woman: Men’s Testosterone Responses to Olfactory Ovulation Cues”
  • “Apocalypse Soon?: Dire Messages Reduce Belief in Global Warming by Contradicting Just-World Beliefs”
  • “Serial vs. Parallel Processing: Sometimes They Look Like Tweedledum and Tweedledee but They Can (and Should) Be Distinguished”
  • “How Do I Love Thee? Let Me Count the Words: The Social Effects of Expressive Writing”

Individual researchers differ quite a bit in their preference for such titles. Some use them regularly, while others never use them. What might be some of the pros and cons of using cute article titles?

For articles that are being submitted for publication, the title page also includes an author note that lists the authors’ full institutional affiliations, any acknowledgments the authors wish to make to agencies that funded the research or to colleagues who commented on it, and contact information for the authors. For student papers that are not being submitted for publication—including theses—author notes are generally not necessary.

The abstract is a summary of the study. It is the second page of the manuscript and is headed with the word Abstract. The first line is not indented. The abstract presents the research question, a summary of the method, the basic results, and the most important conclusions. Because the abstract is usually limited to about 200 words, it can be a challenge to write a good one.
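The layout described above (title centred in the upper half of the page, authors and affiliation below, abstract on the second page) can be sketched in LaTeX. This is only a minimal illustration using the standard article class, with a hypothetical author and affiliation; dedicated classes such as apa7 implement the full APA requirements (running head, double spacing, and so on):

```latex
\documentclass[12pt]{article}

% Minimal sketch only: plain `article` does not enforce APA rules such as
% the running head or double spacing; a class like apa7 handles those.
\title{Virtual Driving and Risk Taking:\\
  Do Racing Games Increase Risk-Taking Cognitions, Affect, and Behaviour?}
\author{A. Student \\ University of Example} % placeholder author and affiliation
\date{}                                      % APA title pages carry no date line

\begin{document}
\maketitle

\begin{abstract}
% About 200 words: research question, summary of the method,
% basic results, and the most important conclusions.
\end{abstract}

\end{document}
```

Note that the abstract's first line is set flush left (not indented), which matches the default behaviour of the abstract environment shown here.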

Introduction

The introduction begins on the third page of the manuscript. The heading at the top of this page is the full title of the manuscript, with each important word capitalized as on the title page. The introduction includes three distinct subsections, although these are typically not identified by separate headings. The opening introduces the research question and explains why it is interesting, the literature review discusses relevant previous research, and the closing restates the research question and comments on the method used to answer it.

The Opening

The opening, which is usually a paragraph or two in length, introduces the research question and explains why it is interesting. To capture the reader’s attention, researcher Daryl Bem recommends starting with general observations about the topic under study, expressed in ordinary language (not technical jargon)—observations that are about people and their behaviour (not about researchers or their research; Bem, 2003 [1]). Concrete examples are often very useful here. According to Bem, this would be a poor way to begin a research report:

Festinger’s theory of cognitive dissonance received a great deal of attention during the latter part of the 20th century (p. 191)

The following would be much better:

The individual who holds two beliefs that are inconsistent with one another may feel uncomfortable. For example, the person who knows that he or she enjoys smoking but believes it to be unhealthy may experience discomfort arising from the inconsistency or disharmony between these two thoughts or cognitions. This feeling of discomfort was called cognitive dissonance by social psychologist Leon Festinger (1957), who suggested that individuals will be motivated to remove this dissonance in whatever way they can (p. 191).

After capturing the reader’s attention, the opening should go on to introduce the research question and explain why it is interesting. Will the answer fill a gap in the literature? Will it provide a test of an important theory? Does it have practical implications? Giving readers a clear sense of what the research is about and why they should care about it will motivate them to continue reading the literature review—and will help them make sense of it.

Breaking the Rules

Researcher Larry Jacoby reported several studies showing that a word that people see or hear repeatedly can seem more familiar even when they do not recall the repetitions—and that this tendency is especially pronounced among older adults. He opened his article with the following humorous anecdote:

A friend whose mother is suffering symptoms of Alzheimer’s disease (AD) tells the story of taking her mother to visit a nursing home, preliminary to her mother’s moving there. During an orientation meeting at the nursing home, the rules and regulations were explained, one of which regarded the dining room. The dining room was described as similar to a fine restaurant except that tipping was not required. The absence of tipping was a central theme in the orientation lecture, mentioned frequently to emphasize the quality of care along with the advantages of having paid in advance. At the end of the meeting, the friend’s mother was asked whether she had any questions. She replied that she only had one question: “Should I tip?” (Jacoby, 1999, p. 3)

Although both humour and personal anecdotes are generally discouraged in APA-style writing, this example is a highly effective way to start because it both engages the reader and provides an excellent real-world example of the topic under study.

The Literature Review

Immediately after the opening comes the literature review, which describes relevant previous research on the topic and can be anywhere from several paragraphs to several pages in length. However, the literature review is not simply a list of past studies. Instead, it constitutes a kind of argument for why the research question is worth addressing. By the end of the literature review, readers should be convinced that the research question makes sense and that the present study is a logical next step in the ongoing research process.

Like any effective argument, the literature review must have some kind of structure. For example, it might begin by describing a phenomenon in a general way along with several studies that demonstrate it, then describing two or more competing theories of the phenomenon, and finally presenting a hypothesis to test one or more of the theories. Or it might describe one phenomenon, then describe another phenomenon that seems inconsistent with the first one, then propose a theory that resolves the inconsistency, and finally present a hypothesis to test that theory. In applied research, it might describe a phenomenon or theory, then describe how that phenomenon or theory applies to some important real-world situation, and finally suggest a way to test whether it does, in fact, apply to that situation.

Looking at the literature review in this way emphasizes a few things. First, it is extremely important to start with an outline of the main points that you want to make, organized in the order that you want to make them. The basic structure of your argument, then, should be apparent from the outline itself. Second, it is important to emphasize the structure of your argument in your writing. One way to do this is to begin the literature review by summarizing your argument even before you begin to make it. “In this article, I will describe two apparently contradictory phenomena, present a new theory that has the potential to resolve the apparent contradiction, and finally present a novel hypothesis to test the theory.” Another way is to open each paragraph with a sentence that summarizes the main point of the paragraph and links it to the preceding points. These opening sentences provide the “transitions” that many beginning researchers have difficulty with. Instead of beginning a paragraph by launching into a description of a previous study, such as “Williams (2004) found that…,” it is better to start by indicating something about why you are describing this particular study. Here are some simple examples:

Another example of this phenomenon comes from the work of Williams (2004).

Williams (2004) offers one explanation of this phenomenon.

An alternative perspective has been provided by Williams (2004).

We used a method based on the one used by Williams (2004).

Finally, remember that your goal is to construct an argument for why your research question is interesting and worth addressing—not necessarily why your favourite answer to it is correct. In other words, your literature review must be balanced. If you want to emphasize the generality of a phenomenon, then of course you should discuss various studies that have demonstrated it. However, if there are other studies that have failed to demonstrate it, you should discuss them too. Or if you are proposing a new theory, then of course you should discuss findings that are consistent with that theory. However, if there are other findings that are inconsistent with it, again, you should discuss them too. It is acceptable to argue that the balance of the research supports the existence of a phenomenon or is consistent with a theory (and that is usually the best that researchers in psychology can hope for), but it is not acceptable to ignore contradictory evidence. Besides, a large part of what makes a research question interesting is uncertainty about its answer.

The Closing

The closing of the introduction—typically the final paragraph or two—usually includes two important elements. The first is a clear statement of the main research question or hypothesis. This statement tends to be more formal and precise than in the opening and is often expressed in terms of operational definitions of the key variables. The second is a brief overview of the method and some comment on its appropriateness. Here, for example, is how Darley and Latané (1968) [2] concluded the introduction to their classic article on the bystander effect:

These considerations lead to the hypothesis that the more bystanders to an emergency, the less likely, or the more slowly, any one bystander will intervene to provide aid. To test this proposition it would be necessary to create a situation in which a realistic “emergency” could plausibly occur. Each subject should also be blocked from communicating with others to prevent his getting information about their behaviour during the emergency. Finally, the experimental situation should allow for the assessment of the speed and frequency of the subjects’ reaction to the emergency. The experiment reported below attempted to fulfill these conditions. (p. 378)

Thus the introduction leads smoothly into the next major section of the article—the method section.

The Method Section

The method section is where you describe how you conducted your study. An important principle for writing a method section is that it should be clear and detailed enough that other researchers could replicate the study by following your “recipe.” This means that it must describe all the important elements of the study—basic demographic characteristics of the participants, how they were recruited, whether they were randomly assigned, how the variables were manipulated or measured, how counterbalancing was accomplished, and so on. At the same time, it should avoid irrelevant details such as the fact that the study was conducted in Classroom 37B of the Industrial Technology Building or that the questionnaire was double-sided and completed using pencils.

The method section begins immediately after the introduction ends with the heading “Method” (not “Methods”) centred on the page. Immediately after this is the subheading “Participants,” left justified and in italics. The participants subsection indicates how many participants there were, the number of women and men, some indication of their age, other demographics that may be relevant to the study, and how they were recruited, including any incentives given for participation.

Figure 11.1 Three ways of organizing an APA-style method section. Long description available below.

After the participants section, the structure can vary a bit. Figure 11.1 shows three common approaches. In the first, the participants section is followed by a design and procedure subsection, which describes the rest of the method. This works well for methods that are relatively simple and can be described adequately in a few paragraphs. In the second approach, the participants section is followed by separate design and procedure subsections. This works well when both the design and the procedure are relatively complicated and each requires multiple paragraphs.

What is the difference between design and procedure? The design of a study is its overall structure. What were the independent and dependent variables? Was the independent variable manipulated, and if so, was it manipulated between or within subjects? How were the variables operationally defined? The procedure is how the study was carried out. It often works well to describe the procedure in terms of what the participants did rather than what the researchers did. For example, the participants gave their informed consent, read a set of instructions, completed a block of four practice trials, completed a block of 20 test trials, completed two questionnaires, and were debriefed and excused.

In the third basic way to organize a method section, the participants subsection is followed by a materials subsection before the design and procedure subsections. This works well when there are complicated materials to describe. This might mean multiple questionnaires, written vignettes that participants read and respond to, perceptual stimuli, and so on. The heading of this subsection can be modified to reflect its content. Instead of “Materials,” it can be “Questionnaires,” “Stimuli,” and so on.

The Results Section

The results section is where you present the main results of the study, including the results of the statistical analyses. Although it does not include the raw data—individual participants’ responses or scores—researchers should save their raw data and make them available to other researchers who request them. Several journals now encourage the open sharing of raw data online.

Although there are no standard subsections, it is still important for the results section to be logically organized. Typically it begins with certain preliminary issues. One is whether any participants or responses were excluded from the analyses and why. The rationale for excluding data should be described clearly so that other researchers can decide whether it is appropriate. A second preliminary issue is how multiple responses were combined to produce the primary variables in the analyses. For example, if participants rated the attractiveness of 20 stimulus people, you might have to explain that you began by computing the mean attractiveness rating for each participant. Or if they recalled as many items as they could from a study list of 20 words, did you count the number correctly recalled, compute the percentage correctly recalled, or perhaps compute the number correct minus the number incorrect? A third preliminary issue is the reliability of the measures. This is where you would present test-retest correlations, Cronbach’s α, or other statistics to show that the measures are consistent across time and across items. A final preliminary issue is whether the manipulation was successful. This is where you would report the results of any manipulation checks.
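These preliminary computations are simple enough to script. Below is a minimal Python sketch (the ratings are invented for illustration, and `cronbach_alpha` is a hypothetical helper written here, not part of any standard library) of how per-participant means and Cronbach’s α might be computed:

```python
from statistics import mean, variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a set of items; item_scores is a list of items,
    each a list of scores in the same participant order."""
    k = len(item_scores)
    # Per-participant total score across the k items.
    totals = [sum(scores) for scores in zip(*item_scores)]
    item_var = sum(variance(scores) for scores in item_scores)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Invented data: four questionnaire items rated by five participants.
items = [
    [4, 5, 3, 5, 4],
    [4, 4, 3, 5, 3],
    [5, 5, 2, 5, 4],
    [3, 4, 3, 4, 3],
]

# One primary variable per participant: the mean rating across items
# (analogous to the mean attractiveness rating in the example above).
participant_means = [mean(scores) for scores in zip(*items)]

alpha = cronbach_alpha(items)
```

A value of α near .80 or above is conventionally taken as acceptable internal consistency, though the cutoff depends on the purpose of the measure.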

The results section should then tackle the primary research questions, one at a time. Again, there should be a clear organization. One approach would be to answer the most general questions and then proceed to answer more specific ones. Another would be to answer the main question first and then to answer secondary ones. Regardless, Bem (2003) [3] suggests the following basic structure for discussing each new result:

  • Remind the reader of the research question.
  • Give the answer to the research question in words.
  • Present the relevant statistics.
  • Qualify the answer if necessary.
  • Summarize the result.

Notice that only Step 3 necessarily involves numbers. The rest of the steps involve presenting the research question and the answer to it in words. In fact, the basic results should be clear even to a reader who skips over the numbers.
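For Step 3, APA style has formatting conventions of its own, such as rounding test statistics to two decimals and dropping the leading zero from p values. The sketch below (with invented statistics and a hypothetical `apa_t` helper) shows how the numbers can be embedded in a sentence that already gives the answer in words:

```python
def apa_t(t, df, p):
    """Format a t test APA-style, e.g. 't(38) = 2.51, p = .016'.
    Very small p values are conventionally reported as p < .001."""
    p_str = "p < .001" if p < .001 else f"p = {p:.3f}".replace("0.", ".", 1)
    return f"t({df}) = {t:.2f}, {p_str}"

# Steps 1-2: remind the reader of the question and answer it in words;
# Step 3: append the relevant statistics. (All numbers are hypothetical.)
sentence = ("As predicted, participants tested in groups responded more slowly "
            "to the emergency than participants tested alone, "
            + apa_t(2.51, 38, 0.016) + ".")
```

Note that the sentence remains informative even if a reader skips the numeric clause, which is exactly the property Bem’s structure aims for.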

The Discussion

The discussion is the last major section of the research report. Discussions usually consist of some combination of the following elements:

  • Summary of the research
  • Theoretical implications
  • Practical implications
  • Limitations
  • Suggestions for future research

The discussion typically begins with a summary of the study that provides a clear answer to the research question. In a short report with a single study, this might require no more than a sentence. In a longer report with multiple studies, it might require a paragraph or even two. The summary is often followed by a discussion of the theoretical implications of the research. Do the results provide support for any existing theories? If not, how can they be explained? Although you do not have to provide a definitive explanation or detailed theory for your results, you at least need to outline one or more possible explanations. In applied research—and often in basic research—there is also some discussion of the practical implications of the research. How can the results be used, and by whom, to accomplish some real-world goal?

The theoretical and practical implications are often followed by a discussion of the study’s limitations. Perhaps there are problems with its internal or external validity. Perhaps the manipulation was not very effective or the measures not very reliable. Perhaps there is some evidence that participants did not fully understand their task or that they were suspicious of the intent of the researchers. Now is the time to discuss these issues and how they might have affected the results. But do not overdo it. All studies have limitations, and most readers will understand that a different sample or different measures might have produced different results. Unless there is good reason to think they would have, however, there is no reason to mention these routine issues. Instead, pick two or three limitations that seem like they could have influenced the results, explain how they could have influenced the results, and suggest ways to deal with them.

Most discussions end with some suggestions for future research. If the study did not satisfactorily answer the original research question, what will it take to do so? What new research questions has the study raised? This part of the discussion, however, is not just a list of new questions. It is a discussion of two or three of the most important unresolved issues. This means identifying and clarifying each question, suggesting some alternative answers, and even suggesting ways they could be studied.

Finally, some researchers are quite good at ending their articles with a sweeping or thought-provoking conclusion. Darley and Latané (1968) [4], for example, ended their article on the bystander effect by discussing the idea that whether people help others may depend more on the situation than on their personalities. Their final sentence is, “If people understand the situational forces that can make them hesitate to intervene, they may better overcome them” (p. 383). However, this kind of ending can be difficult to pull off. It can sound overreaching or just banal and end up detracting from the overall impact of the article. It is often better simply to end when you have made your final point (although you should avoid ending on a limitation).

References

The references section begins on a new page with the heading “References” centred at the top of the page. All references cited in the text are then listed in the format presented earlier. They are listed alphabetically by the last name of the first author. If two sources have the same first author, they are listed alphabetically by the last name of the second author. If all the authors are the same, then they are listed chronologically by the year of publication. Everything in the reference list is double-spaced both within and between references.
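These ordering rules behave like a sort key: compare the authors’ surnames position by position, and fall back on the year only when the author lists match. Here is a hedged Python sketch (the entries are invented, and details such as authors’ initials and the letter suffixes APA adds for same-author, same-year works are ignored):

```python
# Each reference is represented as (author_surnames, year).
refs = [
    (["Smith", "Jones"], 2010),
    (["Brown"], 2005),
    (["Smith", "Adams"], 2012),
    (["Smith", "Jones"], 2008),
]

# Sorting by the (surname list, year) pair applies the rules above:
# alphabetical by first author, then second author, and chronological
# by year only when the author lists are identical. Python compares
# lists element by element, which is exactly the behaviour needed.
ordered = sorted(refs, key=lambda ref: (ref[0], ref[1]))
# Brown (2005); Smith & Adams (2012); Smith & Jones (2008); Smith & Jones (2010)
```

A convenient side effect of list comparison is that a shorter author list sorts before a longer one sharing the same prefix, matching the “nothing precedes something” convention for single-author works.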

Appendices, Tables, and Figures

Appendices, tables, and figures come after the references. An  appendix  is appropriate for supplemental material that would interrupt the flow of the research report if it were presented within any of the major sections. An appendix could be used to present lists of stimulus words, questionnaire items, detailed descriptions of special equipment or unusual statistical analyses, or references to the studies that are included in a meta-analysis. Each appendix begins on a new page. If there is only one, the heading is “Appendix,” centred at the top of the page. If there is more than one, the headings are “Appendix A,” “Appendix B,” and so on, and they appear in the order they were first mentioned in the text of the report.

After any appendices come tables and then figures. Tables and figures are both used to present results. Figures can also be used to illustrate theories (e.g., in the form of a flowchart), display stimuli, outline procedures, and present many other kinds of information. Each table and figure appears on its own page. Tables are numbered in the order that they are first mentioned in the text (“Table 1,” “Table 2,” and so on). Figures are numbered the same way (“Figure 1,” “Figure 2,” and so on). A brief explanatory title, with the important words capitalized, appears above each table. Each figure is given a brief explanatory caption, where (aside from proper nouns or names) only the first word of each sentence is capitalized. More details on preparing APA-style tables and figures are presented later in the book.

Sample APA-Style Research Report

Figures 11.2, 11.3, 11.4, and 11.5 show some sample pages from an APA-style empirical research report originally written by undergraduate student Tomoe Suyama at California State University, Fresno. The main purpose of these figures is to illustrate the basic organization and formatting of an APA-style empirical research report, although many high-level and low-level style conventions can be seen here too.


Key Takeaways

  • An APA-style empirical research report consists of several standard sections. The main ones are the abstract, introduction, method, results, discussion, and references.
  • The introduction consists of an opening that presents the research question, a literature review that describes previous research on the topic, and a closing that restates the research question and comments on the method. The literature review constitutes an argument for why the current study is worth doing.
  • The method section describes the method in enough detail that another researcher could replicate the study. At a minimum, it consists of a participants subsection and a design and procedure subsection.
  • The results section describes the results in an organized fashion. Each primary result is presented in terms of statistical results but also explained in words.
  • The discussion typically summarizes the study, discusses theoretical and practical implications and limitations of the study, and offers suggestions for further research.
Exercises

  • Practice: Look through an issue of a general interest professional journal (e.g., Psychological Science). Read the opening of the first five articles and rate the effectiveness of each one from 1 (very ineffective) to 5 (very effective). Write a sentence or two explaining each rating.
  • Practice: Find a recent article in a professional journal and identify where the opening, literature review, and closing of the introduction begin and end.
  • Practice: Find a recent article in a professional journal and highlight in a different colour each of the following elements in the discussion: summary, theoretical implications, practical implications, limitations, and suggestions for future research.

Long Descriptions

Figure 11.1 long description: Table showing three ways of organizing an APA-style method section.

In the simple method, there are two subheadings: “Participants” (which might begin “The participants were…”) and “Design and procedure” (which might begin “There were three conditions…”).

In the typical method, there are three subheadings: “Participants” (“The participants were…”), “Design” (“There were three conditions…”), and “Procedure” (“Participants viewed each stimulus on the computer screen…”).

In the complex method, there are four subheadings: “Participants” (“The participants were…”), “Materials” (“The stimuli were…”), “Design” (“There were three conditions…”), and “Procedure” (“Participants viewed each stimulus on the computer screen…”).

  • Bem, D. J. (2003). Writing the empirical journal article. In J. M. Darley, M. P. Zanna, & H. R. Roediger III (Eds.), The compleat academic: A practical guide for the beginning social scientist (2nd ed.). Washington, DC: American Psychological Association.
  • Darley, J. M., & Latané, B. (1968). Bystander intervention in emergencies: Diffusion of responsibility. Journal of Personality and Social Psychology, 8, 377–383.

Glossary

Empirical research report: A type of research article which describes one or more new empirical studies conducted by the authors.

Title page: The page at the beginning of an APA-style research report containing the title of the article, the authors’ names, and their institutional affiliation.

Abstract: A summary of a research study.

Introduction: The third page of a manuscript, containing the research question, the literature review, and comments about how to answer the research question.

Opening: An introduction to the research question and an explanation of why this question is interesting.

Literature review: A description of relevant previous research on the topic being discussed and an argument for why the research is worth addressing.

Closing: The end of the introduction, where the research question is reiterated and the method is commented upon.

Method section: The section of a research report where the method used to conduct the study is described.

Results section: The section of a research article where the main results of the study, including the results of the statistical analyses, are presented.

Discussion: The section of a research report that summarizes the study’s results and interprets them by referring back to the study’s theoretical background.

Appendix: Part of a research report which contains supplemental material.

Research Methods in Psychology - 2nd Canadian Edition Copyright © 2015 by Paul C. Price, Rajiv Jhangiani, & I-Chant A. Chiang is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.




A friend whose mother is suffering symptoms of Alzheimer’s disease (AD) tells the story of taking her mother to visit a nursing home, preliminary to her mother’s moving there. During an orientation meeting at the nursing home, the rules and regulations were explained, one of which regarded the dining room. The dining room was described as similar to a fine restaurant except that tipping was not required. The absence of tipping was a central theme in the orientation lecture, mentioned frequently to emphasize the quality of care along with the advantages of having paid in advance. At the end of the meeting, the friend’s mother was asked whether she had any questions. She replied that she only had one question: “Should I tip?” (p. 3).

Although both humor and personal anecdotes are generally discouraged in APA-style writing, this example is a highly effective way to start because it both engages the reader and provides an excellent real-world example of the topic under study.

The Literature Review

Immediately after the opening comes the literature review A written summary of previous research on a topic. It constitutes the bulk of the introduction of an APA-style empirical research report. , which describes relevant previous research on the topic and can be anywhere from several paragraphs to several pages in length. However, the literature review is not simply a list of past studies. Instead, it constitutes a kind of argument for why the research question is worth addressing. By the end of the literature review, readers should be convinced that the research question makes sense and that the present study is a logical next step in the ongoing research process.

Like any effective argument, the literature review must have some kind of structure. For example, it might begin by describing a phenomenon in a general way along with several studies that demonstrate it, then describing two or more competing theories of the phenomenon, and finally presenting a hypothesis to test one or more of the theories. Or it might describe one phenomenon, then describe another phenomenon that seems inconsistent with the first one, then propose a theory that resolves the inconsistency, and finally present a hypothesis to test that theory. In applied research, it might describe a phenomenon or theory, then describe how that phenomenon or theory applies to some important real-world situation, and finally suggest a way to test whether it does, in fact, apply to that situation.

Looking at the literature review in this way emphasizes a few things. First, it is extremely important to start with an outline of the main points that you want to make, organized in the order that you want to make them. The basic structure of your argument, then, should be apparent from the outline itself. Second, it is important to emphasize the structure of your argument in your writing. One way to do this is to begin the literature review by summarizing your argument even before you begin to make it. “In this article, I will describe two apparently contradictory phenomena, present a new theory that has the potential to resolve the apparent contradiction, and finally present a novel hypothesis to test the theory.” Another way is to open each paragraph with a sentence that summarizes the main point of the paragraph and links it to the preceding points. These opening sentences provide the “transitions” that many beginning researchers have difficulty with. Instead of beginning a paragraph by launching into a description of a previous study, such as “Williams (2004) found that…,” it is better to start by indicating something about why you are describing this particular study. Here are some simple examples:

Another example of this phenomenon comes from the work of Williams (2004).

Williams (2004) offers one explanation of this phenomenon.

An alternative perspective has been provided by Williams (2004).

We used a method based on the one used by Williams (2004).

Finally, remember that your goal is to construct an argument for why your research question is interesting and worth addressing—not necessarily why your favorite answer to it is correct. In other words, your literature review must be balanced. If you want to emphasize the generality of a phenomenon, then of course you should discuss various studies that have demonstrated it. However, if there are other studies that have failed to demonstrate it, you should discuss them too. Or if you are proposing a new theory, then of course you should discuss findings that are consistent with that theory. However, if there are other findings that are inconsistent with it, again, you should discuss them too. It is acceptable to argue that the balance of the research supports the existence of a phenomenon or is consistent with a theory (and that is usually the best that researchers in psychology can hope for), but it is not acceptable to ignore contradictory evidence. Besides, a large part of what makes a research question interesting is uncertainty about its answer.

The Closing

The closing of the introduction—typically the final paragraph or two—usually includes two important elements. The first is a clear statement of the main research question or hypothesis. This statement tends to be more formal and precise than in the opening and is often expressed in terms of operational definitions of the key variables. The second is a brief overview of the method and some comment on its appropriateness. Here, for example, is how Darley and Latané (1968) concluded the introduction to their classic article on the bystander effect:

These considerations lead to the hypothesis that the more bystanders to an emergency, the less likely, or the more slowly, any one bystander will intervene to provide aid. To test this proposition it would be necessary to create a situation in which a realistic “emergency” could plausibly occur. Each subject should also be blocked from communicating with others to prevent his getting information about their behavior during the emergency. Finally, the experimental situation should allow for the assessment of the speed and frequency of the subjects’ reaction to the emergency. The experiment reported below attempted to fulfill these conditions (p. 378).

Thus the introduction leads smoothly into the next major section of the article—the method section.

The Method Section

The method section is where you describe how you conducted your study. An important principle for writing a method section is that it should be clear and detailed enough that other researchers could replicate the study by following your “recipe.” This means that it must describe all the important elements of the study—basic demographic characteristics of the participants, how they were recruited, whether they were randomly assigned, how the variables were manipulated or measured, how counterbalancing was accomplished, and so on. At the same time, it should avoid irrelevant details such as the fact that the study was conducted in Classroom 37B of the Industrial Technology Building or that the questionnaire was double-sided and completed using pencils.

The method section begins immediately after the introduction ends with the heading “Method” (not “Methods”) centered on the page. Immediately after this is the subheading “Participants,” left justified and in italics. The participants subsection indicates how many participants there were, the number of women and men, some indication of their age, other demographics that may be relevant to the study, and how they were recruited, including any incentives given for participation.

Figure 11.1 Three Ways of Organizing an APA-Style Method


After the participants section, the structure can vary a bit. Figure 11.1 "Three Ways of Organizing an APA-Style Method" shows three common approaches. In the first, the participants section is followed by a design and procedure subsection, which describes the rest of the method. This works well for methods that are relatively simple and can be described adequately in a few paragraphs. In the second approach, the participants section is followed by separate design and procedure subsections. This works well when both the design and the procedure are relatively complicated and each requires multiple paragraphs.

What is the difference between design and procedure? The design of a study is its overall structure. What were the independent and dependent variables? Was the independent variable manipulated, and if so, was it manipulated between or within subjects? How were the variables operationally defined? The procedure is how the study was carried out. It often works well to describe the procedure in terms of what the participants did rather than what the researchers did. For example, the participants gave their informed consent, read a set of instructions, completed a block of four practice trials, completed a block of 20 test trials, completed two questionnaires, and were debriefed and excused.

In the third basic way to organize a method section, the participants subsection is followed by a materials subsection before the design and procedure subsections. This works well when there are complicated materials to describe. This might mean multiple questionnaires, written vignettes that participants read and respond to, perceptual stimuli, and so on. The heading of this subsection can be modified to reflect its content. Instead of “Materials,” it can be “Questionnaires,” “Stimuli,” and so on.

The Results Section

The results section is where you present the main results of the study, including the results of the statistical analyses. Although it does not include the raw data—individual participants’ responses or scores—researchers should save their raw data and make them available to other researchers who request them. Some journals now make the raw data available online.

Although there are no standard subsections, it is still important for the results section to be logically organized. Typically it begins with certain preliminary issues. One is whether any participants or responses were excluded from the analyses and why. The rationale for excluding data should be described clearly so that other researchers can decide whether it is appropriate. A second preliminary issue is how multiple responses were combined to produce the primary variables in the analyses. For example, if participants rated the attractiveness of 20 stimulus people, you might have to explain that you began by computing the mean attractiveness rating for each participant. Or if they recalled as many items as they could from a study list of 20 words, did you count the number correctly recalled, compute the percentage correctly recalled, or perhaps compute the number correct minus the number incorrect? A third preliminary issue is the reliability of the measures. This is where you would present test-retest correlations, Cronbach’s α, or other statistics to show that the measures are consistent across time and across items. A final preliminary issue is whether the manipulation was successful. This is where you would report the results of any manipulation checks.
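These preliminary computations are easy to script before you write them up. The sketch below uses made-up ratings of five stimulus items to show one way of computing each participant's mean rating (a primary variable built from multiple responses) and Cronbach's α (an internal-consistency reliability statistic) with only the Python standard library; the data and variable names are hypothetical, not part of any APA requirement.

```python
from statistics import mean, pvariance

# Hypothetical data: each row is one participant's ratings of 5 stimulus items.
ratings = [
    [4, 5, 3, 4, 4],
    [2, 3, 2, 3, 2],
    [5, 5, 4, 5, 5],
    [3, 4, 3, 3, 4],
]

# Combining multiple responses into a primary variable:
# the mean rating for each participant.
participant_means = [mean(row) for row in ratings]

# Internal consistency (Cronbach's alpha):
# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
k = len(ratings[0])                        # number of items
items = list(zip(*ratings))                # transpose: one tuple per item
item_variances = [pvariance(col) for col in items]
total_scores = [sum(row) for row in ratings]
alpha = (k / (k - 1)) * (1 - sum(item_variances) / pvariance(total_scores))
```

With these particular numbers the items hang together closely, so α comes out high; in a report you would simply state the obtained value (e.g., “α = .97”) in this part of the results section.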

The results section should then tackle the primary research questions, one at a time. Again, there should be a clear organization. One approach would be to answer the most general questions and then proceed to answer more specific ones. Another would be to answer the main question first and then to answer secondary ones. Regardless, Bem (2003) suggests the following basic structure for discussing each new result:

  • Remind the reader of the research question.
  • Give the answer to the research question in words.
  • Present the relevant statistics.
  • Qualify the answer if necessary.
  • Summarize the result.

Notice that only Step 3 necessarily involves numbers. The rest of the steps involve presenting the research question and the answer to it in words. In fact, the basic results should be clear even to a reader who skips over the numbers.

The Discussion

The discussion is the last major section of the research report. Discussions usually consist of some combination of the following elements:

  • Summary of the research
  • Theoretical implications
  • Practical implications
  • Limitations
  • Suggestions for future research

The discussion typically begins with a summary of the study that provides a clear answer to the research question. In a short report with a single study, this might require no more than a sentence. In a longer report with multiple studies, it might require a paragraph or even two. The summary is often followed by a discussion of the theoretical implications of the research. Do the results provide support for any existing theories? If not, how can they be explained? Although you do not have to provide a definitive explanation or detailed theory for your results, you at least need to outline one or more possible explanations. In applied research—and often in basic research—there is also some discussion of the practical implications of the research. How can the results be used, and by whom, to accomplish some real-world goal?

The theoretical and practical implications are often followed by a discussion of the study’s limitations. Perhaps there are problems with its internal or external validity. Perhaps the manipulation was not very effective or the measures not very reliable. Perhaps there is some evidence that participants did not fully understand their task or that they were suspicious of the intent of the researchers. Now is the time to discuss these issues and how they might have affected the results. But do not overdo it. All studies have limitations, and most readers will understand that a different sample or different measures might have produced different results. Unless there is good reason to think they would have, however, there is no reason to mention these routine issues. Instead, pick two or three limitations that seem like they could have influenced the results, explain how they could have influenced the results, and suggest ways to deal with them.

Most discussions end with some suggestions for future research. If the study did not satisfactorily answer the original research question, what will it take to do so? What new research questions has the study raised? This part of the discussion, however, is not just a list of new questions. It is a discussion of two or three of the most important unresolved issues. This means identifying and clarifying each question, suggesting some alternative answers, and even suggesting ways they could be studied.

Finally, some researchers are quite good at ending their articles with a sweeping or thought-provoking conclusion. Darley and Latané (1968), for example, ended their article on the bystander effect by discussing the idea that whether people help others may depend more on the situation than on their personalities. Their final sentence is, “If people understand the situational forces that can make them hesitate to intervene, they may better overcome them” (p. 383). However, this kind of ending can be difficult to pull off. It can sound overreaching or just banal and end up detracting from the overall impact of the article. It is often better simply to end when you have made your final point (although you should avoid ending on a limitation).

The References Section

The references section begins on a new page with the heading “References” centered at the top of the page. All references cited in the text are then listed in the format presented earlier. They are listed alphabetically by the last name of the first author. If two sources have the same first author, they are listed alphabetically by the last name of the second author. If all the authors are the same, then they are listed chronologically by the year of publication. Everything in the reference list is double-spaced both within and between references.
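These ordering rules amount to a lexicographic sort: compare author names first, and fall back to year only when the authors match. Here is a minimal Python sketch with hypothetical entries (the author names, years, and labels are invented for illustration):

```python
# Each hypothetical entry is (tuple of author last names, year, short label).
refs = [
    (("Smith", "Jones"), 2010, "Smith & Jones, 2010"),
    (("Smith", "Brown"), 2008, "Smith & Brown, 2008"),
    (("Adams",), 2012, "Adams, 2012"),
    (("Smith", "Jones"), 2005, "Smith & Jones, 2005"),
]

# Sorting by the author-name tuple and then by year implements the rules:
# alphabetical by first author, ties broken by later authors, and
# chronological when all the authors are the same.
ordered = sorted(refs, key=lambda entry: (entry[0], entry[1]))
```

Applied to these entries, the sort puts Adams (2012) first, then Smith and Brown (2008), then the two Smith and Jones papers in chronological order, which is exactly the APA ordering described above.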

Appendixes, Tables, and Figures

Appendixes, tables, and figures come after the references. An appendix is appropriate for supplemental material that would interrupt the flow of the research report if it were presented within any of the major sections. An appendix could be used to present lists of stimulus words, questionnaire items, detailed descriptions of special equipment or unusual statistical analyses, or references to the studies that are included in a meta-analysis. Each appendix begins on a new page. If there is only one, the heading is “Appendix,” centered at the top of the page. If there is more than one, the headings are “Appendix A,” “Appendix B,” and so on, and they appear in the order they were first mentioned in the text of the report.

After any appendixes come tables and then figures. Tables and figures are both used to present results. Figures can also be used to illustrate theories (e.g., in the form of a flowchart), display stimuli, outline procedures, and present many other kinds of information. Each table and figure appears on its own page. Tables are numbered in the order that they are first mentioned in the text (“Table 1,” “Table 2,” and so on). Figures are numbered the same way (“Figure 1,” “Figure 2,” and so on). A brief explanatory title, with the important words capitalized, appears above each table. Each figure is given a brief explanatory caption, where (aside from proper nouns or names) only the first word of each sentence is capitalized. More details on preparing APA-style tables and figures are presented later in the book.

Sample APA-Style Research Report

Figure 11.2 "Title Page and Abstract", Figure 11.3 "Introduction and Method", Figure 11.4 "Results and Discussion", and Figure 11.5 "References and Figure" show some sample pages from an APA-style empirical research report originally written by undergraduate student Tomoe Suyama at California State University, Fresno. The main purpose of these figures is to illustrate the basic organization and formatting of an APA-style empirical research report, although many high-level and low-level style conventions can be seen here too.

Figure 11.2 Title Page and Abstract


This student paper does not include the author note on the title page. The abstract appears on its own page.

Figure 11.3 Introduction and Method


Note that the introduction is headed with the full title, and the method section begins immediately after the introduction ends.

Figure 11.4 Results and Discussion


The discussion begins immediately after the results section ends.

Figure 11.5 References and Figure


If there were appendixes or tables, they would come before the figure.

Key Takeaways

  • An APA-style empirical research report consists of several standard sections. The main ones are the abstract, introduction, method, results, discussion, and references.
  • The introduction consists of an opening that presents the research question, a literature review that describes previous research on the topic, and a closing that restates the research question and comments on the method. The literature review constitutes an argument for why the current study is worth doing.
  • The method section describes the method in enough detail that another researcher could replicate the study. At a minimum, it consists of a participants subsection and a design and procedure subsection.
  • The results section describes the results in an organized fashion. Each primary result is presented in terms of statistical results but also explained in words.
  • The discussion typically summarizes the study, discusses theoretical and practical implications and limitations of the study, and offers suggestions for further research.
Exercises

  • Practice: Look through an issue of a general interest professional journal (e.g., Psychological Science). Read the opening of the first five articles and rate the effectiveness of each one from 1 (very ineffective) to 5 (very effective). Write a sentence or two explaining each rating.
  • Practice: Find a recent article in a professional journal and identify where the opening, literature review, and closing of the introduction begin and end.
  • Practice: Find a recent article in a professional journal and highlight in a different color each of the following elements in the discussion: summary, theoretical implications, practical implications, limitations, and suggestions for future research.
