
What to Expect in Peer Review

This page covers the ASHA Journals peer review model, the steps and timeline of peer review, and what to expect before and after a decision on your manuscript.

Manuscripts submitted to the ASHA journals go through an editorial board peer review model. In this model, an editor-in-chief (EIC) is responsible for assigning each manuscript to an editor who has the appropriate content expertise. The editor typically assigns two to three reviewers, drawn from the editorial board members (EBMs) and ad hoc reviewers in any combination. Reviewers submit lists of strengths and weaknesses in a number of categories appropriate for the type of manuscript, along with any brief additional comments. Upon receipt of the reviews, the editor is not expected to provide additional detailed comments; instead, in a decision letter, the editor helps the author identify the most important changes, particularly when the reviewers disagree. The editor is also free to recruit additional reviews, such as a specialized statistical review, as needed.

This is a change from the previous peer review model, in which two to three reviews were submitted to an associate editor, who made a decision recommendation to the editor, and the editor then rendered the decision. In that model, review comments were also not structured.

Review Policies

ASHA journals perform single-anonymized reviews, which means that the reviewer knows the author’s name, but the authors do not know the reviewers’ identities unless reviewers choose to include their names in the review. On rare occasions, authors do request a double-anonymized review (please see the Anonymized Review Policies page for additional information). Our standard review process is outlined below.

Original Submission Review

Using the ASHA Journals Editorial Manager system, you will upload a properly formatted manuscript and answer a series of disclosure questions (see our guide on Manuscript Submission for more information). The manuscript will then be assigned by the editor-in-chief to an editor with the right subject matter expertise. The editor will typically then assign the manuscript to at least two editorial board members (EBMs) or ad hoc reviewers, or some combination thereof, for reviews. The EBMs or ad hoc reviewers submit comments using a structured peer review template, along with a decision recommendation, to the editor. The editor then reads the reviews in depth, considers the recommendations, and renders a decision.

Author Revision and Submission

If your manuscript requires a revision, as is most typically the case, then you will be given up to 6 weeks to revise and resubmit the manuscript.

Revised Submission Review

After receiving your revised manuscript, the journal editor will typically then assign at least two EBMs or ad hoc reviewers, or some combination thereof, to review the revised version of the manuscript. The reviewers will submit comments and recommendations, and then the editor will render a revision decision.

Second Author Revision and Submission

If your manuscript requires a second revision for acceptance, you will be given up to 3 weeks to submit a revised manuscript.

Overall Estimated Time From Submission to Decision

Assuming two rounds of review (one for the original submission and one for the revised manuscript), the time from submission to final decision in the editorial board peer review model can be as little as approximately 4 months. The overall time, however, depends largely on the number of rounds of review and how long authors take to complete revisions. Authors can help keep peer review moving swiftly by following the submission instructions and submitting revisions that thoroughly address reviewer comments.

If Accepted

What's next.

If your article is accepted, it will begin the journal production process. During production, you will be asked to answer author queries and make some basic revisions, but most of the work at this stage is handled by the ASHA Journals production staff.

If Rejected

There are a number of reasons a manuscript may be rejected for publication in the ASHA Journals, ranging from the manuscript not fitting the scope and mission of the journal to which it was submitted to concerns about its overall quality.

Authors may disagree with the decision of an ASHA Journals editor and may wish to appeal that decision.

All appeals concerning decisions of an editor are first directed to the editor. In many cases, author-editor disagreements can be resolved directly through discussions between these parties. If no resolution is achieved, the author may file an appeal with the chair of the Journals Board.

The Journals Board chair discusses the disagreement with both parties to determine whether the dispute involves matters of scientific or technical opinion. If the dispute solely concerns such differing opinions, the appeal is not considered further and the original editorial decision is upheld. The chair then notifies the author and editor of the decision.

If the chair concludes that the issue could be the result of personal bias and/or capriciousness in an editorial decision, the chair convenes an ad hoc Journals Board Appeals Committee, made up of two voting members of the Journals Board and the Journals Board chair. The committee is charged with determining whether the author’s appeal has merit, and its decision is determined by majority vote.

If the decision is that there is no merit to the appeal, the chair of the Journals Board notifies the editor and the author of the decision.

If the committee determines that the appeal has merit, the editor is given an opportunity to reconsider the final decision.

If the editor maintains the original decision, the chair of the Journals Board may assign a new guest editor for the manuscript. New editorial board member reviewers would then be solicited and the review process re-initiated.


Quick Facts

  • Number of journals: 5
  • Editors-in-Chief: 10
  • Editors: 59
  • Editorial Board Members (EBMs): 352
  • Time from submission to decision: approximately 4 months
  • Average acceptance rate: 52%
  • Growth in amount published since 2010: 58%


About the ASHA Journals

ASHA publishes four peer-reviewed scholarly journals and one peer-reviewed scholarly review journal pertaining to the general field of communication sciences and disorders (CSD) and to the professions of audiology and speech-language pathology. These journals are the American Journal of Audiology; American Journal of Speech-Language Pathology; Journal of Speech, Language, and Hearing Research; Language, Speech, and Hearing Services in Schools; and Perspectives of the ASHA Special Interest Groups. These journals have the collective mission of disseminating research findings, theoretical advances, and clinical knowledge in CSD.



Kevin Duff, Sid E. O'Bryant, Holly James Westervelt, Jerry J. Sweet, Cecil R. Reynolds, Wilfred G. van Gorp, Daniel Tranel, Robert J. McCaffrey, On Becoming a Peer Reviewer for a Neuropsychology Journal, Archives of Clinical Neuropsychology, Volume 24, Issue 3, May 2009, Pages 201–207, https://doi.org/10.1093/arclin/acp031


The peer-review process is an invaluable service provided by the professional community, and it provides the critical foundation for the advancement of science. However, there is remarkably little systematic guidance for individuals who wish to become part of this process. This paper, written from the perspective of reviewers and editors with varying levels of experience, provides general guidelines and advice for new reviewers in neuropsychology, as well as outlining benefits of participation in this process. It is hoped that the current information will encourage individuals at all levels to become involved in peer-reviewing for neuropsychology journals.

The peer-review process of professional journal publishing is as important to the scientific enterprise as developing reliable and valid measures, well-characterized samples, and appropriate statistical techniques. However, many professionals have limited involvement in reviewing manuscripts for scientific publication. Whether one is well grounded by scientific training or not, beginning involvement in the review of journal manuscripts typically occurs with little to no guidance. In addition, empirical evaluation of the peer-review process has in some instances revealed a disappointing level of agreement between peer reviewers (e.g., Rothwell & Martyn, 2000 ). Probable factors in producing low reviewer agreement are the general lack of direction provided to reviewers, who are often left to “figure it out” on their own, as well as a disproportionate number of reviewers who are less experienced and more junior in their careers. The current article will review some benefits of being an ad hoc reviewer for journals, and outline some points to keep in mind while conducting reviews.

There are many reasons to serve as an ad hoc reviewer for scientific journals. It may seem a truism that volunteerism is its own reward, but nevertheless the following is a list of specific benefits that can accrue to reviewers:

Staying current with the literature . By evaluating manuscripts, the reviewer is exposed to the most recent, cutting edge research in the field, even before it is published. As one reads a manuscript, one takes in concise summaries of relevant literature in the Introduction section, new procedures detailed in the Methods section, and the latest findings in the Results section. Although not all submissions will be published, reviewers are privy to data that few others have seen.

Developing your professional diversity . If your work primarily involves conducting clinical assessments or interventions, then reviewing manuscripts might provide an outlet for your “researcher side.” If you primarily conduct basic science research, then reviewing clinically relevant findings might widen the scope of your own work or viewpoint. If you primarily teach, then reviewing might expand your breadth of the basics for what you teach, as well as offer teaching opportunities for your students as co-reviewers (discussed subsequently).

Shaping the field . By commenting on the work of your peers, you can sometimes guide the focus of a particular paper. Your editorial suggestions could improve methodologies and points of view within neuropsychology.

Taking advantage of an opportunity to provide service to the scientific community . At some academic institutions, one way of demonstrating scientific productivity and involvement in your discipline is through service in peer review of manuscripts related to your areas of investigation. In fact, being asked to be a peer reviewer represents recognition of your own work, which has attracted the attention of the journal's editorial board. Frequent involvement in publishing your own work and peer reviewing manuscripts for the same journal can lead to an invitation to serve on the editorial board, which is widely considered to be a distinctive recognition of your own scientific efforts. This is true whether one works in an academic setting or in a private practice setting. Interestingly, clinical neuropsychology is one of the few healthcare specialties in which private practitioners are relatively frequently involved in peer-reviewed journal publishing.

Giving back to the field . Many neuropsychology journals are affiliated with professional organizations. For example, Archives of Clinical Neuropsychology is the official journal of the National Academy of Neuropsychology, The Clinical Neuropsychologist is the official journal of the American Academy of Clinical Neuropsychology, The Journal of the International Neuropsychological Society is the official journal of the International Neuropsychological Society, and Applied Neuropsychology is the official journal of the American College of Professional Neuropsychology. By participating in the review process, you are sharing your individual skills and professional perspective with peers in your specialty, including those who constitute a much broader readership than the membership organization, such as psychologists who are not neuropsychologists and individuals outside our discipline altogether, such as physicians.

Improving the quality of your own work . Just as reading recently published studies provides new avenues for one's own studies, reviewing manuscripts submitted for publication can provide ideas about the methods being used and research questions being investigated by peers with similar interests to one's own. Though reviewers must be careful to avoid intellectual plagiarism, there is potential for learning about new techniques that might apply to your existing line of research.

If you are interested in reviewing manuscripts for a journal, but do not know where to begin, there are many ways to become involved in the process. These fundamental ideas may require patience on your part, but will improve your chances of being given an opportunity.

Directly contact journal editors . A fact known all too well to journal editors is that it can be quite difficult at times to find suitable reviewers who will be able to provide a timely review of a manuscript, in part because the most skilled and experienced reviewers are often the busiest. Editors therefore need a long list of potential reviewers, and often are looking for names to add to that list. This remains true, even in the age of electronic databases to which editors have access. A brief email stating your name, basic qualifications, and areas of interest is usually sufficient. Most editors will be very receptive to anyone expressing interest in reviewing articles and excited at the prospect of another resource!

Talk to your peers . It is likely that some of your colleagues are already involved in the review process and can provide you with contact information. Alternatively, your peers can suggest your name as a potential reviewer to the editor. Finally, you could offer to co-review a paper with a peer who is already an ad hoc reviewer. This could give you a chance to “prove yourself” to the editor. If you want to co-review a manuscript, then you should contact the editor about this in advance, as submitted manuscripts are confidential outside the review process and a co-reviewer would usually receive an acknowledgement in one of the issues of the journal.

Publish . If you produce research, then eventually someone will “cold call” you and ask for your opinion on a manuscript. However, this latter method may take some time and is not very efficient.

If you are interested in reviewing, have contacted an editor, and have received your first manuscript for review, how should you approach the assignment? As with writing a paper or grant, evaluating a patient, or teaching a course, there is no right or wrong way to review a paper, but there are some basic guidelines one might follow.

Before thinking of specific review suggestions, we suggest first taking a mental step backward to reflect on the bigger picture of the task at hand. It may help to consider that a colleague has taken a great deal of time and energy to conceptualize a problem, identified data that can address the problem, gathered and analyzed the data, and finally written down all the relevant information related to the specific project at hand. Whether all of these steps were effective and led to a publishable paper or not, each of the individuals who is willing to undertake this research activity does so with the belief that new and important information is being gathered for sharing with peers. It is therefore the general goal of a peer reviewer to be helpful to the author(s), even if the review identifies numerous issues that may ultimately prevent publication. Ideally, there is a spirit or tone set in the reviewing process, which is devoid of bias, competition or envy, arbitrariness, and harshness. Stated more positively, as it was first established by the Royal Society of England in the seventeenth century, effective peer review as a component of scientific journal publishing has been conceptualized as a professional consultation that is delivered in a respectful and timely manner on an intellectual matter, with the process being confidential (cf. Moore, 2005 ). The peer reviewer is serving as a consultant to the author(s) and with the editor, and is essential to the integrity of the scientific publishing process in that an editor cannot be a content expert in all fields ( Moore, 2005 ). Therefore, the duties of a peer reviewer are not to be viewed lightly.

Some reviewers prefer to start with a brief introductory paragraph that summarizes the article and its major findings. This section might also highlight any broad strengths and weaknesses of the manuscript. Although not required, this approach has the advantage of assuring the editor and author that you read the article and understand its main points. Alternatively, other reviewers prefer to skip this step and get right to the critical comments. The key throughout is to evaluate the manuscript, and not simply annotate it. The authors, in particular, already know what they did and what their study was about, and long detailed summaries are not helpful. The reviewer's comments should be evaluative.

Reviews can be structured based on the sections of a typical manuscript (e.g., Introduction, Methods, Results, Discussion), with the relevant comments organized under those section headings. Alternatively, reviews can lay out comments in order of importance (e.g., most important to least important), perhaps with headings that identify “major” and “minor” points for the authors to consider. Some form of structure, even if only presenting comments in the order of the manuscript pages, is better than presenting comments in a haphazard manner. More structure usually allows the editor and author to follow your reasoning and act accordingly (e.g., make a reasoned decision on the manuscript).

Below are some suggestions to consider when reviewing specific sections of a manuscript.

Does the introduction begin generally, and then become more focused? Bem (1987) noted that an empirical article often has an hourglass shape, starting broadly, but narrowing its scope as the Introduction moves to the Methods section. The Results section is also narrow, but gradually widens its scope throughout the Discussion section.

Is the relevant and most recent literature cited and reviewed? Although classic studies in neuropsychology might set the stage, more contemporary studies usually provide more relevant information. For example, a 1975 article using the WAIS might have been relevant in a prior era, but a 2005 article using the WAIS-III is more relevant to the present reader.

Are all critical topics and/or questions adequately covered in this section? Do gaps exist in the authors' line of reasoning? Does this section “flow”?

Is the length appropriate? Is it too long and does it cover unnecessary background information? Many inexperienced authors write lengthy, dissertation style introductions, which take up valuable journal space and thereby may unwittingly put the article's acceptance into some jeopardy. A reviewer can help the author by recommending the introduction be tightened and condensed. Conversely, some Introductions can be too brief, leaving an uninformed reader without any context.

Is a specific purpose of the study stated?

Are specific hypotheses clearly stated? And are the hypotheses properly motivated by the background provided in the Introduction?

Methods sections are generally quite specific and detailed, and it is difficult to comment on all the issues and possible problems that a reviewer might encounter. This is where identifying key issues is quite important. Some guidelines for reviewing this section of the paper:

Is the sample appropriately/adequately described? To utilize research findings, readers need to know who the sample was, what the recruitment procedures were, and how participants were assigned to groups. Some information about demographic characteristics should be reported (e.g., means, standard deviations, ranges). Age, education, and sex might be most relevant in some studies, whereas Glasgow Coma Scale score and length of loss of consciousness might be most relevant in others. Given the increasing diversity of the population, information regarding race/ethnicity and primary language is oftentimes necessary.

Was approval received from the local Institutional Review Board? Was informed consent obtained from each participant?

What were the methods of data collection? Are methods adequately described so that the study could be replicated? For example, it is more informative to indicate that “age-corrected standard scores from the test manual of the California Verbal Learning Test – II were used” than “the California Verbal Learning Test – II was used.”

If an intervention was used, is it also adequately described so that readers can understand what was done? Sometimes a citation to another published article is sufficient; sometimes it is not.

What statistical analyses were utilized? Are the dependent variables clearly stated? Are the statistics appropriate for the questions? Are relevant covariates considered? Would non-parametric tests be more appropriate? Are there sample size/power concerns (e.g., too many analyses and/or no alpha correction)? Would other analytic techniques better answer the same question? Should the author(s) drop/add any analyses?

Are the necessary results presented? Are the relevant statistical values and degrees of freedom reported, along with p -values? If appropriate, are effect sizes reported?

Is the presentation of the results understandable to someone who does not do research in this area? Within the body of the text, subheadings might make results easier to read and understand. Within tables, clear column and row headings can increase the value of the table.

Are figures and tables appropriately titled? Do the authors use notes that are clear and informative? Are all abbreviations used within figures and tables defined in notes, such that readers will not have to search back through text to grasp their meaning?

Do the authors include analyses that were not discussed in the Introduction and Method sections? Conversely, are analyses missing that were mentioned earlier in the paper?

Do the authors commit some of the “deadly sins” outlined by Millis (2003)? In this paper, Millis highlights some common statistical errors made in neuropsychology manuscripts (e.g., multiple comparisons, low power, ignoring missing data) and provides guidance for correcting these errors.

Does the manuscript need an expert statistical review? Some statistical analyses can be quite challenging for the average reviewer, and it is appropriate to let the editor know that you do not have the expertise to fully comment on results. Consider that if these results are too complex for you as the reviewer, then they may also be too complex for the typical journal reader, in which case the authors must do a better job of explaining their analyses.
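As a concrete illustration of the alpha-correction point raised above, the short sketch below (plain Python, with invented p-values) shows how a reviewer might informally check whether a set of reported comparisons survives a Holm–Bonferroni correction. It is an illustrative aid only, not a substitute for an expert statistical review.

```python
# Hypothetical illustration only: the p-values below are invented.
def holm_bonferroni(p_values, alpha=0.05):
    """Return (p_value, significant_after_correction) pairs in the original order."""
    order = sorted(range(len(p_values)), key=lambda i: p_values[i])
    m = len(p_values)
    significant = [False] * m
    for rank, i in enumerate(order):
        # Holm's step-down threshold for the (rank + 1)-th smallest p-value
        if p_values[i] <= alpha / (m - rank):
            significant[i] = True
        else:
            break  # once one test fails, all larger p-values fail too
    return list(zip(p_values, significant))

reported = [0.001, 0.012, 0.020, 0.049, 0.300]  # invented values
for p, sig in holm_bonferroni(reported):
    print(f"p = {p:.3f}: {'retained' if sig else 'not retained'} after Holm correction")
```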

In the Discussion section, the results are often summarized, integrated, and put into context with the existing literature.

Although the primary sections of the manuscript associated with the preceding points are most important to evaluate when reviewing a manuscript, reviewers are also expected to comment on other components of the manuscript, if they are relevant to the question of publishability. The following are common points on which the journal editor will appreciate guidance from a reviewer.

Are the conclusions supported by the findings? Some authors tend to over-interpret their findings. In a manner of speaking, data are specific to that single study, and they may have limited relevance outside of that single study. As such, data should not be stretched to topics or levels of meaning for which they are not relevant. Reviewers should offer constructive suggestions to the authors for correcting over-interpretation of data.

Are the findings generalized to the appropriate populations and/or settings? For example, a finding that symptom validity testing incorrectly classified children with Learning Disorders should not necessarily be extended to adults with Learning Disorders or even children with other developmental disorders.

Are the results put into context with the existing body of literature? If they stand apart from other studies, do the authors discuss why this might be?

Do the authors simply restate the Results section? Although authors often highlight certain findings in this section, they should move beyond a restating of the findings and actually discuss and integrate their findings.

Are new results and/or data presented that have not been discussed elsewhere? Typically, results should be presented in the Results section, not the Discussion section.

Do the authors provide a conceptual framework and/or presentation for how and when to use the findings? Finding a statistically significant result does not necessarily answer the “so what?” question. What is gained by the completion and reporting of this study?

Are the appropriate caveats and limitations to the article mentioned and discussed?

Are future research directions provided?

Has the manuscript been carefully prepared in the appropriate style of the journal (e.g., APA writing style)?

Does the theme/content of the manuscript fit with the aims of the journal?

Does the manuscript address something new and add something to the existing literature?

How adequate is the general writing style (e.g., grammar, punctuation, and readability)? When recommendations are made for correcting or improving the writing, it is best to be very specific. That is, rather than suggest that the authors improve their writing and remove errors, it is much better to identify the page and line in which the problems can be found, and to suggest the solution.

Is there an Acknowledgment section? Does it mention the funding agency? What was the role of the funding agency in the project? Were possible conflicts of interest of the authors noted to the reader?

Is the Reference section correctly formatted for the specific journal? Are all references mentioned in text provided in the reference section and vice versa? Are there errors in specific references? If references are recommended by a reviewer, it is best to provide the author with the complete citation, to avoid confusion. If at all possible, avoid suggesting that the authors cite your own work; this is viewed as self-promotion, unless the citation in question is truly seminal and the omission would substantially weaken the instructive nature of the article for the reader.

Now that you have completed your initial review, you might wonder how to hone your new craft. Below are some suggestions for becoming a more refined ad hoc reviewer.

Peer reviewers, the bedrock of medical journal objectivity, require more training and experience. One simple solution might be for editors to provide them with a short review of best practices along with the checklist of core elements to consider. (Ray, 2002, p. 772)

Provide critiques in a timely manner. Try to adhere to deadlines. If you are late on a review, it slows down the process and prevents the editors from achieving a timely decision on the manuscript, and the authors from receiving timely feedback. If you will be late with the review, contact the editor to give an estimate of when it will be completed.

Reviewing takes practice. Similar to writing papers, interviewing patients, and preparing lectures, do not expect that your initial review will be your best work. However, it is likely that reviewing more papers will make you better at it.

Readily ask for advice/guidance. You do not need to be an expert on every topic to be a good reviewer. Know your areas of expertise within neuropsychology (e.g., specific tests, specific disorders) and research (e.g., study design, statistics). When the manuscript exceeds those areas, do not be afraid to consult with others. You could ask colleagues for their thoughts (without divulging the entire manuscript). You could refer to the literature to see how others have addressed similar problems. You can let the editor know that certain aspects of the manuscript fall outside your knowledge base, and, in some instances, you should decline the solicitation to be a reviewer if the topic is not sufficiently within your knowledge and expertise. The boundary on when to decline a topic based on lack of relevant knowledge is perhaps best illuminated by asking yourself the question, “If I was the author, would I want a reviewer who has only my degree of knowledge to pass judgment on my hard work?”

Learn from the other reviewers. Once you submit your review, most journals will allow you to see the other completed reviews. Use this as an opportunity to see whether your specific comments and ultimate recommendation for the disposition of the paper align with the reviews of potentially more experienced reviewers, and learn from the issues raised by other reviewers that you may have missed.

Cover the entire manuscript. Try to find strengths and weaknesses in all components of the submission. Resist the urge to stop reviewing the manuscript because you found a “fatal flaw” in the Introduction or Methods sections. Your feedback to the authors may lead the editor to reject the manuscript, but the feedback itself can still be very valuable to the authors in their future research endeavors. It is easy to be critical; it is better but more challenging to help the authors improve their paper by adopting a constructive tone to your critique.

Provide helpful comments to the author(s). Reviewers can help shape manuscripts (and ultimately the field). By providing constructive and concrete comments, a reviewer can provide direction that assists the author in building a better manuscript. Unclear suggestions (e.g., “statistical analyses are wrong”) provide no real instruction to the authors when revising their work. In a survey of corresponding authors of a psychology journal, Nickerson (2005) found authors want specific information on problems in their manuscripts and concrete suggestions to improve those problems.

At all costs, avoid tirades and unnecessary negativity in reviews. There is no place in a review for personal attacks or vendettas of any sort. If you disagree with the author, then state so in a constructive manner. Try to help the author.

Avoid statements about your recommendations for acceptance or rejection of the manuscript. Most editors will prefer that the reviewers' comments to the authors focus on the strengths and weaknesses of the submission, rather than indicating whether the reviewer thinks the paper should be published. If one reviewer is recommending publication and others are not, this can be confusing and frustrating to the authors. You can indicate your enthusiasm/degree of concern about the work, but reserve comments such as “I think this paper should definitely be published” for the section of confidential communication to the editor.

Unpublished manuscripts are confidential. Until a manuscript is accepted and “in press,” you cannot reference it. Refrain from contacting authors or letting them know in any manner that you reviewed their work. Do not share the findings with your peers. A corollary of this point is that you, as the reviewer, also have anonymity. Editorial staff will not divulge who reviewed specific manuscripts.

Signing reviews remains a controversial practice in the review process. In favor of this practice, it makes the process more transparent (e.g., authors know how specific individuals feel about their work). This may carry more weight in the revision process, especially if the reviewers are more senior members of the field. Conversely, confidentiality is removed, which may affect critiques. Reviewers should check with journal editors about their preferences for signing reviews.

The authors of this article hold editorial positions at several neuropsychology journals, including Archives of Clinical Neuropsychology (KD, HW, RJM), Applied Neuropsychology (CRR), Journal of Clinical and Experimental Neuropsychology (WGvG, DT), and The Clinical Neuropsychologist (JJS).

We would like to gratefully acknowledge the contributions of the Guest Action Editor, Arthur MacNeill Horton, Jr., Ed. ABPP, ABPN, and the anonymous reviewers of this manuscript.


A comprehensive review on vehicular ad-hoc networks routing protocols for urban and highway scenarios, research gaps and future enhancements

Ishita Seth, Kalpna Guleria & Surya Narayan Panda

Published: 18 April 2024

Vehicular Ad-hoc Networks (VANETs) have received extensive consideration from the industry and the research community because of their expanding emphasis on constructing Intelligent Transportation Systems (ITS) to enhance road safety. ITS is a collection of technologies and applications that aim to improve transportation safety and mobility while lowering the number of accidents. In VANET, routing protocols play a significant role in enhancing communication safety for the transportation system. The high mobility of nodes in VANET and inconsistent network coverage in different areas make routing a challenging task. As a result, ensuring that a VANET routing protocol achieves a high packet delivery ratio (PDR) and low latency is of utmost necessity. Due to the high dynamicity of the VANET environment, position-based routing protocols are paramount for VANET communication. VANET is subjected to frequent network disconnection due to the varied speeds of moving vehicles. Managing and controlling network connections among V2V and V2I is the most critical issue in VANET communication. Therefore, reliable routing protocols that can adapt to frequent network failures and select alternative paths are still an area to be explored further. Most VANET routing protocols follow a greedy forwarding approach; once a local maximum is reached, packets start dropping, resulting in a lower packet delivery ratio. Therefore, low PDR remains an issue to be resolved in VANET routing protocols. This paper investigates recent position-based routing protocols proposed for VANET communication in urban and highway scenarios. It also elaborates on topology-based routing, which was initially used in VANET, and its research gaps, which are the major reason for the advent of the position-based routing techniques proposed for VANET communication by various researchers. It provides an in-depth comparison of different routing protocols based on their performance metrics and communication strategies. The paper highlights various application areas of VANET, research challenges encountered, and possible solutions. Further, a summary and discussion on topology-based and position-based routing protocols mark the strengths, limitations, application areas, and future enhancements in this domain.
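The local-maximum problem mentioned in the abstract can be illustrated with a short sketch of greedy position-based forwarding. This is a generic, simplified illustration rather than any specific protocol from the survey; the coordinates, radio range, and function names are invented for the example.

```python
import math

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_next_hop(current, neighbors, destination, radio_range=250.0):
    """Pick the in-range neighbor that is closest to the destination.

    Returns None when no neighbor is closer to the destination than the
    current node: the packet is at a local maximum, and under pure greedy
    forwarding it would be dropped, lowering the packet delivery ratio.
    """
    best, best_dist = None, distance(current, destination)
    for pos in neighbors:
        if distance(current, pos) <= radio_range and distance(pos, destination) < best_dist:
            best, best_dist = pos, distance(pos, destination)
    return best

# Invented positions (metres) purely for illustration.
current = (0.0, 0.0)
destination = (1000.0, 0.0)
neighbors = [(150.0, 200.0), (-100.0, 50.0), (120.0, -30.0)]

hop = greedy_next_hop(current, neighbors, destination)
print("next hop:", hop if hop else "none (local maximum) - pure greedy forwarding would drop the packet")
```

Recovery strategies such as the perimeter (face) mode in GPSR exist precisely to handle the case where no greedy next hop is available, rather than dropping the packet.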


Data availability

No datasets were generated or analysed during the current study.


Funding

The authors did not receive support from any organization for the submitted work.

Author information

Authors and Affiliations

Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, 140401, Punjab, India

Ishita Seth, Kalpna Guleria & Surya Narayan Panda


Contributions

Conceptualization: Seth, I., Guleria, K., and Panda, S.N.; Methodology: Guleria, K., and Seth, I.; Analysis: Seth, I., and Guleria, K.; Writing – original draft: Seth, I., and Guleria, K.; Review and editing: Guleria, K., Seth, I., and Panda, S.N.

Corresponding author

Correspondence to Kalpna Guleria.

Ethics declarations

Ethical statement

The authors declare that the research presented in this paper does not require ethical approval from governmental or non-governmental organizations.

Human and animal rights

This article does not contain any studies with human participants or animals performed by any of the authors.

Competing interests

The authors declare no competing interests.



About this article

Seth, I., Guleria, K. & Panda, S.N. A comprehensive review on vehicular ad-hoc networks routing protocols for urban and highway scenarios, research gaps and future enhancements. Peer-to-Peer Netw. Appl. (2024). https://doi.org/10.1007/s12083-024-01683-1

Received: 20 November 2023
Accepted: 04 March 2024
Published: 18 April 2024


Keywords

  • Vehicular ad-hoc networks
  • Position-based protocols
  • Topology-based protocols
  • Urban scenarios
  • Highway scenarios



Best Practice Recommendations: User Acceptance Testing for Systems Designed to Collect Clinical Outcome Assessment Data Electronically

Sarah Gordon

1 PPD, Wilmington, NC USA

Jennifer Crager

2 PPD, Lubbock, TX USA

Cindy Howry

3 assisTek, Austin, TX USA

Alexandra I. Barsdorf

4 Clinical Outcomes Solutions, Chicago, IL USA

5 Janssen Research & Development, Raritan, NJ USA

Mabel Crescioni

6 Hemophilia Federation of America, Washington, DC, USA

7 Clinical Ink, Winston-Salem, NC USA

Patricia Delong

8 Janssen Global Services, LLC, Raritan, NJ USA

Christian Knaus

9 Science 37, Los Angeles, CA USA

David S. Reasner

10 Imbria Pharmaceuticals, Boston, MA USA

Susan Vallow

11 Novartis Oncology, East Hanover, NJ USA

Katherine Zarzar

12 Genentech, Inc., A Member of the Roche Group, South San Francisco, CA USA

Sonya Eremenco

13 Critical Path Institute, Tucson, AZ USA

Implementing clinical outcome assessments electronically in clinical studies requires the sponsor and electronic clinical outcome assessment (eCOA) provider to work closely together to implement study-specific requirements and ensure consensus-defined best practices are followed. One of the most important steps is for sponsors to conduct user acceptance testing (UAT) using an eCOA system developed by the eCOA provider. UAT provides the clinical study team including sponsor or designee an opportunity to evaluate actual software performance and ensure that the sponsor’s intended requirements were communicated clearly and accurately translated into the system design, and that the system conforms to a sponsor-approved requirements document based on the study protocol. The components of an eCOA system, such as the study-specific application, customization features, study portal, and custom data transfers should be tested during UAT. While the provider will perform their own system validation, the sponsor or designee should also perform their due diligence by conducting UAT. A clear UAT plan including the necessary documentation may be requested by regulatory authorities depending on the country. This paper provides the electronic patient-reported outcome (ePRO) Consortium’s and patient-reported outcome (PRO) Consortium’s best practice recommendations for clinical study sponsors or their designee for conducting UAT with support from eCOA providers to ensure data quality and enhance operational efficiency of the eCOA system. Following these best practice recommendations and completing UAT in its entirety will support a high quality eCOA system and ensure more reliable and complete data are collected, which are essential to the success of the study.

Introduction

The collection of clinical outcome assessments electronically in clinical studies involves a process that requires clinical study sponsors and electronic clinical outcome assessment (eCOA) providers to work closely together to implement study-specific requirements, incorporate best practices, and ensure successful data collection to generate evidence for regulators and other stakeholders including payers and health technology assessment bodies. There are multiple steps in the system development process (Fig.  1 ), most of which have been discussed in the literature [ 1 , 2 ] and regulatory guidance [ 3 – 5 ]. However, one of the most important steps in this process, user acceptance testing (UAT), which aims to ensure that an electronic system functions according to agreed-upon requirements (e.g., business requirements document based on the study protocol), deserves increased attention. Therefore, Critical Path Institute’s electronic patient-reported outcome (ePRO) Consortium and patient-reported outcome (PRO) Consortium have developed UAT best practice recommendations for clinical study sponsors or their designee for conducting UAT with support from eCOA providers to ensure data quality and enhance operational efficiency of the eCOA system. Utilizing these best practices should improve the reliability or precision of clinical outcome assessment (COA) data collected electronically in clinical studies to support product registration.

Fig. 1 Typical eCOA implementation process

The United States Food and Drug Administration’s (FDA’s) “General Principles of Software Validation; Final Guidance for Industry and FDA Staff” outlines regulatory expectations for software validation [ 3 ]. This guidance states that terms such as beta test, site validation, user acceptance test, installation verification, and installation testing have all been used to describe user site testing which encompasses any other testing that takes place outside of the developer’s controlled environment. For purposes of this paper, the term “UAT” will be referenced and “user” will refer to sponsor staff (or designee) who serve as substitutes to trial participants for the participant-facing components of the eCOA system. The FDA general principles go on to say that “User site testing should follow a pre-defined written plan with a formal summary of testing and a record of formal acceptance. Documented evidence of all testing procedures, test input data, and test results should be retained” [ 3 , p. 27]. These statements in the guidance indicate that a user testing process itself as well as documentation are both best practices in software development as well as regulatory expectations.

In 2013, the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) ePRO Systems Validation Task Force defined UAT as “the process by which the clinical trial team determines whether the system meets expectations and performs according to the system requirements documentation” [ 2 , p. 486]. In this same report, the task force also indicated that UAT should not be “a complete revalidation effort conducted by the sponsoring clinical trial team” [ 2 , p. 486] but, rather, a “focused, risk-based approach to testing that allows the clinical trial team to determine whether the system complies with the key system requirements (which ultimately reflect the protocol)” [ 2 , p. 486]. Because differentiating between the specific activities recommended for UAT and those activities conducted during system validation can be confusing, these best practice recommendations were developed to clarify those activities and considerations that should be accounted for during UAT by the sponsor or designee. A separate process called usability testing involves participants and evaluates their ability to use the system as intended for the purposes of the study, which is outside the scope of this paper. See Coons et al. [ 1 ] and Eremenco et al. [ 6 ] for more information on usability testing, and FDA’s Discussion Document for Patient-Focused Drug Development Public Workshop on Guidance 3 [ 7 ], which discusses both usability testing and UAT.

The concept of UAT comes from the software development lifecycle (SDLC) and is intended to test how the system would perform in circumstances similar to those in which the system will eventually be used. In clinical studies where electronic systems are being used to collect COA data, UAT provides the clinical study team, including sponsor and/or contract research organization (CRO) representatives, an opportunity to evaluate actual system performance and ensure that the sponsor’s intended requirements were communicated clearly and accurately translated into the system design, and that the system conforms to a sponsor-approved requirements document.

System requirements should be thoroughly tested by the eCOA provider prior to UAT in conformance with the SDLC process implemented by the eCOA provider. The eCOA provider project manager will notify the sponsor and/or designee when the vendor testing process is completed so that UAT may proceed. This step followed by the eCOA provider allows the focus of UAT to remain on a common understanding of the requirements with the actual system in hand, as well as identifying and correcting issues proactively that study team, site, and study participant users might experience once the system is deployed. UAT takes place toward the end of the eCOA implementation process (Fig.  1 ), occurring after the study-specific system requirements have been documented by the eCOA provider and approved by the study sponsor, and the system is built and tested by the eCOA provider’s in-house testing team. UAT must be completed prior to launching the technology for the study.

Components of an eCOA System

eCOA systems are built differently by each eCOA provider but typically have the same core components. Table 1 provides the suggested guidelines for testing these components in terms of when formal testing using UAT scripts is recommended as a best practice as opposed to cases where ad hoc testing may be sufficient. Details on the development of UAT scripts are provided in the UAT Documentation section of this paper. eCOA systems can be deployed on provisioned devices. If the study is utilizing a provisioned device model, the eCOA provider will distribute devices to each tester. eCOA systems can also contain components that are application-based such as those developed for Bring Your Own Device (BYOD) studies, where a variety of devices (including different makes and models) should be included in the formal UAT to ensure consistency between device types. If a study is utilizing a BYOD setup, the eCOA provider is required to provide the testers with minimum operating system and device requirements (e.g., Android/iOS operating system versions, internet browser, screen size). If feasible at the time of the eCOA UAT, testing of any integrated devices (e.g., glucometers, sensor patches) or systems (e.g., IRT, EDC) should also be included within the component testing of UAT. For purposes of this paper, best practices for testing integrated devices or systems will not be covered.

eCOA system components and testing guideline

eCOA Hosting Environments

A hosting environment is the physical server environment in which the eCOA platform resides. eCOA providers should have multiple hosting environments to support a proper setup. Typically, all development of an eCOA system is done within a Development (or Dev) environment. In the Dev environment, the eCOA provider builds the system per the study requirements and can easily make changes as needed. The Dev environment is sometimes referred to as a sandbox, as the eCOA provider is able to modify the design without impact to test or live study data.

Once the development of the software application is completed, system/integration testing of the software application is performed by the eCOA provider in a Test environment. After this process is completed by the eCOA provider, UAT should be performed by the sponsor or designee who is provided access to the software application in a separate UAT environment hosted by the eCOA provider.

Once UAT has been completed successfully, with no outstanding issues, and all parties agree that the system is acceptable for study use, the study configuration is moved to the Production environment. The Production environment will collect only live study data. UAT should not be performed in a Production environment under any circumstances, as UAT data could end up in a live system database. In the event that the study requirements change (e.g., due to a protocol amendment) once the system is live, any post-production changes must be made in the Development environment and subsequently tested in the Test environment by the eCOA provider and UAT environment by the sponsor or designee before moving the modified study configuration to the Production environment.
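The promotion path described above (Development to Test to UAT to Production, with post-production changes re-entering at Development) can be summarized in a short sketch. The code below is illustrative only and does not reflect any particular eCOA provider's tooling; the environment names simply mirror the ones used in this section.

```python
# Minimal sketch of the environment flow described above. Environment names mirror
# this section; nothing here reflects a specific eCOA provider's actual tooling.
ENVIRONMENTS = ["Development", "Test", "UAT", "Production"]

def next_environment(current: str) -> str:
    """Return the environment a study configuration moves to next."""
    if current == "Production":
        # Post-production changes (e.g., protocol amendments) re-enter at Development
        return "Development"
    return ENVIRONMENTS[ENVIRONMENTS.index(current) + 1]

def uat_allowed(environment: str) -> bool:
    # UAT should only ever be performed in the dedicated UAT environment,
    # never in Production, so that test data cannot reach the live database.
    return environment == "UAT"
```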

Roles and Responsibilities

When planning and executing UAT for an eCOA system implemented for a clinical study, there are two main expected stakeholders, which can be categorized on a high level as:

  • Sponsor or designee: the entity for whom the system is built and who funds both the build and clinical study, and who has ultimate accountability for the study overall. Note that a CRO and/or UAT vendor may be engaged to act as a designee of the sponsor to perform UAT.
  • eCOA Provider: the entity who is contracted by the sponsor or CRO to carry out the design, build, and support of the system

These primary stakeholders can delegate or outsource roles and responsibilities to any degree necessary to a third party. It is recommended that the Sponsor (or designee) performing UAT ensures all testers are fully trained in the UAT process. In addition, it is recommended that a range of study team roles be involved in UAT execution, including for example, clinical operations, site monitoring, data management, and biostatistics. It is not a best practice for the eCOA provider’s staff to conduct UAT, as it should be conducted by a separate entity to ensure it is objective. It is important to note that study participants are not included in UAT as a standard practice because of its emphasis on formally testing requirements.

Each UAT should go through the basic stages of Planning, Execution, and Follow-Up/Closeout, and all stakeholders should participate in each stage. Table 2 details the ideal level of involvement and responsibilities by stage.

Stages and stakeholder responsibilities

Table 3 outlines primary responsibilities for the tasks necessary to conduct UAT.

Task ownership matrix

UAT Conduct

A UAT timeline can vary; however, it is best to plan for at least a 2-week cycle that allows for multiple rounds of UAT, including testing as outlined in the test plan and scripts, changes, re-verification, and final approval. UAT timelines also depend on the complexity of the study design (e.g., the number of treatment arms, assessments, and the visit schedule) and on when the system build will be fully validated by the eCOA provider relative to the planned launch date, as UAT is often the rate-limiting step that must be completed before the system can be launched for the study. The UAT timeline can be extended or shortened depending on these variables and the actual number of rounds of testing needed. Regardless of the length of UAT, time for testing, changes, validation of changes by the eCOA provider test team, and re-testing by the UAT team needs to be accounted for prior to a system launch. If these steps are not carried out, the potential for issues and reduced system quality increases.

While UAT is being conducted, each tester should document findings within the test script(s) and provide all findings (issues/questions/changes) within a UAT findings log. This log can be in several different formats such as spreadsheets or an electronic UAT system. At the completion of each round of testing, findings should be collated into one log for ease of review by the sponsor and/or designee team, with duplicate issues removed. Following each round of UAT, a debrief meeting should be held to examine and discuss all findings as a team. It is important for all testers to be represented at the meeting so that each finding can be discussed and clarified as necessary. The team may prioritize UAT findings and decide on a phased implementation based on which bugs/errors must be corrected ahead of Go-Live vs. those that can be implemented in a “Post Go-Live release plan.” If this approach is taken, it is critical to get agreement between the sponsor and the eCOA provider along with a communication plan to the study team members. The Data Management Review Plan and Study Site Monitoring Plans should also be evaluated for impact.

Issues (bugs) or changes identified in the UAT findings log need to be categorized to determine their priority and relevance. Categories may include system issue, application or software bug, design change, enhancement, or script error, all of which may have different names, depending on the eCOA provider, but ultimately these categories help determine the corrective plan of action (if necessary). A system issue is a problem in the software programming that causes the system to function incorrectly, which is a critical finding and should be prioritized over all other findings for correction and re-testing. An application or software bug is an error, flaw or fault in a computer program or system that causes it to produce an incorrect or unexpected result, or to behave in unintended ways. An issue is an instance where the agreed-upon requirements were not met. Design changes are requests for changes to the system but are not error corrections while enhancements are requests for improvements to the system that arise from the UAT. Design changes and/or enhancements should be evaluated by the full team to determine whether the change would improve the performance of the system and/or user experience as well as whether time permits the change to be made within the constraints of the system launch date. Enhancements or changes to the original system design need to be reviewed carefully between the sponsor and the eCOA provider as system design changes create risk and fees may be charged if a change request is deemed out of scope or an expansion of the previously agreed scope. Script errors are mistakes in the script that may lead to erroneous results although the system actually performs correctly. Script errors should be documented and updated in the script template to ensure any future user of the script does not encounter the same problem(s).
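As a rough illustration of how a findings log and the categories above might be represented, the sketch below uses hypothetical field names and identifiers; it is not drawn from any eCOA provider's system, and real logs are often kept in spreadsheets or dedicated UAT tools as noted earlier.

```python
from dataclasses import dataclass
from enum import Enum

class FindingCategory(Enum):
    SYSTEM_ISSUE = "system issue"          # software malfunction; highest priority
    SOFTWARE_BUG = "application/software bug"
    DESIGN_CHANGE = "design change"        # requested change, not an error correction
    ENHANCEMENT = "enhancement"
    SCRIPT_ERROR = "script error"          # the script is wrong, the system is correct

@dataclass
class Finding:
    finding_id: str        # hypothetical identifier scheme
    test_script: str
    step: int
    description: str
    category: FindingCategory
    resolved: bool = False

# Collate findings from all testers into one log and surface system issues first
log = [
    Finding("F-002", "TS-01", 2, "Welcome screen wording unclear", FindingCategory.ENHANCEMENT),
    Finding("F-001", "TS-03", 5, "Diary response not saved on submit", FindingCategory.SYSTEM_ISSUE),
]
log.sort(key=lambda f: f.category is not FindingCategory.SYSTEM_ISSUE)
```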

While discussing changes resulting from UAT, the original scope of work should always be reviewed and referenced when considering implementing the change. eCOA providers should correct any programming errors found in UAT at no additional cost. If necessary design features not included in the original requirements document are identified as a result of UAT, sponsors are advised to consider the timeline and cost implications of introducing those new features at this late stage. If it is deemed the changes are required prior to launch, the sponsor may need to accept any additional costs or delays to launch, depending on the assumptions built into the original contract. Alternatively, the team may decide that although changes should be made, they are not needed for launch and can be made as a post-production change, after the system is launched into production. The UAT testers and other sponsor representatives should discuss the cost and timeline implications for any options prior to making a final decision about design changes. Involvement of the key stakeholders during the bidding and design process is an ideal way to reduce/limit design changes and expedite processes between the sponsor/CRO and the eCOA provider.

UAT Documentation

Proper documentation is imperative to ensure execution of effective testing as shown in Fig. 2 and to meet regulatory expectations. UAT documents should include a UAT test plan, test scripts, findings log, a summary of issues and resolutions (e.g., UAT Summary Report), and lastly, a UAT approval form. The eCOA provider may generate additional documentation such as instructions related to “time travel” (the mechanism by which testers can move between different dates and times by adjusting the eCOA device clock) to assist UAT.

Fig. 2 UAT documentation workflow

Standard Operating Procedures (SOPs), Working Instructions, and/or guidance documents, and performance metrics for UAT should be developed by the sponsor or designee who is managing UAT to document the requirements for the process and all necessary documentation. UAT SOPs should outline how clinical study teams determine whether the system performs in accordance with the finalized system requirements document. SOPs should define the documents required to complete the UAT, those responsible to perform testing, and when and how UAT is to be performed. Frequency of UAT is also defined in the SOP depending upon initial production releases, updates to correct issues, and/or updates requested by the sponsor. UAT documentation should be made available for audits / inspections and follow Good Clinical Practice as well as appropriate record retention (Fig.  3 ).

Fig. 3 Example of manual and electronic test script

UAT Test Plan

The UAT test plan is developed by the sponsor or designee and may contain: a purpose, a scope, definitions, references, strategy and approach, assumptions and constraints, risk assessment, UAT team roles and responsibilities, information about the test environment(s), a description of all test cases, scripts, deliverables, the UAT test summary (results), and approvals/signatures. A UAT test plan ensures all parties are aware of the scope and strategy of how requirements will be tested. It will allow the sponsor or designee to review the system per the protocol and the signed system requirements document. As such, it should be considered the first document to be created within the UAT process. The following sections should be considered when creating the Test Plan (Table 4).

UAT test plan template content

Table 5 provides several considerations for testing functionality that is common across eCOA providers. The screen interface may include different controls to navigate from one screen to the next and buttons or graphic controls to select responses to items; these elements are referred to as screen controls. In addition, the Test Plan should include the method for testing custom features for each study.

Functionality and methodology for testing

Test Scripts

Test scripts outline each step that a tester will take to test the use cases in the system. Test scripts are designed to be followed step-by-step so that the tester does not have to try to remember how he or she arrived at a given screen. If the step occurs as expected in the script, the tester indicates “pass.” If something happens when the step is carried out that is not as expected, the tester indicates “fail” and provides a reason for failure, with applicable screenshots, if necessary. UAT test scripts will be referenced in the UAT Test Plan. It is best practice that the sponsor or designee write the test scripts and not ask the eCOA provider to provision test scripts. Test scripts should be approved by the appropriate individual within the sponsor or designee prior to UAT being conducted. The approver may vary depending on sponsor UAT process and SOPs. Upon completion of the scripts, the tester should sign (or electronically sign) as well as record the date(s) of script execution.

In some cases, a tester may informally test functionality that is not detailed in the test script, which is referred to as ad hoc testing; for example, this might occur when the actual results of a test step are not the expected results, and ad hoc testing might help identify the root cause of the issue. While such ad hoc testing can be useful in identifying issues, it is considered supplemental and should not be conducted in place of following test scripts. Any issue detected in ad hoc testing should be documented and formally tested in the next round of UAT to document resolution.

Table 6 outlines the aspects that should be documented in each test script section:

Test script content

If any step in a script “fails” due to an issue with the system, device, or configuration, then the entire test case fails (see Fig. 4). If a test case fails during UAT, the test case should be completed again once the eCOA provider has confirmed that the issue has been resolved. If it is agreed between the sponsor and the eCOA provider that the issue will remain unresolved in the system, then it should be noted in the UAT summary report (results). Otherwise, UAT should not be considered finished until all test cases have been passed by a tester and all issues from the findings log addressed.

Fig. 4 Example of test script execution

If a test case fails due to a script error, retesting of the test case may not be required. The UAT team should identify whether a retest is required for a test case failure due to script error. For example, if a script contains a typographical error or is poorly written but the test case still proves and supports the scope of the test, it is acceptable to amend the script and pass the test case.
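A minimal sketch of the pass/fail bookkeeping described above is shown below. The step and case structures are hypothetical; they simply encode the rule that a single failed step fails the whole test case, which is then re-executed once the issue is resolved (unless the failure is traced to a script error).

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TestStep:
    action: str                    # what the tester does
    expected: str                  # expected result per the requirements document
    passed: Optional[bool] = None  # None = not yet executed
    failure_reason: str = ""       # completed only when the step fails

@dataclass
class TestCase:
    case_id: str
    steps: List[TestStep] = field(default_factory=list)

    def result(self) -> str:
        # One failed step fails the whole test case; it must be re-executed after
        # the eCOA provider confirms the fix, unless the failure was a script error.
        if any(s.passed is False for s in self.steps):
            return "FAIL"
        if all(s.passed for s in self.steps):
            return "PASS"
        return "INCOMPLETE"
```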

Before UAT approval, a UAT summary or report should be created by the UAT testing team (sponsor or designee) summarizing the results of testing including any known issues that will not be corrected in the system before the system is launched into production.

A UAT approval form should be signed by the sponsor or a representative from any other testing party (i.e., sponsor’s designee). UAT should not be considered completed until this form is signed.

Once UAT has been completed, all UAT documentation (e.g., UAT Test Plan, completed test cases, UAT Summary Report, and UAT approval form) should be archived for maintenance as essential study documents. As a final step for closure of UAT, the sponsor and/or designee should review the agreed-upon UAT performance metrics. The Metrics Champion Consortium (MCC) has a standard set of UAT metrics designed to assess the performance of the UAT [ 8 ]. It is recommended the sponsor (or designee) utilize the MCC metrics to support the evaluation of the UAT.

In summary, although UAT may be performed differently among eCOA providers and sponsors, the end goal is to ensure proper documentation of UAT activities. Various techniques may be used depending on the nature of the eCOA system and the study. Rigorous and complete testing will facilitate successful system deployment, while thorough documentation of UAT will meet requirements for regulatory inspection. Completing the full UAT process using these best practices will help reduce the risk that a system does not meet the expectations of the stakeholders within a study. A thorough UAT process will also minimize the risk of inaccurate or missing data due to undetected flaws in the system that could jeopardize the results of the study and product approval. Following these best practice recommendations and completing UAT in its entirety will help support a high quality eCOA system and ensure more reliable and complete data are collected, which are essential to the success of the study.

Acknowledgements

Critical Path Institute is supported by the Food and Drug Administration (FDA) of the U.S. Department of Health and Human Services (HHS) and is 54.2% funded by the FDA/HHS, totaling $13,239,950, and 45.8% funded by non-government source(s), totaling $11,196,634. The contents are those of the authors and do not necessarily represent the official views of, nor an endorsement by, FDA/HHS or the U.S. Government. For more information, please visit FDA.gov. Additional support for the Electronic Patient-Reported Outcome (ePRO) Consortium comes from membership fees paid by members of the ePRO Consortium ( https://c-path.org/programs/eproc/ ). Additional support for the Patient-Reported Outcome (PRO) Consortium comes from membership fees paid by members of the PRO Consortium ( https://c-path.org/programs/proc/ ). The authors wish to thank Ana DeAlmeida, from Genentech, Inc., A Member of the Roche Group, for her review of an early draft of the manuscript. We gratefully acknowledge Scottie Kern, Executive Director of the ePRO Consortium, for his contributions to the revisions to the manuscript.

Biographies

Alexandra I. Barsdorf:

DrPH, JD, LLM

Open Access | Published: 04 December 2021

Examining non-technical skills for ad hoc resuscitation teams: a scoping review and taxonomy of team-related concepts

  • J. Colin Evans   ORCID: orcid.org/0000-0002-0084-230X 1 ,
  • M. Blair Evans 2 ,
  • Meagan Slack 3 ,
  • Michael Peddle 1 &
  • Lorelei Lingard 4  

Scandinavian Journal of Trauma, Resuscitation and Emergency Medicine, volume 29, Article number: 167 (2021)


Non-technical skills (NTS) concepts from high-risk industries such as aviation have been enthusiastically applied to medical teams for decades. Yet it remains unclear whether—and how—these concepts impact resuscitation team performance. In the context of ad hoc teams in prehospital, emergency department, and trauma domains, even less is known about their relevance and impact.

This scoping review, guided by PRISMA-ScR and Arksey & O’Malley’s framework, included a systematic search across five databases, followed by article selection, data extraction, and synthesis. Articles were eligible for inclusion if they pertained to NTS for resuscitation teams performing in prehospital, emergency department, or trauma settings. Articles were subjected to descriptive analysis, coherence analysis, and citation network analysis.

Sixty-one articles were included. Descriptive analysis identified fourteen unique non-technical skills. Coherence analysis revealed inconsistencies in both definition and measurement of various NTS constructs, while citation network analysis suggests parallel, disconnected scholarly conversations that foster discordance in their operationalization across domains. To reconcile these inconsistencies, we offer a taxonomy of non-technical skills for ad hoc resuscitation teams.

This scoping review presents a rigorous investigation into the literature pertaining to how NTS influence optimal resuscitation performance for ad hoc prehospital, emergency department, and trauma teams. Our proposed taxonomy offers a coherent foundation and shared vocabulary for future research and education efforts. Finally, we identify important limitations regarding the traditional measurement of NTS, which constrain our understanding of how and why these concepts support optimal performance in team resuscitation.


Introduction

Although the significance of teammate collaboration for resuscitation performance is well established, the resuscitation literature has yet to achieve a consensus regarding how non-technical skills (NTS) work and which constructs are most relevant to resuscitation teams. Interpersonal constructs like leadership, teamwork, and communication, and cognitive constructs such as decision-making and situational awareness have been studied in many settings and are now included within resuscitation guidelines around the world [ 1 , 2 ]. Prehospital, emergency department, and trauma resuscitation teams perform in dynamic domains [ 3 ], experience frequent team membership turnover, and integrate different professional cultures [ 4 ], all while expressing a high degree of interdependence [ 5 ]. The composition of these teams varies by region, but what these teams hold in common is their shared tasking as specialists in resuscitation and the necessity to unite members who are available to respond at the time of the patient’s critical event on an ad hoc basis. While there is now an extensive literature examining NTS for teams performing in these settings [ 6 , 7 ], the specific impact of their ad hoc and intersectoral nature tends to be overlooked [ 8 ].

Ad hoc resuscitation teams, otherwise known as action teams [ 9 , 10 ] and variable role, variable personnel (VRVP) teams [ 11 ], are composed in response to an acute demand for a limited performance [ 4 ] with variable membership including representation from various disciplines (e.g., emergency medicine, anaesthesia, surgery) and professions (e.g., physician, nurse, respiratory therapist). An added layer of complexity specific to prehospital resuscitation teams is their intersectoral nature: team members may also represent multiple sectors of society [ 12 ] (e.g., paramedic/EMT, physician, nurse, fire, police, lay responder), some of whom may have neither healthcare training nor a primary healthcare focus. Efforts to actively translate evidence from NTS literature into training and practice for resuscitation teams may be undermined if these findings are incompatible with the teams’ ad hoc dimension. A clear understanding of how NTS constructs relate to ad hoc teams is necessary to capitalize on – and meaningfully extend – the rich literature on NTS in resuscitation.

With this scoping review we take a configurative approach [ 13 ], which seeks to interpret and understand the state of resuscitation team literature. In contrast with an aggregative approach of combining empirical observations and making summative statements (e.g., meta-analysis), we used a configurative approach to identify key themes, clarify discrepancies, and describe gaps in the scholarly conversation pertaining to NTS for resuscitation teams. Through this lens, we classify each source based on team setting and structure, and the types of NTS constructs investigated—leveraging this review to interrogate existing theory and advance novel perspectives. Our aim is to provide future researchers and educators a clearer understanding of team dynamics and a common language for NTS, particularly as they pertain to ad hoc prehospital, emergency department, and trauma resuscitation teams.

We selected a scoping review as the most appropriate methodology to map the state of the literature pertaining to NTS in ad hoc team resuscitation. This approach allows us to describe the breadth of the literary landscape and account for its contours, unrestricted by methodology, setting, or a narrowly predefined research question [ 14 ]. Our search was guided by both the PRISMA scoping review guidelines [ 15 ] and Arksey & O’Malley’s five-step framework [ 14 , 16 ] (see Table 1).

Identifying the research question and search strategy

Leveraging the research question specified above, a preliminary list of keywords was first generated by brainstorming among members of the authorship team (which includes experts in medical teams and clinical aspects of resuscitation) regarding relevant terms and concepts. This keyword list was refined by reviewing concepts described in relevant studies using a database and Google Scholar search of the terms “non-technical skills” and “resuscitation”, and by cross-referencing all terms with the taxonomy applied to surgeons by Yule et al. [ 17 ]. This taxonomy was selected because our team regarded it as the most comprehensive and representative of the non-technical constructs identified in our preliminary search. This taxonomy distinguishes constructs as interpersonal skills (communication, leadership, teamwork, briefing/planning/preparation, resource management, seeking advice and feedback, coping with pressure/stress/fatigue) and cognitive skills (situation awareness, mental readiness, decision making, adaptive strategies/flexibility, workload distribution). With this keyword list, a research librarian assisted with selecting MeSH terms, choosing databases, and designing search queries. Our team chose to emphasize medical literature and selected four databases [EMBASE (Ovid), CINAHL (EBSCO), MEDLINE (Ovid), and PsycINFO (Ovid)]. The database search combined three groups of terms: 1. activity (e.g., resuscitation, ATLS, ACLS), 2. setting (e.g., prehospital, paramedic, emergency department), and 3. non-technical skills. An example of our CINAHL search query is available in online supplemental materials.
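The article's actual CINAHL query is provided in its online supplemental materials; the sketch below is only a rough illustration of how the three term groups might be combined with Boolean operators, using example terms mentioned in this section.

```python
# Illustrative only: shows the structure of combining the three concept groups
# (activity AND setting AND non-technical skills); exact terms and syntax would
# differ by database and are not the authors' actual query.
activity = ["resuscitation", "ATLS", "ACLS"]
setting = ["prehospital", "paramedic", "emergency department"]
skills = ["non-technical skills", "communication", "leadership", "teamwork",
          "situation awareness", "decision making"]

def or_block(terms):
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = " AND ".join(or_block(group) for group in (activity, setting, skills))
print(query)
# ("resuscitation" OR "ATLS" OR "ACLS") AND ("prehospital" OR ...) AND ("non-technical skills" OR ...)
```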

Study selection as well as inclusion and exclusion criteria were informed in an iterative fashion as our familiarity with the literature evolved. We primarily sought literature that specified a focus on prehospital or emergency department teams. Teams with other descriptions (e.g., trauma teams) were included in cases where our research team determined that the teams described in the papers included emergency or prehospital members or when the clinical tasks took place in an emergency department context. Our final criteria are listed in the online supplemental materials and sought to identify manuscripts featuring original empirical studies as well as literature reviews that overtly measured or described NTS in the prehospital, emergency, and trauma settings. The inclusion of review articles in this dataset aligns with our configurative approach and speaks to our research question, which focuses on patterns regarding how relevant concepts were used in the literature.

Data collection, charting and analysis

The PRISMA flowchart illustrating the progress of the search is available in Fig. 1. We performed our initial search in June 2017 and a final update on October 12, 2021. The search identified numerous domains where resuscitation teams research was published, and therefore supplemental search strategies (i.e., hand search of selected titles; grey literature search) were not integrated into this review. The database search results were combined with articles identified in our preliminary search and uploaded into the Covidence software platform [ 16 ] for duplicate removal, title and abstract screening, and full-text eligibility screening. Two reviewers independently conducted title and abstract screening for all sources as well as subsequent screening for full-text eligibility, with discrepancies resolved through discussion.

Fig. 1 PRISMA flowchart

Our team performed three analyses of articles selected for data charting [ 14 ]: (a) traditional extraction of descriptive information, (b) coherence analysis to critically consider a study’s capacity to inform the literature, and (c) citation network analysis of articles included in this review.

Data extraction was performed by a single author using a Microsoft Excel (2018) spreadsheet. This analysis included categorizing broad themes (e.g., publication date, study type) as well as those more pertinent to our review (e.g., setting, team type, non-technical skills studied).

The heterogeneity of manuscript types and topics across the resulting articles in our dataset led us to employ ‘coherence analysis’ to explore how knowledge is being mobilized across this literature and situate each article by its influence on emerging theory. Traditional quality appraisals that entail a focus on methodological characteristics (e.g., risk of bias assessment) are ill-suited for scoping reviews. Instead, our coherence analysis aims to provide insight into how an article contributes to the scholarly conversation and uses an approach akin to those used by existing narrative reviews [ 17 ] and qualitative meta-syntheses [ 18 , 19 ].

The coherence analysis involved three binary (Yes/No) items addressing: (1) whether concepts related to non-technical skills and team aspects were defined and operationalized (e.g., operational definition in main text), (2) whether the article was situated within the broader literature by citing and appropriately characterizing relevant seminal works, and (3) whether the article presents findings that contribute to our knowledge of non-technical skills. To assess intercoder reliability, our primary analyst and another author completed coherence analysis for ten articles. Across the 30 decision points, raters agreed on 26 decisions (87% agreement). The resulting Cohen’s kappa value (κ = 0.59; CI 0.21–0.96) was acceptable.
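For readers unfamiliar with the intercoder-reliability figures reported above, the sketch below computes percent agreement and Cohen's kappa for two raters over binary codes. The rating vectors are invented so as to reproduce the 26/30 agreement; because kappa also depends on each rater's marginal Yes/No split, this made-up example yields a kappa near 0.70 rather than the 0.59 reported from the actual data.

```python
# Illustrative sketch: percent agreement and Cohen's kappa for two raters of
# binary (1 = Yes, 0 = No) coherence codes. The vectors are hypothetical.
def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal proportion of "Yes" codes
    p_yes_a = sum(rater_a) / n
    p_yes_b = sum(rater_b) / n
    expected = p_yes_a * p_yes_b + (1 - p_yes_a) * (1 - p_yes_b)
    return (observed - expected) / (1 - expected)

rater_a = [1] * 20 + [0] * 10                # 30 decision points
rater_b = [1] * 18 + [0] * 10 + [1] * 2      # disagrees on 4 of them

agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / 30   # 26/30, about 0.87
kappa = cohens_kappa(rater_a, rater_b)                           # about 0.70 here
```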

Because the coherence analysis suggested that several articles were not well situated within the broader literature, we performed a citation network analysis to illustrate and explore relationships between articles. We examined the reference lists of included articles and cross-referenced citations to other articles in our dataset, producing a social network matrix identifying which articles were cited by those published later. The network matrix was visualized using Gephi software (v. 0.9.2), whereby the resulting network was descriptively analyzed alongside indicators of each article’s position within the network.
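As a rough sketch of the data structure behind this step, the snippet below builds a small directed citation graph with the networkx library (the authors used Gephi for visualization; networkx appears here purely for illustration, and the article identifiers and edges are placeholders).

```python
# Illustrative only: a tiny directed citation graph. Node IDs and edges are
# placeholders, not the review's actual citation data.
import networkx as nx

G = nx.DiGraph()
G.add_nodes_from(["cooper1994", "hunt2007", "capella2011", "isolated_paper"])
# An edge A -> B means "article A cites article B"
G.add_edges_from([
    ("hunt2007", "cooper1994"),
    ("capella2011", "cooper1994"),
    ("capella2011", "hunt2007"),
])

# In-degree = number of citations an article received from later papers in the set
citations_received = dict(G.in_degree())             # e.g., {"cooper1994": 2, ...}
isolated = [n for n, deg in G.degree() if deg == 0]  # neither citing nor cited
```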

Descriptive summary

The search query produced 9595 independent records screened by reviewers, from which 205 articles were reviewed at the level of full text. Sixty-one articles were included in the final analysis, which spanned 1992 to 2021 with forty-six (75%) articles involving original empirical data. Among the twenty (33%) intervention-based articles, six were randomized clinical trials or controlled experiments and fourteen described interventions delivered to a single group of participants (e.g., pre/post cohort study; descriptions of feasibility). Articles reporting on interventions used one of two approaches: either examining the implementation of specific policies or processes (e.g., team debriefing) or the implementation of targeted group interventions (e.g., TeamSTEPPS). Among the twenty-six nonintervention articles (43%), nine were qualitative articles involving interviews and/or observation, thirteen were quantitative articles using data drawn from clinical/training tasks (e.g., use of electronic health records, quantitative coding of video), two performed mixed methods analysis using both qualitative and quantitative assessments, and two were survey articles assessing staff perceptions of the salience of NTS. The analyzed articles also included thirteen (21%) narrative reviews and two comment articles.

In terms of the nature of the teams being investigated in these articles, thirty-eight (62%) articles referred to multidisciplinary teams with members from more than one discipline of medicine (e.g., surgical resident and emergency medicine resident), and forty-five (74%) had an interprofessional scope (e.g., physician, nurse, paramedic, respiratory therapist). With regard to our specific focus on ad hoc teams, twenty-eight (46%) articles addressed ad hoc team resuscitation, eleven (18%) articles examined prehospital responders, and five (8%) articles explored intersectoral teams (i.e., identifying members from multiple agencies such as paramedics and police) but none explicitly labelled these teams as intersectoral.

Among the non-technical skills for which we performed descriptive analysis, interpersonal skills were represented in fifty-eight articles (95%), while thirty-five (57%) explicitly examined cognitive skills. One observation from this review involves contrasting the attention directed toward interpersonal and cognitive skills over time. As evident in Table 2 [ 4 , 6 , 8 , 10 , 20 , 21 , 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 , 36 , 37 , 38 , 39 , 40 , 41 , 42 , 43 , 44 , 45 , 46 , 47 , 48 , 49 , 50 , 51 , 52 , 53 , 54 , 55 , 56 , 57 , 58 , 59 , 60 , 61 , 62 , 63 , 64 , 65 , 66 , 67 , 68 , 69 , 70 , 71 , 72 , 73 , 74 , 75 , 76 ], interpersonal skills were the exclusive focus of analyzed articles until 2007—after which articles increasingly focused on both interpersonal and cognitive skills. This expansion of focus coincides with the 2006 release of the Yule et al. taxonomy [ 77 ]; however, our citation analysis found that only 5 (8%) articles [ 4 , 46 , 51 , 62 , 70 ] cited this taxonomy directly.

Framed around Yule et al.’s NTS taxonomy for surgeons [ 77 ] and informed by our descriptive analysis, we created the Proposed Taxonomy of Non-Technical Skills and Team Constructs for Ad Hoc Team Resuscitation (Table 3 ) [ 4 , 5 , 6 , 7 , 10 , 20 , 21 , 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 , 36 , 37 , 38 , 39 , 40 , 41 , 42 , 44 , 45 , 46 , 47 , 48 , 49 , 50 , 51 , 52 , 53 , 57 , 58 , 59 , 60 , 61 , 62 , 63 , 64 , 65 , 66 , 78 , 79 , 80 ]. This taxonomy represents our collective interpretation of definitions and applications presented in the literature integrated within this review, whereby we adapted the original Yule et al. taxonomy [ 17 ] and generated definitions regarding each construct that emerged from articles examining prehospital and ad hoc teams. As one key advance relative to the original taxonomy, the range of constructs has been broadened to include additional constructs identified in our review (i.e., debriefing, followership, and shared mental models). The novel taxonomy also identified a shift regarding the underpinning operationalization and classification of constructs. Whereas original perspectives of this taxonomy focused on skills with an ‘individual’ focus on training and preparation for individuals to contribute to teams, our revised taxonomy defines these constructs fundamentally as team processes (i.e., actions or behaviours observed when members combine their resources, knowledge, and skills as a team). Finally, the definitions and applications of these constructs that have emerged in this taxonomy confound classification as either interpersonal or cognitive and thus these categories have been removed.

Coherence analysis

Through the coherence analysis, we identified that thirty-nine (64%) articles explicitly defined key terms. For example, in one article [ 10 ] the authors described contrasting leadership definitions based on the context of “stable teams” or “action teams”, which were characterized using references to describe both types of teams and leadership tasks associated with each. Twenty-two (36%) articles did not provide definitions for key terms and demonstrated inconsistency in their interpretation and application of key constructs. As an illustrative example, the concept of a shared/team mental model is one construct for which researchers held contrasting definitions and operationalizations.

The second coherence analysis component found that forty-six (75%) of the articles in our dataset were well-situated. In one article that achieved a “yes” rating, the authors used their introduction to extensively detail the history of non-technical skills rating systems [ 51 ]. We characterized the remaining fifteen articles (25%) as poorly situated because the articles did not introduce seminal works to situate key concepts, or because they misinterpreted the evidence base when situating their work.

The third coherence analysis component found that forty-nine articles (80%) used their discussion to contribute back to the understanding of how NTS influences team performance. For instance, a 2017 observational study by Calder et al. conducted interviews and in vivo observation to disentangle the conceptual overlaps in previous literature regarding team situational awareness, shared mental models, and team communication. In their discussion, they identified that their “findings contrast with previous work since we found that team members did in fact have a shared mental model” and that their work represented “the first comprehensive mixed method investigation of how inter-professional teams communicate during ED resuscitation” [ 57 ]. These findings contribute explicitly to the body of literature and move forward our understanding of resuscitation team performance. The twelve (20%) articles that received a “no” rating in this category failed to advance the scholarly conversation largely due to their presentation of non-specific claims that NTS interventions can improve team performance.

Network analysis

Figure 2 illustrates the network composed of the articles included in this review. An arrow (tie) from one article to another reflects a citation. This network is limited to only the papers in this review, but it nevertheless characterizes the scholarly communities from which the field has emerged. The network was sparse: nine papers neither cited nor were cited by other papers in this review, and only eight papers received more than five citations from others.

Fig. 2 Citation network analysis. This network was created using Gephi and depicts the 61 articles in this review (nodes) and citations from a given source to another within this review represented by a directed arrow (ties). The size and orientation of each node was based on the number of citations an article received, whereas node colours distinguish articles by year of publication. Circles added to the figure denote papers with the highest number of citations, including: a Cooper (1994) (11 citations), b Capella and colleagues (2011) (9 citations), and c Hunt (2007) (7 citations). Note that this network figure only depicts 52 papers because nine papers included in this review had not cited other papers in this review, nor had they been cited by other papers in this review.

The network also provides an opportunity to reflect on the extent to which earlier publications received relatively more attention from subsequent articles in this domain: (a) Cooper’s reflection of leadership approaches in resuscitation [ 24 ] received 11 citations; (b) Capella and colleagues’ teamwork training evaluation with surgical residents [ 33 ] received nine citations, and (c) the review by Hunt et al. exploring simulation as a tool for enhancing team performance [ 29 ] received eight citations. Of particular interest within this network is the relative isolation of articles from outside of traditional clinical resuscitation outlets. Our figure highlights Sarcevic et al. [ 10 ], as a paper involving resuscitation teams that did not cite any earlier articles in this review and was cited only once by later articles in this review. Published in a medical informatics outlet that could have limited its exposure to scholars in other domains, this is one example of the challenge in how resuscitation teams research is dispersed across domains.

Our scoping review has identified the heterogeneous nature of the disciplines, methodologies, and scope of articles pertaining to NTS for team resuscitation. While this diversity opens opportunities for growth and novelty, it also creates conditions for disconnected conversations that do not share a language and fail to accumulate into a refined model for how teams work during resuscitation. This discussion reflects on the nature and implications of such disconnected conversations within this field of inquiry. We also reflect on how our revised NTS taxonomy can redefine resuscitation teams research by facilitating consistent use of team-based concepts and by identifying emerging constructs that warrant exploration.

A key observation that has emerged from our coherence analysis and the supporting network analysis is that there are many disconnected, parallel scholarly conversations in the literature. Of particular note is the disconnect between articles published in clinical medicine journals and those published in non-medical domains such as human factors or applied ergonomics. Our coherence analysis revealed that specific non-technical skills were inconsistently defined across such domains, and the network analysis showed minimal cross-referencing occurring both within and between these two domains. These disconnects have profound implications for what we know about NTS for team resuscitation: insights already obtained in one field are ‘rediscovered’ in another; inconsistencies in terminology impede a cumulative refinement of knowledge; and the unique diversity of insight that might accompany interdisciplinary inquiry fails to materialize.

While NTS for individual practitioners [ 77 ] was the model around which this review was based and represents the conventional framing of this topic, the emerging discourse incorporates a wider spectrum of team processes. The Proposed Taxonomy of Non-Technical Skills and Team Constructs for Ad Hoc Team Resuscitation represents our effort to mark this transition and bridge the disconnects that we identified within the literature base. Although scoping reviews are often used to aggregate and describe an evidence base, they are also a powerful tool to (re)configure the evidence base and advance theory [ 81 ]. Our taxonomy aims to identify and resolve inconsistencies in terminology that may limit future research and educational progress in this domain. It presents and defines NTS and team constructs that were targeted in studies within this field to date, synthesizing definitions from the dominant approaches within the literature. Further, it includes examples of how these constructs have been applied in ad hoc teams and is informed by key insights from past empirical research.

This taxonomy could bridge the parallel discussions in this rich literature so that future scholars can contribute more coherently and purposefully to a shared knowledge base; however, the definitive nature of some constructs included in this taxonomy is limited by the quality and breadth of work to date. For instance, constructs of stress and fatigue management were included in our taxonomy because they were included in the initial taxonomy and reflected upon by 10 sources in our review but were often not positioned as a clear team process. Just as our review identified constructs like followership or shared mental models that were not integrated in earlier taxonomies, we present this as an evolving taxonomy with an expectation of future empirical investigation and refinement.

A particular area where the taxonomy can build coherence in the field relates to the popular constructs of shared mental models and team situational awareness. Whereas shared mental models refer to a situation in which “team members hold common or overlapping cognitive representations of task requirements, procedures, and role responsibilities” [ 79 , p. 222], situational awareness is “the perception of elements in the environment within a volume of time and space, the comprehension of their meaning and the projection of their status in the near future” [ 81 , p. 36]. Situationally aware teams are those where members develop and maintain a collective understanding of a specific situation or patient presentation, as an acute ‘state’ of being situationally aware. Team members with shared mental models tend to enter a situation knowing their own (and others’) roles as well as the goals of the group when they face given situations. Inconsistency in the use of these terms was a key finding of our coherence analysis. These two terms are often conflated across the studies [ 26 , 56 , 62 , 66 ] or omitted insofar as findings allude to a construct while failing to explicitly reference it [ 23 , 24 , 25 , 61 ].

An example of confounded definitions arises when articles indicate that situation reports develop a shared mental model. Whereas situation reports ‘can’ establish mental models when designed for this goal, the value of such reports is watered down without considering how such reports also shape situational awareness and other group processes like leadership. The problem of omission is less conspicuous but arises when authors refer to generalized descriptions of effective teams as opposed to tangible and mutually exclusive concepts. For instance, one article argued that teams are optimal when they “have regular training, roles are well defined, and each can make safe assumptions about the level of preparation of others” [ 25 , p. 38]. This claim lacks the precision that is gained when researchers use established concepts like shared mental models, role communication, or teamwork training. In contrast to the above examples, our dataset contains five recent articles wherein team situational awareness and team/shared mental models are described with the requisite nuance to capture their relationship [ 4 , 6 , 35 , 57 , 58 ]. These articles discuss these concepts as being essential for resuscitation team performance, with one study finding that indicators of shared mental models explained as much as 23% of the variance in team performance outcomes [ 42 ].

It is critical for practitioners, researchers, and educators to distinguish between shared mental models and situational awareness because each involves differing challenges within ad hoc settings. Shared mental models are particularly elusive to promote in intersectoral prehospital ad hoc teams because they depend on entering situations with a collective understanding of how the team will ‘work’. Research is needed to examine whether strategies to promote shared mental models from other contexts (e.g., clinical leaders complete a training module on how to develop mental models) should be adapted in the context of ad hoc resuscitation teams. An additional area of focus lies in examining how these teams adapt in settings where a shared mental model does not exist or is not feasible. Ad hoc resuscitation teams clearly constitute a fertile setting to extend what we know about mental models and situational awareness from teams with more stable membership.

With improved clarity and consistency of the constructs associated with NTS in team resuscitation, we might also advance how we measure these constructs. While our descriptive analysis did not include a formal quality assessment, we observed that quantitative studies tended to examine key constructs by coding team interactions that could be observed during clinical experiences and simulations, or by intervening upon non-technical skills and measuring clinical outcomes like patient progress or procedural success. Measurement tools utilized in the studies included in our dataset focused almost exclusively on behavioral aspects of nontechnical skills while failing to evaluate the affective and cognitive components. This observation is mirrored in Cooper et al.’s systematic review examining measurement of situational awareness in emergency settings [ 48 ] as well as Lapierre et al.’s systematic review of studies examining simulation to improve trauma team performance [ 74 ]. These failings have also been identified in reviews involving other clinical contexts, which have recognized that studies examining teams rely on observational methods and are often inconsistent regarding how researchers define and measure group processes [ 83 , 84 ]. The hazard in this approach is evident in the measurement of a team’s shared mental model through observation alone. Observation is a powerful tool for evaluating actions that might promote shared mental models (e.g., frequent communication) or observing the results (e.g., reduced conflict). Yet, observation is only a proxy for a team’s cognition. With observation alone we cannot directly estimate the extent that members share representations. In contrast, validated psychological measures of shared mental models often involve tools to identify critical aspects of teamwork in context, measure members’ individual perceptions of those aspects, and/or evaluate a group based on the degree to which members share representations [ 85 ]. Resuscitation researchers might adapt such tools to support both comprehensive evaluations of healthcare teams and specific measures of identified group processes and emergent states [ 86 ]. With valid measures for these constructs, we can delineate the nature of small group phenomena in resuscitation team performance and identify the active ingredients of interventions.

Limitations

The selected databases focused on clinical medicine journals. While the few articles we identified from non-clinical medicine journals hint at the divide that may exist between clinical and non-clinical journals, our search strategy did not capture the full breadth of investigations outside the clinical medicine literature. Another limitation arises from the inherently iterative nature of the scoping review, which redefines its inclusion and exclusion criteria as it traverses diverse territory. When applied to a heterogeneous dataset such as this one, the scoping review may leave those more accustomed to the rigid structure of systematic reviews and meta-analyses discomfited about what was left on the cutting room floor. Finally, citation network analysis is an emerging technique not usually included in scoping review methods. Constructing a network containing only the studies from this review was useful for documenting how NTS definitions and measures have emerged within the resuscitation literature; however, we did not document citations to sources outside this review or external papers citing those included in it. Researchers could use more comprehensive citation analyses to explore connections between the resuscitation team literature and research from other clinical settings or areas of study.
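As a purely illustrative sketch of the kind of within-dataset citation network described above, one could represent each included study as a node, add a directed edge when one included study cites another, and use in-degree to surface the papers that anchor shared definitions or measures. The study labels and edges below are invented placeholders, not the network analyzed in this review.

# Hypothetical sketch of a within-review citation network: nodes are included
# studies; a directed edge A -> B means study A cites study B. The edges are
# placeholders for illustration, not data from this review.
import networkx as nx

citations = [
    ("Study F", "Study A"),
    ("Study G", "Study A"),
    ("Study G", "Study F"),
    ("Study H", "Study B"),
]

G = nx.DiGraph()
G.add_edges_from(citations)

# In-degree counts how many other included studies cite a given paper; high
# values suggest the definitions or measures that anchor the literature.
for study, cited_by in sorted(G.in_degree(), key=lambda kv: kv[1], reverse=True):
    print(f"{study}: cited by {cited_by} included studies")

A more comprehensive analysis, as noted above, would additionally add nodes and edges for sources outside the review and for external papers citing the included studies.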

Conclusions

The literature on non-technical skills for ad hoc prehospital, emergency department, and trauma resuscitation teams is both diverse and disconnected. This review establishes that ad hoc resuscitation teams, and intersectoral ad hoc prehospital resuscitation teams in particular, are ripe for future inquiry. We also offer a proposed taxonomy that presents a universal set of definitions for non-technical skills and team constructs in ad hoc resuscitation teams. We anticipate that this taxonomy will support the precision needed to incrementally advance understanding of teams in this context, such that insights obtained in one field can be applied in another, knowledge can accumulate across disciplines, and the rich insights of interdisciplinary inquiry can be realized. We also encourage future investigators to look beyond this literature base for validated psychological measures that more comprehensively assess the constructs being evaluated, so that the unique group processes responsible for collaboration in ad hoc teams can be more precisely described and enhanced through targeted training efforts.

Availability of data and materials

Not applicable.

Abbreviations

  • NTS: Non-technical skills

References

Greif R, Lockey A, Breckwoldt J, Carmona F, Conaghan P, Kuzovlev A, et al. European resuscitation council guidelines 2021: education for resuscitation. Resuscitation. 2021;161:388–407. https://doi.org/10.1016/j.resuscitation.2021.02.016.

Neumar RW, Shuster M, Callaway CW, Gent LM, Atkins DL, Bhanji F, et al. Part 1: executive summary: 2015 American Heart Association guidelines update for cardiopulmonary resuscitation and emergency cardiovascular care. Circulation. 2015;132(18 suppl 2):S315–67. https://doi.org/10.1161/CIR.0000000000000252 .

Dinh JV, Schweissing EJ, Venkatesh A, Traylor AM, Kilcullen MP, Perez JA, et al. The study of teamwork processes within the dynamic domains of healthcare: a systematic and taxonomic review. Front Commun. 2021;6:617928. https://doi.org/10.3389/fcomm.2021.617928.

Manser T. Teamwork and patient safety in dynamic domains of healthcare: a review of the literature. Acta Anaesthesiol Scand. 2009;53:143–51. https://doi.org/10.1111/j.1399-6576.2008.01717.x .

Salas E, Sims DE, Burke CS. Is there a “big five” in teamwork? Small Group Res. 2005;36:555–99. https://doi.org/10.1177/1046496405277134 .

Hicks C, Petrosoniak A. The human factor: optimizing trauma team performance in dynamic clinical environments. Emerg Med Clin North Am. 2018;1(36):1–17. https://doi.org/10.1016/j.emc.2017.08.003 .

Brindley P, Cardinal P, editors. Optimizing crisis resource management to improve patient safety and team performance: a handbook for all acute care health professionals. Middletown: Royal College of Physicians and Surgeons of Canada; 2017.

Cormack S, Scott S, Stedmon A. Non-technical skills in out-of-hospital cardiac arrest management: a scoping review. Australas J Paramed. 2020;13:17. https://doi.org/10.33151/ajp.17.744 .

Schmutz JB, Lei Z, Eppich WJ, Manser T. Reflection in the heat of the moment: the role of in-action team reflexivity in health care emergency teams. J Org Behav. 2018;1(39):749–65. https://doi.org/10.1002/job.2299 .

Sarcevic A, Marsic I, Waterhouse LJ, Stockwell DC, Burd RS. Leadership structures in emergency care settings: a study of two trauma centers. Int J Med Inform. 2011;80(4):227–38. https://doi.org/10.1016/j.ijmedinf.2011.01.004 .

Andreatta PB. A typology for health care teams. Health Care Manag Rev. 2010;35(4):345–54. https://doi.org/10.1097/HMR.0b013e3181e9fceb .

Adeleye OA, Ofili AN. Strengthening intersectoral collaboration for primary health care in developing countries: can the health sector play broader roles? J Environ Public Health. 2010;2010:272896. https://doi.org/10.1155/2010/272896.

Gough D, Thomas J, Oliver S. Clarifying differences between review designs and methods. Syst Rev. 2012;1(1):1–9. https://doi.org/10.1186/2046-4053-1-28 .

Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19–32. https://doi.org/10.1080/1364557032000119616 .

Tricco AC, Lillie E, Zarin W, O’Brien KK, Colquhoun H, Levac D, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169(7):467–73. https://doi.org/10.7326/M18-0850 .

Pham MT, Rajić A, Greig JD, Sargeant JM, Papadopoulos A, Mcewen SA. A scoping review of scoping reviews: advancing the approach and enhancing the consistency. Res Syn Meth. 2014;5:371–85. https://doi.org/10.1002/jrsm.1123 .

Yule S, Flin R, Paterson-Brown S, Maran N. Non-technical skills for surgeons in the operating room: a review of the literature. Surgery. 2006;139(2):140–9. https://doi.org/10.1016/j.surg.2005.06.017 .

Covidence - Better Systematic Review Management. https://www.covidence.org/

Mitchell B, Cristancho S, Nyhof BB, Lingard LA. Mobilising or standing still? a narrative review of surgical safety checklist knowledge as developed in 25 highly cited papers from 2009 to 2016. BMJ Qual Saf. 2017;26(10):837–44. https://doi.org/10.1136/bmjqs-2016-006218 .

Ashton RA, Morris L, Smith I. A qualitative meta-synthesis of emergency department staff experiences of violence and aggression. Int Emerg Nurs. 2018;1(39):13–9. https://doi.org/10.1016/j.ienj.2017.12.004 .

Stomski NJ, Morrison P. Participation in mental healthcare: a qualitative meta-synthesis. Int J Ment Health Syst. 2017;11(1):1–11. https://doi.org/10.1186/s13033-017-0174-y .

Campeau AG. The space-control theory of paramedic scene-management. Symb Interac. 2008;31:285–302. https://doi.org/10.1525/si.2008.31.3.285 .

Driscoll PA, Vincent CA. Organizing an efficient trauma team. Injury. 1992;23:107–10. https://doi.org/10.1016/0020-1383(92)90043-R .

Xiao Y, Hunter WA, Mackenzie CF, Jefferies NJ, Horst RL. Task complexity in emergency medical care and its implications for team coordination. Hum Factors. 1996;38(4):636–45. https://doi.org/10.1518/001872096778827206.

Stohler SA. High performance team interaction in an air medical program. Air Med J. 1998;17(3):116–20. https://doi.org/10.1016/S1067-991X(98)90109-2 .

Cooper S, Wakelam A. Leadership of resuscitation teams: ‘lighthouse leadership.’ Resuscitation. 1999;42(1):27–45. https://doi.org/10.1016/S0300-9572(99)00080-5 .

Meerabeau L, Page S. I’m sorry if I panicked you: nurses’ accounts of teamwork in cardiopulmonary resuscitation. J Interprof Care. 1999;13(1):29–40. https://doi.org/10.3109/13561829909025533 .

Williams KA, Rose WD, Simon R. Teamwork in emergency medical services. Air Med J. 1999;18(4):149–53. https://doi.org/10.1016/S1067-991X(99)90028-7 .

Bergs EA, Rutten FL, Tadros T, Krijnen P, Schipper IB. Communication during trauma resuscitation: do we know what is happening? Injury. 2005;36(8):905–11. https://doi.org/10.1016/j.injury.2004.12.047 .

Cole E, Crichton N. The culture of a trauma team in relation to human factors. J Clin Nurs. 2006;15(10):1257–66. https://doi.org/10.1111/j.1365-2702.2006.01566.x .

Hunt EA, Shilkofski NA, Stavroudis TA, Nelson KL. Simulation: translation to improved team performance. Anesthesiol Clin. 2007;25(2):301–19. https://doi.org/10.1016/j.anclin.2007.03.004 .

Hicks CM, Bandiera GW, Denny CJ. Building a simulation-based crisis resource management course for emergency medicine, phase 1: results from an interdisciplinary needs assessment survey. Acad Emerg Med. 2008;15(11):1136–43. https://doi.org/10.1111/j.1553-2712.2008.00185.x .

Høyer CB, Christensen EF, Eika B. Junior physician skill and behaviour in resuscitation: a simulation study. Resuscitation. 2009;80(2):244–8. https://doi.org/10.1016/j.resuscitation.2008.10.029 .

Andersen PO, Jensen MK, Lippert A, Østergaard D. Identifying non-technical skills and barriers for improvement of teamwork in cardiac arrest teams. Resuscitation. 2010;81(6):695–702. https://doi.org/10.1016/j.resuscitation.2010.01.024 .

Capella J, Smith S, Philp A, Putnam T, Gilbert C, Fry W, et al. Teamwork training improves the clinical care of trauma patients. J Surg Educ. 2010;67(6):439–43. https://doi.org/10.1016/j.jsurg.2010.06.006 .

Hunziker S, Bühlmann C, Tschan F, Balestra G, Legeret C, Schumacher C, et al. Brief leadership instructions improve cardiopulmonary resuscitation in a high-fidelity simulation: a randomized controlled trial. Crit Care Med. 2010;38(4):1086–91. https://doi.org/10.1097/CCM.0b013e3181cf7383 .

Westli H, Johnsen B, Eid J, Rasten I, Brattebø G. Teamwork skills, shared mental models, and performance in simulated trauma teams: an independent group design. Scand J Trauma Resusc Emerg Med. 2010;18(1):47. https://doi.org/10.1186/1757-7241-18-47 .

Høyer CB, Christensen EF, Eika B. Standards of resuscitation during inter-hospital transportation: the effects of structured team briefing or guideline review - a randomised, controlled simulation study of two micro-interventions. Scand J Trauma Resusc Emerg Med. 2011;19(1):1–11. https://doi.org/10.1186/1757-7241-19-15 .

Hunziker S, Johansson AC, Tschan F, Semmer NK, Rock L, Howell MD, et al. Teamwork and leadership in cardiopulmonary resuscitation. J Am Coll Cardiol. 2011;57(24):2381–8. https://doi.org/10.1016/j.jacc.2011.03.017 .

Steinemann S, Berg B, Skinner A, DiTulio A, Anzelon K, Terada K, Oliver C, Ho HC, Speck C. In situ, multidisciplinary, simulation-based teamwork training improves early trauma care. J Surg Educ. 2011;68(6):472–7. https://doi.org/10.1016/j.jsurg.2011.05.009 .

Jankouskas TS, Haidet KK, Hupcey JE, Kolanowski A. Targeted crisis resource management training improves performance among randomized nursing and medical students. Simul Healthc. 2011;6(6):316–26. https://doi.org/10.1097/SIH.0b013e31822bc676 .

Miller D, Crandall C, Washington C III, McLaughlin S. Improving teamwork and communication in trauma care through in situ simulations. Acad Emerg Med. 2012;19(5):608–12. https://doi.org/10.1111/j.1553-2712.2012.01354.x.

Norris EM, Lockey AS. Human factors in resuscitation teaching. Resuscitation. 2012;83(4):423–7. https://doi.org/10.1016/j.resuscitation.2011.11.001 .

Sarcevic A, Marsic I, Burd RS. Teamwork errors in trauma resuscitation. ACM Trans Comput Hum Interact. 2012;19(2):1–30. https://doi.org/10.1145/2240156.2240161 .

Clarke S, Lyon RM, Short S, Crookston C, Clegg GR. A specialist, second-tier response to out-of-hospital cardiac arrest: setting up TOPCAT2. Emerg Med J. 2014;31(5):405–7. https://doi.org/10.1136/emermed-2012-202232 .

Castelao EF, Russo SG, Riethmüller M, Boos M. Effects of team coordination during cardiopulmonary resuscitation: a systematic review of the literature. J Crit Care. 2013;28(4):504–21. https://doi.org/10.1016/j.jcrc.2013.01.005 .

Petrosoniak A, Hicks CM. Beyond crisis resource management: new frontiers in human factors training for acute care medicine. Curr Opin Anaesthesiol. 2013;26(6):699–706. https://doi.org/10.1097/ACO.0000000000000007 .

Shields A, Flin R. Paramedics’ non-technical skills: a literature review. Emerg Med J. 2012;30(5):350–4. https://doi.org/10.1136/emermed-2012-201422 .

Clements A, Curtis K, Horvat L, Shaban RZ. The effect of a nurse team leader on communication and leadership in major trauma resuscitations. Int Emerg Nurs. 2015;23(1):3–7. https://doi.org/10.1016/j.ienj.2014.04.004 .

Gjeraa K, Møller TP, Østergaard D. Efficacy of simulation-based trauma team training of non-technical skills: a systematic review. Acta Anaesthesiol Scand. 2014;58(7):775–87. https://doi.org/10.1111/aas.12336.

Rasmussen MB, Tolsgaard MG, Dieckmann P, Issenberg SB, Østergaard D, Søreide E, Rosenberg J, Ringsted CV. Factors relating to the perceived management of emergency situations: a survey of former advanced life support course participants’ clinical experiences. Resuscitation. 2014;85(12):1726–31. https://doi.org/10.1016/j.resuscitation.2014.08.004 .

Cooper S, Porter J, Peach L. Measuring situation awareness in emergency settings: a systematic review of tools and outcomes. Open Access Emerg Med. 2014;6:1. https://doi.org/10.2147/OAEM.S53679 .

Holly D, Swanson V, Cachia P, Beasant B, Laird C. Development of a behaviour rating system for rural/remote pre-hospital settings. Appl Ergon. 2017;1(58):405–13. https://doi.org/10.1016/j.apergo.2016.08.002 .

Gillman LM, Martin D, Engels PT, Brindley P, Widder S, French C. STARTT plus: addition of prehospital personnel to a national multidisciplinary crisis resource management trauma team training course. Can J Surg. 2016;59(1):9–11. https://doi.org/10.1503/cjs.010915.

Lorello GR, Hicks CM, Ahmed SA, Unger Z, Chandra D, Hayter MA. Mental practice: a simple tool to enhance team-based trauma resuscitation. CJEM. 2016;18(2):136–42. https://doi.org/10.1017/cem.2015.4 .

Maluso P, Hernandez M, Amdur RL, Collins L, Schroeder ME, Sarani B. Trauma team size and task performance in adult trauma resuscitations. J Surg Res. 2016;204(1):176–82. https://doi.org/10.1016/j.jss.2016.05.007 .

Steinemann S, Kurosawa G, Wei A, Ho N, Lim E, Suares G, et al. Role confusion and self-assessment in interprofessional trauma teams. Am J Surg. 2016;211(2):482–8. https://doi.org/10.1016/j.amjsurg.2015.11.001 .

Steinemann S, Bhatt A, Suares G, Wei A, Ho N, Kurosawa G, Lim E, Berg B. Trauma team discord and the role of briefing. J Trauma Acute Care Surg. 2016;81(1):184. https://doi.org/10.1097/TA.0000000000001024 .

Calder LA, Mastoras G, Rahimpour M, Sohmer B, Weitzman B, Cwinn AA, Hobin T, Parush A. Team communication patterns in emergency resuscitation: a mixed methods qualitative analysis. Int J Emerg Med. 2017;10(1):1–9. https://doi.org/10.1186/s12245-017-0149-4 .

Johnsen BH, Westli HK, Espevik R, Wisborg T, Brattebø G. High-performing trauma teams: frequency of behavioral markers of a shared mental model displayed by team leaders and quality of medical performance. Scand J Trauma Resusc Emerg Med. 2017;25(1):1–6. https://doi.org/10.1186/s13049-017-0452-3.

Myers JA, Powell DM, Aldington S, Sim D, Psirides A, Hathaway K, et al. The impact of fatigue on the non-technical skills performance of critical care air ambulance clinicians. Acta Anaesthesiol Scand. 2017;61(10):1305–13. https://doi.org/10.1111/aas.12994 .

El-Shafy IA, Delgado J, Akerman M, Bullaro F, Christopherson NA, Prince JM. Closed-loop communication improves task completion in pediatric trauma resuscitation. J Surg Educ. 2018;75(1):58–64. https://doi.org/10.1016/j.jsurg.2017.06.025 .

Ghazali DA, Darmian-Rafei I, Ragot S, Oriot D. Performance under stress conditions during multidisciplinary team immersive pediatric simulations. Pediatr Crit Care Med. 2018;19(6):270–8. https://doi.org/10.1097/PCC.0000000000001473.

O’Neill TA, White J, Delaloye N, Gilfoyle E. A taxonomy and rating system to measure situation awareness in resuscitation teams. PLoS ONE. 2018;13(5): e0196825. https://doi.org/10.1371/journal.pone.0196825 .

Sullivan S, Campbell K, Ross JC, Thompson R, Underwood A, LeGare A, et al. Identifying nontechnical skill deficits in trainees through interdisciplinary trauma simulation. J Surg Educ. 2018;75(4):978–83. https://doi.org/10.1016/j.jsurg.2017.10.007 .

Herzberg S, Hansen M, Schoonover A, Skarica B, McNulty J, Harrod T, Snowden JM, Lambert W, Guise JM. Association between measured teamwork and medical errors: an observational study of prehospital care in the USA. BMJ Open. 2019;9(10):e025314. https://doi.org/10.1136/bmjopen-2018-025314 .

Lazzara EH, Keebler JR, Shuffler ML, Patzer B, Smith DC, Misasi P. Considerations for multiteam systems in emergency medical services. J Patient Saf. 2019;15(2):150–3. https://doi.org/10.1097/PTS.0000000000000213 .

Murphy M, McCloughen A, Curtis K. The impact of simulated multidisciplinary trauma team training on team performance: a qualitative study. Australas Emerg Care. 2019;22(1):1–7. https://doi.org/10.1016/j.auec.2018.11.003 .

Armstrong P, Peckler B, Pilkinton-Ching J, Mcquade D, Rogan A. Effect of simulation training on nurse leadership in a shared leadership model for cardiopulmonary resuscitation in the emergency department. Emerg Med Australas. 2021;33(2):255–61. https://doi.org/10.1111/1742-6723.13605 .

Coggins A, Santos ADL, Zaklama R, Murphy M. Interdisciplinary clinical debriefing in the emergency department: an observational study of learning topics and outcomes. BMC Emerg Med. 2020;20(1):1–10. https://doi.org/10.1186/s12873-020-00370-7 .

Dagnell AJ. Teamwork and leadership in out-of-hospital cardiac arrest – do these non-technical skills require attention? Australas J Paramedicine. 2020;26:17. https://doi.org/10.33151/ajp.17.748 .

Dumas RP, Vella MA, Chreiman KC, Smith BP, Subramanian M, Maher Z, et al. Team assessment and decision making is associated with outcomes: a trauma video review analysis. J Surg Research. 2020;1(246):544–9. https://doi.org/10.1016/j.jss.2019.09.033 .

Fernandez R, Rosenman ED, Olenick J, Misisco A, Brolliar SM, Chipman AK, et al. Simulation-based team leadership training improves team leadership during actual trauma resuscitations: a randomized controlled trial. Crit Care Med. 2020;48(1):73–82. https://doi.org/10.1097/CCM.0000000000004077 .

Gilmartin S, Martin L, Kenny S, Callanan I, Salter N. Promoting hot debriefing in an emergency department. BMJ Open Qual. 2020;9(3):e000913. https://doi.org/10.1136/bmjoq-2020-000913 .

Kristiansen LH, Freund DS, Rölfing JHD, Thoringer R. Trauma team training at a “high risk low incidence hospital.” Dan Med J. 2020;67(3):A03190189.

Lapierre A, Bouferguene S, Gauvin-Lepage J, Lavoie P, Arbour C. Effectiveness of interprofessional manikin-based simulation training on teamwork among real teams during trauma resuscitation in adult emergency departments: a systematic review. Simul Healthc. 2020;15(6):409–21. https://doi.org/10.1097/SIH.0000000000000443 .

Petrosoniak A, Fan M, Hicks CM, White K, McGowan M, Campbell D, et al. Trauma resuscitation using in situ simulation team training (TRUST) study: Latent safety threat evaluation using framework analysis and video review. BMJ Qual Saf. 2021;30(9):739–46. https://doi.org/10.1136/bmjqs-2020-011363 .

Sherman JM, Chang TP, Ziv N, Nager AL. Barriers to effective teamwork relating to pediatric resuscitations: perceptions of pediatric emergency medicine staff. Pediatr Emerg Care. 2020;36(3):146–50. https://doi.org/10.1097/PEC.0000000000001275.

Salas E, Prince C, Baker DP, Shrestha L. Situation awareness in team performance: implications for measurement and training. Hum Factors. 1995;37(1):123–36. https://doi.org/10.1518/001872095779049525.

Mathieu JE, Goodwin GF, Heffner TS, Salas E, Cannon-Bowers JA. The influence of shared mental models on team process and performance. J Appl Psychol. 2000;85(2):273–83. https://doi.org/10.1037/0021-9010.85.2.273 .

Endsley MR. Toward a theory of situation awareness in dynamic systems. Hum Factors. 1995;37(1):32–64. https://doi.org/10.1518/001872095779049543.

Cannon-Bowers JA, Salas E, Converse S. Shared mental models in expert team decision making. In: Castellan Jr NJ, editor. Individual and group decision making: current issues. Hillsdale, New Jersey: Lawrence Erlbaum Associates; 1993. p. 221–46.

Kolbe M, Boos M. Laborious but elaborate: the benefits of really studying team dynamics. Front Psychol. 2019;28(10):1478. https://doi.org/10.3389/fpsyg.2019.01478 .

Rosen MA, Diazgranados D, Dietz AS, Benishek LE, Thompson D, Pronovost PJ, et al. Teamwork in healthcare: key discoveries enabling safer, high-quality care. Am Psychol. 2018;73(4):433. https://doi.org/10.1037/amp0000298 .

Gisick LM, Webster KL, Keebler JR, Lazzara EH, Fouquet S, Fletcher K, et al. Measuring shared mental models in healthcare. J Patient Saf Risk Manag. 2018;23(5):207–19. https://doi.org/10.1177/2516043518796442.

Valentine MA, Nembhard IM, Edmondson AC. Measuring teamwork in health care settings: a review of survey instruments. Med Care. 2015;53(4):16–30. https://doi.org/10.2307/26417947.

Leung C, Lucas A, Brindley P, Anderson S, Park J, Vergis A, et al. Followership: a review of the literature in healthcare and beyond. J Crit Care. 2018;1(46):99–104. https://doi.org/10.1016/j.jcrc.2018.05.001 .

Acknowledgements

The authors wish to acknowledge research librarian John Costella, Western University, for his support in developing the search strategy for this study. We also wish to express our sincere gratitude to the entire team at the Schulich School of Medicine’s Centre for Education Research and Innovation, without whom this protracted investigation may well not have had sufficient momentum to reach its conclusion. Finally, we wish to state unequivocally that while JCE is the older and physically superior brother, MBE’s influence on JCE cannot be overstated, and that through this influence MBE has assumed the role of the harsh wind unintentionally shaping the jack pine in the Evans family.

Funding

This work was partially supported by funding from the Department of Medicine’s Program for Experimental Medicine, Schulich School of Medicine & Dentistry, Western University.

Author information

Authors and Affiliations

Division of Emergency Medicine, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada

J. Colin Evans & Michael Peddle

Department of Psychology, Western University, London, ON, Canada

M. Blair Evans

Middlesex-London Paramedic Service, London, ON, Canada

Meagan Slack

Centre for Education Research and Innovation, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada

Lorelei Lingard

Contributions

JCE: conception, design, acquisition and analysis, drafting and revision; MBE: design, analysis, drafting and revision; MS: acquisition and analysis; revision; MP: conception, revision, supervision; LL: conception, design, analysis, drafting and revision, supervision. All authors read and approved the final manuscript.

Corresponding author

Correspondence to J. Colin Evans .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Competing interests

The authors declare they have no competing interests.

Consent for publication

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Supplemental Materials: 1. Complete search query for CINAHL. 2. Inclusion Criteria Table. 3. PRISMA-ScR Checklist.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article.

Evans, J.C., Evans, M.B., Slack, M. et al. Examining non-technical skills for ad hoc resuscitation teams: a scoping review and taxonomy of team-related concepts. Scand J Trauma Resusc Emerg Med 29, 167 (2021). https://doi.org/10.1186/s13049-021-00980-5

Received : 15 September 2021

Accepted : 15 November 2021

Published : 04 December 2021

DOI : https://doi.org/10.1186/s13049-021-00980-5

Keywords

  • Resuscitation
  • Ad hoc team
  • Scoping review
  • Prehospital
  • Emergency medicine

Scandinavian Journal of Trauma, Resuscitation and Emergency Medicine

ISSN: 1757-7241
