Evans D, Coad J, Cottrell K, et al. Public involvement in research: assessing impact through a realist evaluation. Southampton (UK): NIHR Journals Library; 2014 Oct. (Health Services and Delivery Research, No. 2.36.)

Chapter 9. Conclusions and recommendations for future research

  • How well have we achieved our original aim and objectives?

The initially stated overarching aim of this research was to identify the contextual factors and mechanisms that are regularly associated with effective and cost-effective public involvement in research. While recognising the limitations of our analysis, we believe we have largely achieved this in our revised theory of public involvement in research set out in Chapter 8. We have developed and tested this theory of public involvement in research in eight diverse case studies; this has highlighted important contextual factors, in particular PI leadership, which had not previously been prominent in the literature. We have identified how this critical contextual factor shapes key mechanisms of public involvement, including the identification of a senior lead for involvement, resource allocation for involvement and facilitation of research partners. These mechanisms then lead to specific outcomes in improving the quality of research, notably in recruitment strategies and materials and in data collection tools and methods. We have identified a ‘virtuous circle’ of feedback to research partners on their contribution leading to their improved confidence and motivation, which facilitates their continued contribution. Following feedback from the HS&DR Board on our original application we did not seek to assess the cost-effectiveness of different mechanisms of public involvement, but we did cost the different types of public involvement as discussed in Chapter 7. A key finding is that many research projects undercost public involvement.

In our original proposal we emphasised our desire to include case studies involving young people and families with children in the research process. We recruited two studies involving parents of young children aged under 5 years, and two projects involving ‘older’ young people in the 18- to 25-year age group. We recognise that in doing this we missed studies involving children and young people aged under 18 years; in principle we would have liked to include such studies, but, given the resources at our disposal and the additional resource, ethical and governance issues this would have entailed, we regretfully concluded that this would not be feasible for our study. In terms of the four studies with parental and young persons’ involvement that we did include, we have not done a separate analysis of their data, but the themes emerging from those case studies were consistent with our other case studies and contributed to our overall analysis.

In terms of the initial objectives, we successfully recruited the sample of eight diverse case studies and collected and analysed data from them (objective 1). As intended, we identified the outcomes of involvement from multiple stakeholders’ perspectives, although we did not get as many research partners’ perspectives as we would have liked – see limitations below (objective 2). It was more difficult than expected to track the impact of public involvement from project inception through to completion (objective 3), as all of our projects turned out to have longer time scales than our own. Even to track involvement over a stage of a case study research project proved difficult, as the research usually did not fall into neatly staged time periods and one study had no involvement activity over the study period.

Nevertheless, we were able to track seven of the eight case studies prospectively and in real time over time periods of up to 9 months, giving us an unusual window on involvement processes that have previously mainly been observed retrospectively. We were successful in comparing the contextual factors, mechanisms and outcomes associated with public involvement from different stakeholders’ perspectives and costing the different mechanisms for public involvement (objective 4). We only partly achieved our final objective of undertaking a consensus exercise among stakeholders to assess the merits of the realist evaluation approach and our approach to the measurement and valuation of economic costs of public involvement in research (objective 5). A final consensus event was held, where very useful discussion and amendment of our theory of public involvement took place, and the economic approach was discussed and helpfully critiqued by participants. However, as our earlier discussions developed more fully than expected, we decided to let them continue rather than interrupt them in order to run the final exercise to assess the merits of the realist evaluation approach. We did, however, test our analysis with all our case study participants by sending a draft of this final report for comment. We received a number of helpful comments and corrections but no disagreement with our overall analysis.

  • What were the limitations of our study?

Realist evaluation is a relatively new approach and we recognise that there were a number of limitations to our study. We sought to follow the approach recommended by Pawson, but we acknowledge that we were not always able to do so. In particular, our theory of public involvement in research evolved over time and initially was not as tightly framed in terms of a testable hypothesis as Pawson recommends. In his latest book Pawson strongly recommends that outcomes should be measured with quantitative data,17 but we did not do so; we were not aware of the existence of quantitative data or tools that would enable us to collect such data to answer our research questions. Even in terms of qualitative data, we did not capture as much information on outcomes as we initially envisaged. There were several reasons for this. The most important was that capturing outcomes in public involvement is easier the more operational the focus of involvement, and more difficult the more strategic the involvement. Thus, it was relatively easy to see the impact of a patient panel on the redesign of a recruitment leaflet but harder to capture the impact of research partners in a multidisciplinary team discussion of research design.

We also found it was sometimes more difficult to engage research partners as participants in our research than researchers or research managers. On reflection this is not surprising. Research partners are generally motivated to take part in research relevant to their lived experience of a health condition or situation, whereas our research was quite detached from their lived experience; in addition people had many constraints on their time, so getting involved in our research as well as their own was likely to be a burden too far for some. Researchers clearly also face significant time pressures but they had a more direct interest in our research, as they are obliged to engage with public involvement to satisfy research funders such as the NIHR. Moreover, researchers were being paid by their employers for their time during interviews with us, while research partners were not paid by us and usually not paid by their research teams. Whatever the reasons, we had less response from research partners than researchers or research managers, particularly for the third round of data collection; thus we have fewer data on outcomes from research partners’ perspectives and we need to be aware of a possible selection bias towards more engaged research partners. Such a bias could have implications for our findings; for example payment might have been a more important motivating factor for less engaged advisory group members.

There were a number of practical difficulties we encountered. One challenge was when to recruit the case studies. We recruited four of our eight case studies prior to the full application, but this was more than 1 year before our project started and 15 months or more before data collection began. In this intervening period, we found that the time scales of some of the case studies were no longer ideal for our project, and we faced the choice of continuing with them despite this or seeking at a late stage to recruit alternatives. One of our case studies ultimately undertook no involvement activity over the study period, so we obtained fewer data from it, and it contributed relatively little to our analysis. Similarly, one of the four case studies we recruited later experienced some delays itself in beginning and so we had a more limited period for data collection than initially envisaged. Research governance approvals took much longer than expected, particularly as we had to take three of our research partners, who were going to collect data within NHS projects, through the research passport process, which essentially truncated our data collection period from 1 year to 9 months. Even if we had had the full year initially envisaged for data collection, our conclusion with hindsight was that this would have been insufficient. To compare initial plans and intentions for involvement with the reality of what actually happened required a longer time period than a year for most of our case studies.

In the light of the importance we have placed on the commitment of PIs, there is an issue of potential selection bias in the recruitment of our sample. As our sampling strategy explicitly involved a networking approach to PIs of projects where we thought some significant public involvement was taking place, we were likely to recruit (and did recruit) enthusiasts and, at worst, non-committed PIs who were at least open to the potential value of public involvement. There were, unsurprisingly, no highly sceptical PIs in our sample. We therefore have no data on how public involvement may work in research where the PI is sceptical but may feel compelled to undertake involvement because of funder requirements or other factors.

  • What would we do differently next time?

If we were to design this study again, there are a number of changes we would make. Most importantly we would go for a longer time period to be able to capture involvement through the whole research process from initial design through to dissemination. We would seek to recruit far more potential case studies in principle, so that we had greater choice of which to proceed with once our study began in earnest. We would include case studies from the application stage to capture the important early involvement of research partners in the initial design period. It might be preferable to research a smaller number of case studies, allowing a more in-depth ethnographic approach. Although challenging, it would be very informative to seek to sample sceptical PIs. This might require a brief screening exercise of a larger group of PIs on their attitudes to and experience of public involvement.

The economic evaluation was challenging in a number of ways, particularly in seeking to obtain completed resource logs from case study research partners. Having a 2-week data collection period was also problematic in a field such as public involvement, where activity may be very episodic and infrequent. Thus, collecting economic data alongside other case study data in a more integrated way, and particularly with interviews and more ethnographic observation of case study activities, might be advantageous. The new budgeting tool developed by INVOLVE and the MHRN may provide a useful resource for future economic evaluations.23

We have learned much from the involvement of research partners in our research team and, although many aspects of our approach worked well, there are some things we would do differently in future. Even though we included substantial resources for research partner involvement in all aspects of our study, we underestimated how time-consuming such full involvement would be. We were perhaps overambitious in trying to ensure such full involvement with the number of research partners and the number and complexity of the case studies. We were also perhaps naive in expecting all the research partners to play the same role in the team; different research partners came with different experiences and skills and, as with most of our case studies, it might have been better to be less prescriptive and allow the roles to develop more organically within the project.

  • Implications for research practice and funding

If one of the objectives of R&D policy is to increase the extent and effectiveness of public involvement in research, then a key implication of this research is the importance of influencing PIs to value public involvement in research or to delegate leadership of involvement to other senior colleagues. Training is unlikely to be the key mechanism here; senior researchers are much more likely to be influenced by peers or by their personal experience of the benefits of public involvement. Early career researchers may be shaped by training, but again peer learning and culture may be more influential. For those researchers sceptical or agnostic about public involvement, the requirement of funders is a key factor that is likely to make them engage with the involvement agenda. Therefore, funders need to scrutinise the track record of research teams on public involvement to ascertain whether there is any evidence of commitment or leadership on involvement.

One of the findings of the economic analysis was that PIs have consistently underestimated the costs of public involvement in their grant applications. Clearly the field will benefit from the guidance and budgeting tool recently disseminated by MHRN and INVOLVE. It was also notable that there was a degree of variation in the real costs of public involvement and that effective involvement is not necessarily costly. Different models of involvement incur different costs and researchers need to be made aware of the costs and benefits of these different options.

One methodological lesson we learned was the impact that conducting this research had on some participants’ reflection on the impact of public involvement. Particularly for research staff, the questions we asked sometimes made them reflect upon what they were doing and change aspects of their approach to involvement. Thus, the more the NIHR and other funders can build reporting, audit and other forms of evaluation of the impact of public involvement directly into their processes with PIs, the more likely such questioning is to stimulate similar reflection.

  • Recommendations for further research

There are a number of gaps in our knowledge around public involvement in research that follow from our findings and would benefit from further research, including realist evaluation to extend and further test the theory we have developed here:

  • In-depth exploration of how PIs become committed to public involvement and how to influence agnostic or sceptical PIs would be very helpful. Further research might compare, for example, training with peer-influencing strategies in engendering PI commitment. Research could explore the leadership role of other research team members, including research partners, and how collective leadership might support effective public involvement.
  • More methodological work is needed on how to robustly capture the impact and outcomes of public involvement in research (building also on the PiiAF work of Popay et al.51), including further economic analysis and exploration of impact when research partners are integral to research teams.
  • Research to develop approaches and carry out a full cost–benefit analysis of public involvement in research would be beneficial. Although methodologically challenging, it would be very useful to conduct some longer-term studies which sought to quantify the impact of public involvement on such key indicators as participant recruitment and retention in clinical trials.
  • It would also be helpful to capture qualitatively the experiences and perspectives of research partners who have had mixed or negative experiences, since they may be less likely than enthusiasts to volunteer to participate in studies of involvement in research such as ours. Similarly, further research might explore the (relatively rare) experiences of marginalised and seldom-heard groups involved in research.
  • Payment for public involvement in research remains a contested issue with strongly held positions for and against; it would be helpful to further explore the value research partners and researchers place on payment and its effectiveness for enhancing involvement in and impact on research.
  • A final relatively narrow but important question that we identified after data collection had finished is: what is the impact of the long periods of relative non-involvement following initial periods of more intense involvement for research partners in some types of research, particularly clinical trials?

Included under terms of UK Non-commercial Government License.

Avenues for Further Research

Dawid Pieper, Lun Li, and Roland Brian Büchter

Overviews of reviews represent a new publication type and a new form of evidence synthesis. They have rapidly gained popularity. The development of their methodology is, however, still in its infancy. We present a bundle of areas where more work is needed in order to make overviews of reviews more valuable and reliable. Firstly, a clear-cut definition of an overview of reviews is needed. Secondly, methods of presenting the results of overviews of reviews need to be further developed. The needs of different groups of users, including clinicians, patients, and political decision makers, should be kept in mind in this context. Maintaining a reasonable balance between necessary complexity and an inevitable loss of information from the reviews is a major challenge. A registration of overviews of reviews is called for. All overviews should be gathered in one freely available registry. When registering new overviews of reviews, particular attention should be given to the authors' conflicts of interest. Reporting guidelines for overviews of reviews should be prepared as soon as possible, as this area lacks standardization. Furthermore, more attention should be given to different types of overviews of reviews (e.g., comparison of interventions, comparison of populations).

Pieper, D., Li, L., Büchter, R.B. (2016). Avenues for Further Research. In: Biondi-Zoccai, G. (eds) Umbrella Reviews. Springer, Cham. https://doi.org/10.1007/978-3-319-25655-9_22

Private Transit: Existing Services and Emerging Directions (2018)

Chapter: Section 6 - Conclusions and Areas for Further Research


Conclusions

For public agencies looking to engage most constructively with operators of private transit services, existing practice and regulatory approaches suggest the following strategies:

  • Work together with private transit providers to create regulations that work for everyone. Several jurisdictions have been successful in taking a cooperative approach to regulation, bringing both public and private players to the table to develop a regulatory approach, and helping work toward providing a level of compliance and information sharing that is beneficial for all stakeholders. Jurisdictions and organizations around the Bay Area are the furthest along in this regard, with the Shuttle Census (MTC/Bay Area Council), SFMTA’s Commuter Shuttle Pilot Program, and pending microtransit regulations.
  • Allocate street space to reflect public priorities without stifling private-sector innovation. Develop policy tools like BART’s curb-use decision tree to prioritize public and private goals for access to curb space and rights-of-way (see Section 3), weighing such factors as the level of demand for space, whether proposed private uses supplement or compete with public transit services, and the size and restrictions on ridership of private services. Akin to zoning codes or other land-use regulations, such policies could help clarify and make predictable for all stakeholders what transportation uses are permitted in which locations, and do so with a clear public rationale and process.
  • Update local and state licensing of private transportation services to reflect evolving business practices and emerging models, for better understanding the size and extent of the private transportation market as it exists today. Most jurisdictions have no way to know how many and what types of private vehicles are working the streets or whether, for instance, large private buses are serving commuters, sightseers, or charter passengers, each of which would have different impacts on the public right-of-way. The SFMTA shuttle and microtransit regulatory processes can serve as models for managing evolving private transit services.
  • Use private transit services as an “early warning” to indicate how and where service needs and markets are changing. The presence of private transit services in a corridor can suggest where new or more frequent public transit routes are needed. New residential or commercial development may be creating a need for express routes or connections that didn’t previously exist or could not have been supported. The private market can respond to these signals more quickly and establish the presence of a potential transit market. In areas that are able to support it, however, expanded public transit service is the best long-term solution due to the transparency, service continuity, and civil rights safeguards built into public provision.
  • Anticipate that conflict may be heightened by reconfiguration of public space, such as geometrical changes to the street and the creation of transit-only lanes. Include private transit providers in project planning, and open lines of communication with private providers known to be operating in a corridor before changes start to take place.

  • Explore the use of consortium-based services for locations that need group transport but would be unable to support a productive public transit route. For suburban or low-density workplaces that have a critical mass of employees who don’t drive—particularly in sectors that employ many shift workers, such as hospitals, manufacturers, or warehouse and fulfillment operations—but that couldn’t support a public transit route, consortium-based services provide a variety of potential solutions that often build on existing high-capacity transit.
  • Promote efficiencies in the use of sponsored services by such means as offering priority to private services that support public goals of equity and efficiency, such as being open to more than just sponsored riders and making an effort to avoid deadhead miles by making service available to the general public whenever possible.
  • Incorporate private operations into emergency planning and response. Private transit services have been enlisted to provide transportation services when public transit was overwhelmed in the wake of several natural disasters, including in Miami after Hurricane Andrew in 1992 and in New Jersey after Hurricanes Irene (2011) and Sandy (2012) (see discussion of flexible route-based services, or jitneys, in Section 2).
  • Ensure that private transit services are a key part of MaaS developments. The adoption of MaaS principles—which center on expanding and integrating the entire range of non-SOV urban mobility options—has the potential to transform cities and transit agencies. MaaS focuses on moving people easily and efficiently by using an ecosystem of services and removing the institutional silos that can hinder movement between modes and providers.

Areas for Further Research

This study has identified several areas where further research is warranted to continue expanding knowledge about private transit and related services. Several of these depend on greater availability of operational data from private transportation providers. Such areas include the following:

  • A more comprehensive investigation of the scale and operational characteristics of employer- and property-sponsored shuttles in a variety of urban settings.
  • Continued study of microtransit as it spreads to more cities and operating environments, including in service partnerships with public agencies.
  • A greater understanding of the impacts of TNC-based services, both shared- and exclusive-ride, with particular attention to their effects on VMT, traffic congestion, and related safety impacts.
  • Ongoing study of the outcomes of various regulatory approaches for private transit services, including statutory, administrative, and cooperative approaches and those originating in different parts of the policymaking apparatus.
  • Exploration of the equity implications of various private transit service types and partnership formats.

TRB's Transit Cooperative Research Program (TCRP) Research Report 196: Private Transit: Existing Services and Emerging Directions provides information about private transit services and ways they are addressing transportation needs in a variety of operating environments. The document contains an overview and taxonomy of private transit services in the United States, a review of their present scope and operating characteristics, and a discussion of ways they may affect the communities in which they operate along with several case studies and other supporting information.

Private transit services—including airport shuttles, shared taxis, private commuter buses, dollar vans and jitneys—have operated for decades in many American cities. Recently, business innovations and technological advances that allow real-time ride-hailing, routing, tracking, and payment have ushered in a new generation of private transit options. These include new types of public-private partnership that are helping to bridge first/last mile gaps in suburban areas.

The report also examines ways that private transit services are interacting with communities and transit agencies, as well as resulting impacts and benefits.

Science-Based Medicine

Exploring issues and controversies in the relationship between science and medicine

When Further Research Is NOT Warranted: The “Wisdom of Crowds” Fallacy

Most scientific research studies have at least one thing in common: the conclusion section ends with, “further research is warranted.” I’d say it’s about as common as the “talk to your doctor” disclaimer in TV ads for pharmaceutical products. And in a way, they both serve the same purpose. They’re a “CYA” move.

What does “further research is warranted” mean in plain English? I think it can be roughly translated: “My research study is not of the size or scope to fully explain all the phenomena described in this article. Therefore, draw conclusions beyond the data and study methods at your own risk. And yeah, my work is important and cool – so people should study it further.”

Of course, the first two sentences are reasonable – we should always remember not to draw conclusions beyond the information provided by the data we’ve collected (even though that’s about as challenging as getting a beagle not to eat a table scrap in an empty room). The real problem is the third sentence. Is the research promising enough to require further investment? How are we to know if further research is indeed warranted? I would argue that it should not be based solely on the subjective opinions of the researchers or on the popularity of the research topic with the general public.

As my colleagues here at Science Based Medicine have already explained, plausibility is an often overlooked but critical piece of the value proposition puzzle. What value is there in analyzing the validity of implausible (or even impossible) hypotheses? Should we reinvestigate whether or not the world is flat? No, we don’t need to do that because previous inquiries as to its shape have been firmly and incontrovertibly resolved. As David Gorski puts it, “Wild inconsistencies with firmly established knowledge can in some cases be adequate for rejecting a hypothetical treatment as effective (homeopathy, for instance).”

So why are we spending any time on the “shape of the earth” type questions? I think it’s partially because Evidence Based Medicine (EBM) has been incorrectly positioned as the one and only analytical tool in the physician’s toolbox. I also suspect that our post-modernist culture encourages us to be silent about Emperors with no clothes. Of course, there are those who are making a handsome profit on “flat earth” memorabilia. But most of all, I believe it’s because we haven’t fully embraced the concept of Science Based Medicine (as opposed to EBM) as the foundation for appropriate scientific investigation.

Steven Novella explains:

EBM is a vital and positive influence on the practice of medicine, but it has its limitations… [especially] the focus on evidence to the exclusion of scientific plausibility… All of science describes the same reality, and therefore it must (if it is functioning properly) all be mutually compatible. Collectively, science builds one cumulative model of the natural world. This means we can make rational judgments about what is likely to be true based upon what is already well established. This does not necessarily equate to rejecting new ideas out-of-hand, but rather to adjusting the threshold of evidence required to establish a new claim based upon the prior scientific plausibility of the new claim. Failure to do so leads to conclusions and recommendations that are not reliable, and therefore medical practices that are not reliably safe and effective.

The problem with EBM is that its original intent (as described by David Sackett) has been reduced in scope by popular opinion over time. It’s commonly held (and I’m simplifying here) that EBM means that objective evidence gathered in a randomized, controlled trial (RCT) is the only truly trustworthy means for determining cause and effect, relative efficacy, and mechanisms of action for treatment options.

Therefore it’s believed that incontrovertible conclusions cannot be drawn without a double blind, randomized, placebo-controlled trial. Obviously, it’s impossible to study every possible permutation of disease and treatment – so that relegates the majority of medicine to the “unproven” category. This does two things:

1. Third party payers love EBM because they can use it to deny treatment for things that have not yet been demonstrated to be effective in RCTs.

2. Pseudoscientists love EBM because it suggests that science is limited, and generally unable to offer conclusive evidence about most of its practices. This (they incorrectly believe) puts science and pseudoscience on an equal footing. Neither can be definitively proven effective in all cases, they reason, so they must be equally valid approaches to healing.

Let me give you a specific example of an actual conversation that I had with my favorite pseudoscientist, “Dr. John” (you might remember him from my first post). This is what he said to me one day:

There are no trials comparing drug X to drug Y in the setting of a patient on multiple other drugs with multiple other comorbidities… therefore you don’t really have EVIDENCE that drug X is appropriate or efficacious under those conditions. Since we don’t know if alternative therapies might be better in this case, there’s just as much reason to use alternative therapies as scientific therapies for this patient.

What happens here is that Dr. John has correctly pointed out that EBM has limitations (there wasn’t a 1:1 match up between the patient’s current circumstances and a clinical trial designed to assess the efficacy of drug X in those exact conditions), but then veers off into a non-sequitur conclusion: since we don’t have a clinical trial informing us regarding the exact best care of this patient, we should offer the patient any treatment we like.

What would Science Based Medicine say to Dr. John? It would say that in considering the best treatment option for the patient, we draw from a broad and deep scientific literature, the sum of which is more likely to help us solve the patients’ problems than the sum of testimonial anecdote. “Collectively, science builds one cumulative model of the natural world. This means we can make rational judgments about what is likely to be true based upon what is already well established.” Ironically, it is SBM that is truly holistic in its approach, not pseudoscience.

In closing I’d like to discuss one final important fallacy that contributes to the “more research is warranted” argument. There is a strong belief in Internet land that the “wisdom of crowds” (based on New Yorker columnist James Surowiecki’s book) can solve America’s healthcare crisis. For example, if patients simply got together to share their collective wisdom about their treatment options – the treatment option with the highest rating would surely be the best one for that disease or condition, right? Then we wouldn’t need these narrow-minded, paternalistic doctors telling us what’s best for us. (In other words, more research is warranted, and that research should center upon personal interest and opinion).

Interestingly, in this scenario, narcotics and benzodiazepines rise to the top of the list for most conditions. Got back pain? Vicodin’s the best. Got fibromyalgia? Try Dilaudid. Got cellulitis? Xanax. And so on and so forth. Apparently, being high is a great substitute for any medical treatment.

So where is the “wisdom” in all this? The argument stems from the idea that if a crowd of people guessed the number of jelly beans in a jar, the average of their guesses would be closer to the actual number than most individual guesses on their own.
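The statistical intuition behind the jelly bean example is simple variance reduction: when individual errors are independent and roughly unbiased, averaging cancels much of the noise, so the crowd mean usually lands closer to the truth than most single guesses. Here is a minimal sketch of that idea in Python (a hypothetical illustration with made-up numbers, not data from any actual poll):

```python
import random
import statistics

# Hypothetical illustration: a crowd guesses the number of jelly beans in a jar.
# Each guess is the true count plus independent, roughly unbiased noise.
random.seed(0)
TRUE_COUNT = 700
guesses = [TRUE_COUNT + random.gauss(0, 150) for _ in range(1000)]

crowd_mean = statistics.mean(guesses)
crowd_error = abs(crowd_mean - TRUE_COUNT)
individuals_beaten = sum(abs(g - TRUE_COUNT) > crowd_error for g in guesses)

print(f"Crowd estimate: {crowd_mean:.0f} (off by {crowd_error:.0f})")
print(f"The crowd mean beats {individuals_beaten} of {len(guesses)} individual guesses")
```

The cancellation works only because the simulated errors are independent and centered on the true value; nothing in the arithmetic rescues a crowd whose guesses share a systematic bias.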

However, the “wisdom of the crowds” is only as wise as the crowd being polled for the question at hand. If I asked a group of you readers about statins, I think I’d get a pretty reliable analysis of their pros, cons, side effect profiles and therapeutic values. Now, if I asked you to translate this sentence into ancient Sanskrit, I’m not sure that even your collective wisdom would suffice. Certainly the average of your attempts would not be more accurate than the one guy out there who could do it.

The wisdom of crowds argument is often used by pseudoscience proponents to justify further research into outdated and ineffective treatment options. You’ve heard this before: “Millions of Eastern peoples over millennia of using [insert favorite herb or treatment here] can’t be wrong!” Well, even modern masses of Americans believe that anxiolytics are great for treating infections – crowds can often be wrong.

Who can rescue us from this misinformation? What will stop the slow bleed of wasted research dollars on implausible therapies? Three simple words: science-based-medicine.

In conclusion: not all research warrants further investigation. Plausibility should be a precondition for medical investigations, and a holistic approach to analyzing the potential value of research is warranted. Evidence Based Medicine (as we commonly understand it) should be recognized as an excellent but limited tool. Science Based Medicine should be embraced as the new lens through which healthcare is evaluated. Only then can we move away from the foolish “wisdom of crowds” approach to offering anecdotally relevant treatment solutions to our patients. They deserve better.

Val Jones, M.D., is the President and CEO of Better Health, PLLC, a health education company devoted to providing scientifically accurate health information to consumers. Most recently she was the Senior Medical Director of Revolution Health, a consumer health portal with over 120 million page views per month in its network. Prior to her work with Revolution Health, Dr. Jones served as the founding editor of Clinical Nutrition & Obesity, a peer-reviewed e-section of the online Medscape medical journal. Dr. Jones is also a consultant for Elsevier Science, ensuring the medical accuracy of First Consult, a decision support tool for physicians. Dr. Jones was the principal investigator of several clinical trials relating to sleep, diabetes and metabolism, and she won first place in the Peter Cyrus Rizzo III research competition. Dr. Jones is the author of the popular blog, “Dr. Val and the Voice of Reason,” which won The Best New Medical Blog award in 2007. Her cartoons have been featured at Medscape, the P&S Journal, and the Placebo Journal. She was inducted as a member of the National Press Club in Washington, DC, in July 2008. Dr. Jones has been quoted by various major media outlets, including USA Today, The Wall Street Journal, and the LA Times. She has been a guest on over 20 different radio shows, and was featured on CBS News.

Organizing Your Social Sciences Research Paper

Further Readings

Further readings provide references to sources that the author has deemed useful to a reader seeking additional information or context about the research problem. They are items that are not essential to understanding the overall study and were not cited as sources the author used or quoted from when writing the paper.

Lester, James D. and James D. Lester, Jr. Writing Research Papers: A Complete Guide. 16th edition. Boston, MA: Pearson, 2021.

Structure and Writing Style

Depending on the writing style you are asked to use [e.g., APA, Chicago, MLA], a list of further readings should be located at the end of your paper after the endnotes or references but before any appendices. The list should begin under the heading "Further Readings." Items can be arranged alphabetically by the author's last name, categorized under sub-headings by material type [e.g., books, articles, websites, etc.], or listed by the type of content [e.g., theory, methods, etc.].

If you choose to include a list of further readings, keep in mind the following:

  • The references to further readings are not critical to understanding the central research problem . In other words, if further readings were not included, the citations to sources used in writing the paper would be sufficient to allow the reader to evaluate the credibility of your literature review and analysis of prior research on the topic.
  • Although further readings represent additional or suggested sources, they still must be viewed as relevant to the research problem . Don't include further readings simply to show off your skills in searching for materials on your topic. Even though they may not be central to understanding the research problem, every item listed must relate in some way to helping the reader locate additional information or obtain a broader understanding of the topic.
  • Do not include basic survey texts or reference books like encyclopedias and dictionaries . These sources are either too general to add much insight or, if they provide very specific information, that information likely should have been integrated into the text of your paper. In addition, these types of resources rarely add any significant understanding of the research problem.
  • If you have identified non-textual materials related to your topic that may be of interest to the reader but were not used for your paper, consider including them in a list of further readings. This may include references to sources such as archival collections, documentary or popular films, photograph collections, audio files, or large data sets.

To identify possible titles to include in a list of further readings , examine the sources you found while researching your paper but ended up not citing. Review these items and, playing the role of reader, think about which ones may provide additional insight or background information about the research problem you have investigated.

Soles, Derek. The Essentials of Academic Writing. 2nd edition. Boston, MA: Cengage Learning Houghton Mifflin, 2010; "Further Reading" and "Wikipedia Talk: Further Reading." Wikipedia.


Implications in research: A quick guide

Implications are a bridge between data and action, giving insight into the effects of the research and what it means. It's a chance for researchers to explain the why behind the research.

When writing a research paper, reviewers will want to see you clearly state the implications of your research. If they're missing, they’ll likely reject your article.

Let's explore what research implications are, why they matter, and how to include them in your next article or research paper. 

What are implications in research?

Research implications are the consequences of research findings. They go beyond results and explore your research’s ramifications. 

Researchers can connect their work to real-world impact by identifying its implications. These can inform further research, shape policy, or spark new solutions to old problems.

Always clearly state your implications so they’re obvious to the reader. Never leave the reader to guess why your research matters. While it might seem obvious to you, it may not be evident to someone who isn't a subject matter expert. 

For example, you may do important sociological research with political implications. If a policymaker can't understand or connect those implications logically with your research, it reduces your impact.

What are the key features of implications?

When writing your implications, ensure they have these key features: 

Clear and concise

Implications should be clear, concise, and easily understood by a broad audience. You'll want to avoid overly technical language or jargon. Clearly stating your implications increases their impact and accessibility.

Specific

Implications should link to specific results within your research to ensure they’re grounded in reality. You want them to demonstrate an impact on a particular field or research topic.

Evidence-based

Give your implications a solid foundation of evidence. They need to be rational and based on data from your research, not conjecture. An evidence-based approach to implications will lend credibility and validity to your work.

Balanced

Implications should take a balanced approach, considering the research's potential positive and negative consequences. A balanced perspective acknowledges the challenges and limitations of research and their impact on stakeholders.

Future-oriented

Don't confine your implications to their immediate outcomes. You can explore the long-term effects of the research, including the impact on future research, policy decisions, and societal changes. Looking beyond the immediate adds more relevance to your research. 

When your implications capture these key characteristics, your research becomes more meaningful, impactful, and engaging. 

Types of implications in research

The implications of your research will largely depend on what you are researching. 

However, we can broadly categorize the implications of research into two types: 

Practical: These implications focus on real-world applications and could improve policies and practices.

Theoretical: These implications are broader and might suggest changes to existing theories or models of the world. 

You'll first consider your research's implications in these two broad categories. Will your key findings have a real-world impact? Or are they challenging existing theories? 

Once you've established whether the implications are theoretical or practical, you can break your implication into more specific types. This might include: 

Political implications: How findings influence governance, policies, or political decisions

Social implications: Effects on societal norms, behaviors, or cultural practices

Technological implications: Impact on technological advancements or innovation

Clinical implications: Effects on healthcare, treatments, or medical practices

Commercial or business-relevant implications: Possible strategic paths or actions

Implications for future research: Guidance for future research, such as new avenues of study or refining the study methods

When thinking about the implications of your research, keep them clear and relevant. Consider the limitations and context of your research. 

For example, if your study focuses on a specific population in South America, you may not be able to claim the research has the same impact on the global population. The implication may be that we need further research on other population groups. 

Understanding recommendations vs. implications

While "recommendations" and "implications" may be interchangeable, they have distinct roles within research.

Recommendations suggest action. They are specific, actionable suggestions you could take based on the research. Recommendations may be a part of the larger implication. 

Implications explain consequences. They are broader statements about how the research impacts specific fields, industries, institutions, or societies. 

Within a paper, you should always identify your implications before making recommendations. 

While every good research paper will include implications of research, it's not always necessary to include recommendations. Some research could have an extraordinary impact without real-world recommendations. 

How to write implications in research

Including implications of research in your article or journal submission is essential. You need to clearly state your implications to tell the reviewer or reader why your research matters. 

Because implications are so important, writing them can feel overwhelming.

Here’s our step-by-step guide to make the process more manageable:

1. Summarize your key findings

Start by summarizing your research and highlighting the key discoveries or emerging patterns. This summary will become the foundation of your implications. 

2. Identify the implications

Think critically about the potential impact of your key findings. Consider how your research could influence practices, policies, theories, or societal norms. 

Address the positive and negative implications, and acknowledge the limitations and challenges of your research. 

If you still need to figure out the implications of your research, reread your introduction. Your introduction should include why you’re researching the subject and who might be interested in the results. This can help you consider the implications of your final research. 

3. Consider the larger impact

Go beyond the immediate impact and explore the implications on stakeholders outside your research group. You might include policymakers, practitioners, or other researchers.

4. Support with evidence

Cite specific findings from your research that support the implications. Connect them to your original thesis statement. 

You may have included why this research matters in your introduction, but now you'll want to support that implication with evidence from your research. 

Your evidence may result in implications that differ from the expected impact you cited in the introduction of your paper or your thesis statement. 

5. Review for clarity

Review your implications to ensure they are clear, concise, and jargon-free. Double-check that your implications link directly to your research findings and original thesis statement. 

Following these steps communicates your research implications effectively, boosting its long-term impact. 

Where do implications go in your research paper?

Implications often appear in the discussion section of a research paper between the presentation of findings and the conclusion. 

Putting them here allows you to naturally transition from the key findings to why the research matters. You'll be able to convey the larger impact of your research and transition to a conclusion.

Examples of research implications

Thinking about and writing research implications can be tricky. 

To spark your critical thinking skills and articulate implications for your research, here are a few hypothetical examples of research implications: 

Teaching strategies

A study investigating the effectiveness of a new teaching method might have practical implications for educators. 

The research might suggest modifying current teaching strategies or changing the curriculum’s design. 

There may be an implication for further research into effective teaching methods and their impact on student testing scores. 

Social media impact

A research paper examines the impact of social media on teen mental health. 

Researchers find that spending over an hour on social media daily has significantly worse mental health effects than 15 minutes. 

There could be theoretical implications around the relationship between technology and human behavior. There could also be practical implications in writing responsible social media usage guidelines. 

Disease prevalence

A study analyzes the prevalence of a particular disease in a specific population. 

The researchers find this disease occurs in higher numbers in mountain communities. This could have practical implications on policy for healthcare allocation and resource distribution. 

There may be an implication for further research into why the disease appears in higher numbers at higher altitudes.

These examples demonstrate the considerable range of implications that research can generate.

Clearly articulating the implications of research allows you to enhance the impact and visibility of your work as a researcher. It also enables you to contribute to societal advancements by sharing your knowledge.

The implications of your work could make positive changes in the world around us.


What is Research? – Purpose of Research


The purpose of research is to enhance society by advancing knowledge through the development of scientific theories, concepts and ideas. A research purpose is met through forming hypotheses, collecting data, analysing results, forming conclusions, implementing findings into real-life applications and forming new research questions.

What is Research

Simply put, research is the process of discovering new knowledge. This knowledge can be either the development of new concepts or the advancement of existing knowledge and theories, leading to a new understanding that was not previously known.

As a more formal definition of research, the following has been extracted from the Code of Federal Regulations:

“Research means a systematic investigation, including research development, testing, and evaluation, designed to develop or contribute to generalizable knowledge.”

While research can be carried out by anyone and in any field, most research is usually done to broaden knowledge in the physical, biological, and social worlds. This can range from learning why certain materials behave the way they do, to asking why certain people are more resilient than others when faced with the same challenges.

The use of ‘systematic investigation’ in the formal definition represents how research is normally conducted – a hypothesis is formed, appropriate research methods are designed, data is collected and analysed, and research results are summarised into one or more ‘research conclusions’. These research conclusions are then shared with the rest of the scientific community to add to the existing knowledge and serve as evidence to form additional questions that can be investigated. It is this cyclical process that enables scientific research to make continuous progress over the years; the true purpose of research.

What is the Purpose of Research

From weather forecasts to the discovery of antibiotics, researchers are constantly trying to find new ways to understand the world and how things work – with the ultimate goal of improving our lives.

The purpose of research is therefore to find out what is known, what is not and what we can develop further. In this way, scientists can develop new theories, ideas and products that shape our society and our everyday lives.

Although research can take many forms, there are three main purposes of research:

  • Exploratory: Exploratory research is the first research to be conducted around a problem that has not yet been clearly defined. Exploratory research therefore aims to gain a better understanding of the exact nature of the problem and not to provide a conclusive answer to the problem itself. This enables us to conduct more in-depth research later on.
  • Descriptive: Descriptive research expands knowledge of a research problem or phenomenon by describing it according to its characteristics and population. Descriptive research focuses on the ‘how’ and ‘what’, but not on the ‘why’.
  • Explanatory: Explanatory research, also referred to as causal research, is conducted to determine how variables interact, i.e. to identify cause-and-effect relationships. Explanatory research deals with the ‘why’ of research questions and is therefore often based on experiments.

Characteristics of Research

There are 8 core characteristics that all research projects should have. These are:

  • Empirical  – based on proven scientific methods derived from real-life observations and experiments.
  • Logical  – follows sequential procedures based on valid principles.
  • Cyclic  – research begins with a question and ends with a question, i.e. research should lead to a new line of questioning.
  • Controlled  – rigorous measures put into place to keep all variables constant, except those under investigation.
  • Hypothesis-based  – the research design generates data that sufficiently meets the research objectives and can prove or disprove the hypothesis. It makes the research study repeatable and gives credibility to the results.
  • Analytical  – data is generated, recorded and analysed using proven techniques to ensure high accuracy and repeatability while minimising potential errors and anomalies.
  • Objective  – sound judgement is used by the researcher to ensure that the research findings are valid.
  • Statistical treatment  – statistical treatment is used to transform the available data into something more meaningful from which knowledge can be gained.


Types of Research

Research can be divided into two main types: basic research (also known as pure research) and applied research.

Basic Research

Basic research, also known as pure research, is an original investigation into the reasons behind a process, phenomenon or particular event. It focuses on generating knowledge around existing basic principles.

Basic research is generally considered ‘non-commercial research’ because it does not focus on solving practical problems, and has no immediate benefit or ways it can be applied.

While basic research may not have direct applications, it usually provides new insights that can later be used in applied research.

Applied Research

Applied research investigates well-known theories and principles in order to enhance knowledge around a practical aim. Because of this, applied research focuses on solving real-life problems by deriving knowledge which has an immediate application.

Methods of Research

Research methods for data collection fall into one of two categories: inductive methods or deductive methods.

Inductive research methods focus on the analysis of an observation and are usually associated with qualitative research. Deductive research methods focus on the verification of an observation and are typically associated with quantitative research.

Research definition

Qualitative Research

Qualitative research is a method that enables non-numerical data collection through open-ended methods such as interviews, case studies and focus groups .

It enables researchers to collect data on personal experiences, feelings or behaviours, as well as the reasons behind them. Because of this, qualitative research is often used in fields such as social science, psychology and philosophy and other areas where it is useful to know the connection between what has occurred and why it has occurred.

Quantitative Research

Quantitative research is a method that collects and analyses numerical data through statistical analysis.

It allows us to quantify variables, uncover relationships, and make generalisations across a larger population. As a result, quantitative research is often used in the natural and physical sciences such as engineering, biology, chemistry, physics, computer science, finance, and medical research, etc.

What does Research Involve?

Research often follows a systematic approach known as the scientific method, which is often depicted as an hourglass model.

A research project first starts with a problem statement, or rather, the research purpose for engaging in the study. This can take the form of the ‘ scope of the study ’ or ‘ aims and objectives ’ of your research topic.

Subsequently, a literature review is carried out and a hypothesis is formed. The researcher then creates a research methodology and collects the data.

The data is then analysed using various statistical methods and the null hypothesis is either accepted or rejected.

In both cases, the study and its conclusion are officially written up as a report or research paper, and the researcher may also recommend lines of further questioning. The report or research paper is then shared with the wider research community, and the cycle begins all over again.

Although these steps outline the overall research process, keep in mind that research projects are highly dynamic and are therefore considered an iterative process with continued refinements and not a series of fixed stages.
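To make the analysis stage described above more concrete, here is a minimal sketch of the kind of statistical test that step might involve, assuming a simple two-group comparison; the data, group names, and significance level are hypothetical, and SciPy is assumed to be available:

```python
from scipy import stats

# Hypothetical data: reaction times (ms) for a control group and a treatment group.
control = [512, 498, 530, 505, 521, 517, 494, 508]
treatment = [471, 486, 465, 492, 478, 483, 469, 488]

# Null hypothesis: the two groups have equal means.
# A two-sample t-test returns a p-value; if it falls below the chosen
# significance level (commonly 0.05), the null hypothesis is rejected.
t_stat, p_value = stats.ttest_ind(control, treatment)

alpha = 0.05
decision = ("reject the null hypothesis" if p_value < alpha
            else "fail to reject the null hypothesis")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}: {decision}")
```

In practice the choice of test depends on the research design and the assumptions the data satisfy; the point here is only that the "analyse data, then accept or reject the null hypothesis" step corresponds to a concrete, repeatable calculation.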


How to write the part scope for further research?

The ‘scope for further research’ section is essential in every academic study, such as a thesis, dissertation or journal paper. The main purpose of this section is to make readers aware of the findings emerging from the study and of its shortcomings. These shortcomings, and the research gaps that remain, guide future researchers toward domains they should consider, saving them time and helping them avoid repetitive outcomes.

Furthermore, this section also gives researchers guidance on other dimensions and perspectives from which the topic can be explored.

Emphasize the significance of further research

There are no specific rules or guidelines for this part. However, since it is expected to be brief and informative, the following format is recommended.

Start this section by reflecting on the significance of the present study in brief. Answering questions such as:

  • Did the research deviate from its initial objectives?
  • What was the original idea behind the research?
  • From where was the inspiration drawn?

Answering such questions is important because the reader should connect to the idea of the research.

Limitations of the study

Furthermore, briefly explain the limitations of the study. This step proves significant for scholars who wish to address areas that can enrich the research topic further. The limitations can either be presented separately, in an independent section called “Limitations of the research”, or can be integrated within the future scope. Also, the limitations should be scalable and relatable, i.e. something that other researchers feel can be accomplished under different circumstances. This is also the key to setting recommendations for future studies.

Justify the future scope

Furthermore, provide justifications for the reasons why the mentioned areas have not been covered in the current study. Identify the probable bottlenecks other researchers might encounter while considering future research related to the topic. This will help them formulate an achievable or practically applicable plan for their own research, including the scope, aim and methodology.

Suggestions

Finally, the approach of the researcher becomes more direct. To be specific, some direct research suggestions should be given to other scholars for future studies. Be precise so that the reader is confident to undertake future studies in the suggested areas.

Answering the following questions can help:

  • What should be explored by others?
  • Why is it worth exploring?
  • What can be achieved from it?
  • Will the suggested study be relevant five to ten years down the line?
  • How does it add to the overall body of the literature?

Steps for writing "Scope for further research" part

Types of future scope

There are different types of future research scope, depending on what motivates them, such as:

  • A future scope focused solely on the study findings.
  • A future scope focused on the theory or theoretical model used.
  • A future scope arising from a lack of support in the literature.
  • A future scope for wider geographical outreach.
  • A future scope for different testing methods and statistics.
  • A future scope involving a complete redesign of the methodology.

Points to keep in mind

The most important aspect of writing the future scope section is to present it in an affirmative way. As noted in the earlier sections, it is crucial to establish whether the limitations are methods-based or researcher-based. The section should be concise and relevant to the field of study. Refrain from citing references in the scope for further research section.

Make sure the points discussed remain achievable in a proximal time frame. In addition, make sure that they are in relation to the theoretical development of the study in focus.


How to Write an “Implications of Research” Section


When writing research papers, theses, journal articles, or dissertations, one cannot ignore the importance of research. You’re not only the writer of your paper but also the researcher! Moreover, it’s not just about researching your topic, filling your paper with abundant citations, and topping it off with a reference list. You need to dig deep into your research and provide related literature on your topic. You must also discuss the implications of your research.

Interested in learning more about implications of research? Read on! This post will define these implications, explain why they’re essential, and, most importantly, show how to write them.

What Are Implications of Research?

Implications are potential questions from your research that justify further exploration. They state how your research findings could affect policies, theories, and/or practices.

Implications can either be practical or theoretical. The former is the direct impact of your findings on related practices, whereas the latter is the impact on the theories you have chosen in your study.

Example of a practical implication: If you’re researching a teaching method, the implication would be how teachers can use that method based on your findings.

Example of a theoretical implication: You added a new variable to Theory A so that it could cover a broader perspective.

Finally, implications aren’t the same as recommendations, and it’s important to know the difference between them.

Questions you should consider when developing the implications section:

●  What is the significance of your findings?

●  How do the findings of your study fit with or contradict existing research on this topic?

●  Do your results support or challenge existing theories? If they support them, what new information do they contribute? If they challenge them, why do you think that is?

Why Are Implications Important?

You need implications for the following reasons:

● To reflect on what you set out to accomplish in the first place

● To see if there’s a change to the initial perspective, now that you’ve collected the data

● To inform your audience, who might be curious about the impact of your research

How to Write an Implications Section

Usually, you write your research implications in the discussion section of your paper. This is the section before the conclusion when you discuss all the hard work you did. Additionally, you’ll write the implications section before making recommendations for future research.

Implications should begin with what you discovered in your study, which differs from what previous studies found, and then you can discuss the implications of your findings.

Your implications need to be specific, meaning you should show the exact contributions of your research and why they’re essential. They should also begin with a specific sentence structure.

Examples of starting implication sentences:

●  These results build on existing evidence of…

●  These findings suggest that…

●  These results should be considered when…

●  While previous research has focused on x, these results show that y…


You should write your implications after you’ve stated the results of your research. In other words, summarize your findings and put them into context.

Example 1

The result: One study found that young learners enjoy short activities when learning a foreign language.

The implications: This result suggests that foreign language teachers use short activities when teaching young learners, as they positively affect learning.

Example 2

The result: One study found that people who listen to calming music just before going to bed sleep better than those who watch TV.

The implications: These findings suggest that listening to calming music aids sleep quality, whereas watching TV does not.

To summarize, remember these key pointers:

●  Implications are the impact of your findings on the field of study.

●  They serve as a reflection of the research you’ve conducted.              

●  They show the specific contributions of your findings and why the audience should care.

●  They can be practical or theoretical.

●  They aren’t the same as recommendations.

●  You write them in the discussion section of the paper.

●  State the results first, and then state their implications.




research, n.¹


What does the noun research mean?

There are seven meanings listed in OED's entry for the noun research , three of which are labelled obsolete. See ‘Meaning & use’ for definitions, usage, and quotation evidence.

Where does the noun research come from?

Earliest known use

The earliest known use of the noun research is in the late 1500s.

OED's earliest evidence for research is from 1577, in ‘F. de L'Isle’'s Legendarie.

research is apparently formed within English, by derivation; modelled on a French lexical item.

Etymons: re- prefix, search n.



Definition of further

 (Entry 1 of 3)

Farther and further have been used more or less interchangeably throughout most of their history, but currently they are showing signs of diverging. As adverbs they continue to be used interchangeably whenever spatial, temporal, or metaphorical distance is involved. But where there is no notion of distance, further is used.

Further is also used as a sentence modifier, but farther is not. A polarizing process appears to be taking place in their adjective use. Farther is taking over the meaning of distance, and further the meaning of addition.

Definition of further  (Entry 2 of 3)

Definition of further  (Entry 3 of 3)

transitive verb

advance , promote , forward , further mean to help (someone or something) to move ahead.

advance stresses effective assisting in hastening a process or bringing about a desired end.

promote suggests an encouraging or fostering and may denote an increase in status or rank.

forward implies an impetus forcing something ahead.

further suggests a removing of obstacles in the way of a desired advance.


Word History

Adverb, Adjective, and Verb

Middle English, from Old English furthor (akin to Old High German furthar further), comparative, from the base of Old English forth forth

before the 12th century, in the meaning defined at sense 1

13th century, in the meaning defined at sense 1

before the 12th century, in the meaning defined above

Phrases Containing further

  • further education
  • go no further
  • nothing could be further from someone's mind
  • nothing could be further from the truth
  • nothing (further) to do with
  • until further notice
  • upon further review of
  • without further ado


Cite this Entry

“Further.” Merriam-Webster.com Dictionary , Merriam-Webster, https://www.merriam-webster.com/dictionary/further. Accessed 14 May. 2024.


How Much Research Is Being Written by Large Language Models?

New studies show a marked spike in LLM usage in academia, especially in computer science. What does this mean for researchers and reviewers?


In March of this year, a tweet about an academic paper went viral for all the wrong reasons. The introduction section of the paper, published in Elsevier’s Surfaces and Interfaces, began with this line: “Certainly, here is a possible introduction for your topic.”

Look familiar? 

It should, if you are a user of ChatGPT and have applied its talents for the purpose of content generation. LLMs are being increasingly used to assist with writing tasks, but examples like this in academia are largely anecdotal and had not been quantified before now. 

“While this is an egregious example,” says  James Zou , associate professor of biomedical data science and, by courtesy, of computer science and of electrical engineering at Stanford, “in many cases, it’s less obvious, and that’s why we need to develop more granular and robust statistical methods to estimate the frequency and magnitude of LLM usage. At this particular moment, people want to know what content around us is written by AI. This is especially important in the context of research, for the papers we author and read and the reviews we get on our papers. That’s why we wanted to study how much of those have been written with the help of AI.”

In two papers looking at LLM use in scientific publishing, Zou and his team* found that 17.5% of computer science papers and 16.9% of peer review text had at least some content drafted by AI. The paper on LLM usage in peer reviews will be presented at the International Conference on Machine Learning.

Read  Mapping the Increasing Use of LLMs in Scientific Papers and  Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews  

Here Zou discusses the findings and implications of this work, which was supported through a Stanford HAI Hoffman Yee Research Grant . 

How did you determine whether AI wrote sections of a paper or a review?

We first saw that there are these specific words – like commendable, innovative, meticulous, pivotal, intricate, realm, and showcasing – whose frequency in reviews sharply spiked, coinciding with the release of ChatGPT. Additionally, we know that these words are much more likely to be used by LLMs than by humans. The reason we know this is that we actually did an experiment where we took many papers, used LLMs to write reviews of them, and compared those reviews to reviews written by human reviewers on the same papers. Then we quantified which words are more likely to be used by LLMs vs. humans, and those are exactly the words listed. The fact that they are more likely to be used by an LLM and that they have also seen a sharp spike coinciding with the release of LLMs is strong evidence.

[Charts: significant shift in the frequency of certain adjectives in research journals.]
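The word-frequency comparison described above can be sketched very simply. The following is only an illustration, not the authors' actual estimator: the word list is taken from the interview, while the corpora and variable names (pre_reviews, post_reviews) are hypothetical placeholders for review texts collected before and after ChatGPT's release.

```python
from collections import Counter
import re

# Words the interview identifies as disproportionately favored by LLMs.
LLM_FAVORED = {"commendable", "innovative", "meticulous", "pivotal",
               "intricate", "realm", "showcasing"}

def word_rates(texts):
    """Occurrences per 1,000 tokens of each tracked word across a list of documents."""
    counts, total = Counter(), 0
    for text in texts:
        tokens = re.findall(r"[a-z]+", text.lower())
        counts.update(tokens)
        total += len(tokens)
    return {w: 1000.0 * counts[w] / max(total, 1) for w in LLM_FAVORED}

def frequency_shift(pre_corpus, post_corpus):
    """Ratio of post- to pre-ChatGPT rate for each tracked word.

    A sharp spike (ratio well above 1) coinciding with ChatGPT's release is the
    kind of signal the study treats as evidence of LLM-modified text.
    """
    pre, post = word_rates(pre_corpus), word_rates(post_corpus)
    return {w: (post[w] / pre[w]) if pre[w] > 0 else float("inf")
            for w in LLM_FAVORED}

# Hypothetical usage: pre_reviews and post_reviews would be lists of review texts.
# print(frequency_shift(pre_reviews, post_reviews))
```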

Some journals permit the use of LLMs in academic writing, as long as it’s noted, while others, including  Science and the ICML conference, prohibit it. How are the ethics perceived in academia?

This is an important and timely topic because the policies of various journals are changing very quickly. For example,  Science said in the beginning that they would not allow authors to use language models in their submissions, but they later changed their policy and said that people could use language models, but authors have to explicitly note where the language model is being used. All the journals are struggling with how to define this and what’s the right way going forward.

You observed an increase in usage of LLMs in academic writing, particularly in computer science papers (up to 17.5%). Math and  Nature family papers, meanwhile, used AI text about 6.3% of the time. What do you think accounts for the discrepancy between these disciplines? 

Artificial intelligence and computer science disciplines have seen an explosion in the number of papers submitted to conferences like ICLR and NeurIPS. And I think that’s really caused a strong burden, in many ways, to reviewers and to authors. So now it’s increasingly difficult to find qualified reviewers who have time to review all these papers. And some authors may feel more competition that they need to keep up and keep writing more and faster. 

You analyzed close to a million papers on arXiv, bioRxiv, and  Nature from January 2020 to February 2024. Do any of these journals include humanities papers or anything in the social sciences?  

We mostly wanted to focus more on CS and engineering and biomedical areas and interdisciplinary areas, like  Nature family journals, which also publish some social science papers. Availability mattered in this case. So, it’s relatively easy for us to get data from arXiv, bioRxiv, and  Nature . A lot of AI conferences also make reviews publicly available. That’s not the case for humanities journals.

Did any results surprise you?

A few months after ChatGPT’s launch, we started to see a rapid, linear increase in the usage pattern in academic writing. This tells us how quickly these LLM technologies diffuse into the community and become adopted by researchers. The most surprising finding is the magnitude and speed of the increase in language model usage. Nearly a fifth of papers and peer review text use LLM modification. We also found that peer reviews submitted closer to the deadline and those less likely to engage with author rebuttal were more likely to use LLMs. 

This suggests a couple of things. Perhaps some of these reviewers are not as engaged with reviewing these papers, and that’s why they are offloading some of the work to AI to help. This could be problematic if reviewers are not fully involved. As one of the pillars of the scientific process, it is still necessary to have human experts providing objective and rigorous evaluations. If this is being diluted, that’s not great for the scientific community.

What do your findings mean for the broader research community?

LLMs are transforming how we do research. It’s clear from our work that many papers we read are written with the help of LLMs. There needs to be more transparency, and people should state explicitly how LLMs are used and if they are used substantially. I don’t think it’s always a bad thing for people to use LLMs. In many areas, this can be very useful. For someone who is not a native English speaker, having the model polish their writing can be helpful. There are constructive ways for people to use LLMs in the research process; for example, in earlier stages of their draft. You could get useful feedback from an LLM in real time instead of waiting weeks or months to get external feedback.

But I think it’s still very important for the human researchers to be accountable for everything that is submitted and presented. They should be able to say, “Yes, I will stand behind the statements that are written in this paper.”

*Collaborators include:  Weixin Liang ,  Yaohui Zhang ,  Zhengxuan Wu ,  Haley Lepp ,  Wenlong Ji ,  Xuandong Zhao ,  Hancheng Cao ,  Sheng Liu ,  Siyu He ,  Zhi Huang ,  Diyi Yang ,  Christopher Potts ,  Christopher D. Manning ,  Zachary Izzo ,  Yaohui Zhang ,  Lingjiao Chen ,  Haotian Ye , and Daniel A. McFarland .



X4 2024 Strategy & Research Showcase: Introducing the future of insights generation

At X4 ® 2024, we showcased how Qualtrics AI ® enables teams to quickly summarize, analyze, and elevate qualitative and quantitative data, transforming the way research is done and amplifying business outcomes at scale.

Customers tell us they’ve been flooded with potential AI solutions and specialist suppliers over the last year. AI is profoundly enhancing their efficiency and effectiveness, but with technology advancing so fast it’s hard to know what’s hype versus product truth. Meanwhile, tighter budgets make it more critical than ever to choose tools and suppliers wisely.

To help our customers conduct deep, powerful research, without breaking the bank, we’ve enhanced the XM ® product suites they know and trust with purpose-built, integrated machine intelligence. These features improve efficiency, erase data silos, and add new dimensions of scalability.

If your market research function is ready to go from a bolted-together mish-mash of tools, agencies, and third parties to a single, unified source of knowledge that generates insights on an unprecedented scale, then we think you’re going to love the new Strategy and Research Suite™.

Here’s a look at what we unveiled at X4.

AI-powered, centralized data storage and semantic search

Research Hub takes our single source of knowledge principle to the next level. It’s an AI-powered repository of owned research spanning your total research knowledge capital, from customer feedback to brand studies to product insights.

Research Hub liberates your mothballed data and combines it with what’s current, so instead of commissioning a new study, you can find the answers you need using information you already own. It unites quantitative and qualitative research from all of your audiences to deliver maximum ROI from your research activities past and present.

Research Hub goes far beyond basic keywords. It uses Qualtrics-tailored AI to understand the intent and context of your search and deliver more meaningful, useful results. It does this across millions of data points, freeing your time and energy for the work that only human minds can do.

You can use Research Hub to collate and curate data collections that are relevant to your projects, then share them with team members. In the future, we’ll add functionality that lets you combine data from across multiple studies, yielding brand new insights.

Research Hub is live now with selected preview customers.

New ways to connect with consumers and get effortless insights, fast

Panels are now easier to integrate. Our customers can now connect to a third party online panel from within Qualtrics, choosing from a network of 200+ partners. You can create your own panel or select from pre-configured panel templates - whatever works best to pinpoint your target audience.

Each panel respondent interacts through our Panelist Portal app , which puts you in control of their experience. Your team can easily customize which studies participants see and even send targeted messages to segments of your audience.

Online panels are ideal for diving straight in and getting results. Where your panel requirements are more complex, our expert Research Services team is on-hand to support you.

A new generation of research features and AI tools

AI really takes the brakes off scalability, allowing you to process high volumes of qualitative data at high speed. Here are just a few of the ways our AI tools save you time and money. Many of these products are live right now or in preview with our customers.

Coming soon:

  • Automated video summaries parse experience data from your panels and in-depth interviews and synthesize it into a clear summary of trends and insights. They identify up to five themes across your response data, and tie them to quotes within the transcript that exemplify the point in the respondent’s own words.
  • Video transcripts are automatic, with up to 10 speakers identified, and they’re broken up into chapters for easy navigation. Naturally, sentiment and topic analysis are at work, showing up in the transcript with helpful highlights and color coding.
  • Interview scheduling tools minimize the admin around online in-depth interviews. A survey screens potential participants, then offers them a time-slot when interviewers are free. They’ll get a link to join a Zoom call, hosted within the Qualtrics platform. This feature is now in preview with selected customers.
  • Using our highlight reels builder , you can create video clip summaries that capture the essence of your unmoderated user testing data and tell a compelling story when you present your results.
  • What if you could have a conversation with all of your respondents at once - and get coherent responses? With conversational qualitative analytics you can delve into your qualitative data using interactive dialogue to find patterns and trends. Like any good conversation, it’s effortless and engaging.
  • Survey response clarity puts AI to work improving the quality of your survey data, helping you avoid having to re-run studies or rely on flawed data when making important business decisions.
  • Adaptive survey follow-up reacts to your respondents’ answers, adaptively requesting additional information to add layers of meaning and context.
  • All of your data forms part of a bigger picture, so we developed Dashboard Summarization to keep track of the most critical insights from across your data universe and deliver them when you need them. Less clicking through dashboards, more taking action.
  • Why write your own report? Executive report generation automatically summarizes critical insights from your dashboards and presents them in a clear, actionable format.

The next generation of digital UX research

We’re bringing UX testing tools and capabilities together into a single, cost-effective product - one that sits on the same platform as the rest of your research. It unites UX essentials like surveys, card sorting, tree testing and video analysis, and methodologies like moderated and unmoderated user testing. And as before, these tools are elevated with AI functionality .

For example, with our unmoderated testing you can set up a task flow for users to complete, inviting them to record themselves. They will put your online prototype through its paces while generating audio and video data for you to transform into insights using AI tools.

Moderated testing with moderator and scheduling tools is currently in preview, and everything else is live right now for Qualtrics customers.

Find out how to transform data into outcomes with Qualtrics AI

Qualtrics // Experience Management

Qualtrics, the leader and creator of the experience management category, is a cloud-native software platform that empowers organizations to deliver exceptional experiences and build deep relationships with their customers and employees.

With insights from Qualtrics, organizations can identify and resolve the greatest friction points in their business, retain and engage top talent, and bring the right products and services to market. Nearly 20,000 organizations around the world use Qualtrics’ advanced AI to listen, understand, and take action. Qualtrics uses its vast universe of experience data to form the largest database of human sentiment in the world. Qualtrics is co-headquartered in Provo, Utah and Seattle.


MIT Technology Review


Google DeepMind’s new AlphaFold can model a much larger slice of biological life

AlphaFold 3 can predict how DNA, RNA, and other molecules interact, further cementing its leading role in drug discovery and research. Who will benefit?

By James O'Donnell

Google DeepMind has released an improved version of its biology prediction tool, AlphaFold, that can predict the structures not only of proteins but of nearly all the elements of biological life.

It’s a development that could help accelerate drug discovery and other scientific research. The tool is currently being used to experiment with identifying everything from resilient crops to new vaccines. 

While the previous model, released in 2020, amazed the research community with its ability to predict protein structures, researchers have been clamoring for the tool to handle more than just proteins. 

Now, DeepMind says, AlphaFold 3 can predict the structures of DNA, RNA, and molecules like ligands, which are essential to drug discovery. DeepMind says the tool provides a more nuanced and dynamic portrait of molecule interactions than anything previously available. 

“Biology is a dynamic system,” DeepMind CEO Demis Hassabis told reporters on a call. “Properties of biology emerge through the interactions between different molecules in the cell, and you can think about AlphaFold 3 as our first big sort of step toward [modeling] that.”

AlphaFold 2 helped us better map the human heart, model antimicrobial resistance, and identify the eggs of extinct birds, but we don’t yet know what advances AlphaFold 3 will bring. 

Mohammed AlQuraishi, an assistant professor of systems biology at Columbia University who is unaffiliated with DeepMind, thinks the new version of the model will be even better for drug discovery. “The AlphaFold 2 system only knew about amino acids, so it was of very limited utility for biopharma,” he says. “But now, the system can in principle predict where a drug binds a protein.”

Isomorphic Labs, a drug discovery spinoff of DeepMind, is already using the model for exactly that purpose, collaborating with pharmaceutical companies to try to develop new treatments for diseases, according to DeepMind. 

AlQuraishi says the release marks a big leap forward. But there are caveats.

“It makes the system much more general, and in particular for drug discovery purposes (in early-stage research), it’s far more useful now than AlphaFold 2,” he says. But as with most models, the impact of AlphaFold will depend on how accurate its predictions are. For some uses, AlphaFold 3 has double the success rate of similar leading models like RoseTTAFold. But for others, like protein-RNA interactions, AlQuraishi says it’s still very inaccurate. 

DeepMind says that depending on the interaction being modeled, accuracy can range from 40% to over 80%, and the model will let researchers know how confident it is in its prediction. With less accurate predictions, researchers have to use AlphaFold merely as a starting point before pursuing other methods. Regardless of these ranges in accuracy, if researchers are trying to take the first steps toward answering a question like which enzymes have the potential to break down the plastic in water bottles, it’s vastly more efficient to use a tool like AlphaFold than experimental techniques such as x-ray crystallography. 

A revamped model  

AlphaFold 3’s larger library of molecules and higher level of complexity required improvements to the underlying model architecture. So DeepMind turned to diffusion techniques, which AI researchers have been steadily improving in recent years and now power image and video generators like OpenAI’s DALL-E 2 and Sora. It works by training a model to start with a noisy image and then reduce that noise bit by bit until an accurate prediction emerges. That method allows AlphaFold 3 to handle a much larger set of inputs.
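
The denoising loop described here can be sketched in a few lines. The toy below illustrates the general diffusion idea only, not DeepMind’s architecture: a real diffusion model learns to predict what noise to remove at each step, so the denoiser function is a stand-in stub that nudges the sample toward a known target purely so the loop runs end to end.

```python
# Toy sketch of iterative denoising, the general idea behind diffusion models;
# not AlphaFold 3's actual architecture. A trained network would *predict* the
# denoising direction; here it is stubbed with the known clean target.
import numpy as np

rng = np.random.default_rng(0)
clean = np.array([0.0, 1.0, 2.0, 3.0])                    # stand-in for atom coordinates
steps = 50
noisy = clean + rng.normal(scale=2.0, size=clean.shape)   # start from heavy noise

def denoiser(x, t):
    # Stand-in for a trained model: move a small step toward the clean
    # structure at each iteration.
    return (clean - x) / (steps - t)

x = noisy.copy()
for t in range(steps):
    x = x + denoiser(x, t)                                 # strip away a little noise per step

print("noisy start:", np.round(noisy, 2))
print("denoised   :", np.round(x, 2))
print("target     :", clean)
```

Sampling several different starting noises is also how diffusion models can produce a range of candidate outputs rather than a single answer.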

That marked “a big evolution from the previous model,” says John Jumper, director at Google DeepMind. “It really simplified the whole process of getting all these different atoms to work together.”

It also presented new risks. As the AlphaFold 3 paper details, the use of diffusion techniques made it possible for the model to hallucinate, or generate structures that look plausible but in reality could not exist. Researchers reduced that risk by adding more training data to the areas most prone to hallucination, though that doesn’t eliminate the problem completely. 

Restricted access

Part of AlphaFold 3’s impact will depend on how DeepMind divvies up access to the model. For AlphaFold 2, the company released the open-source code, allowing researchers to look under the hood to gain a better understanding of how it worked. It was also available for all purposes, including commercial use by drugmakers. For AlphaFold 3, Hassabis said, there are no current plans to release the full code. The company is instead releasing a public interface for the model called the AlphaFold Server, which imposes limitations on which molecules can be experimented with and can only be used for noncommercial purposes. DeepMind says the interface will lower the technical barrier and broaden the use of the tool to biologists who are less knowledgeable about this technology.


City Journal

What Does Quality of Evidence Mean?

Gender clinician Jack Turban misunderstands key concepts in evidence-based medicine.

On Wednesday, the Dartmouth Political Union hosted a debate on sex and gender between MIT philosopher Alex Byrne, University of California at San Francisco psychiatrist Jack Turban, and Aston University emerita neuroscientist Gina Rippon.

An interesting moment came when Byrne asked Turban what he thought of the recently published Cass Review, the 388-page comprehensive report on youth gender medicine, authored by British physician Hilary Cass and her colleagues. Turban claimed that the report found “moderate” quality evidence for “gender-affirming care,” and that, contrary to its reception, the review’s findings did not lend support to restrictions on puberty blockers and other medical interventions for pediatric gender dysphoria.

Turban’s characterization is at odds with that of Cass and her team. Cass’s report, published alongside seven new systematic evidence reviews on several issues associated with youth gender transition, concludes that the evidence for the safety and efficacy of puberty blockers and cross-sex hormones as treatments for gender-related distress in adolescents is “remarkably weak.” Youth gender medicine, Cass writes in the prestigious British Medical Journal, “is built on shaky foundations.”

Here, I want to respond to Turban’s comments about evidence quality in the Cass Review. These issues are technical but important for those following the debate over pediatric gender medicine.

First, some background. Jack Turban is one of the nation’s most prominent defenders of pediatric sex-trait modification (“gender-affirming care”). He has garnered a reputation outside of his circle of followers for pursuing agenda-driven research, evading scientific debate, launching ad hominem attacks on scientific critics, and misrepresenting research findings—including his own.

In a recent deposition in a lawsuit over Idaho’s Vulnerable Child Protection Act, which bans sex-trait modification in minors, Turban demonstrated under oath his lack of understanding of, or failure to be honest about, basic principles of evidence-based medicine (EBM). He seemed unaware, for example, that systematic reviews of evidence are meant not only to assess the available research but also to score the quality of that research. Gordon Guyatt, a professor of health research methods, world-renowned expert in EBM, and a founder of the field, has said that when it comes to systematic reviews, Turban has shown he “does not understand what it’s all about.”

The studies Turban and other gender clinicians cite in support of “gender-affirming care” often suffer from high risk of bias and show inconsistent findings regarding mental-health outcomes. Further, these studies often are conducted by gender clinicians with ideological, professional, and even financial stakes in administering drugs and surgeries to minors.

The benefit of systematic reviews is that they do not take authors’ conclusions at face value. Instead, they allow independent experts in research methods and evidence evaluation to scrutinize studies’ designs and conclusions. The research on youth gender medicine interventions generally lacks adequate follow-up time, has high drop-out rates, fails to control for potential confounding factors, and regards as homogeneous a patient population with significantly different clinical presentations.

Because systematic reviews are EBM’s gold standard for furnishing clinicians and guideline developers with reliable information, it’s necessary to respond directly to Turban’s claim in the Dartmouth debate that the new systematic reviews associated with the Cass report found “moderate quality” evidence that puberty blockers improve mental health. (I will focus here on the puberty blockers review, although the analysis applies to the cross-sex hormone review as well.)

Turban’s claim is false, for three reasons. First, he ignores the crucial distinction in EBM between quality of studies and quality of evidence—an admittedly non-obvious distinction, but one that any competent clinician who opines on EBM issues should comprehend. Second, he fails to distinguish between the mental-health and non-mental-health-related research cited in the report. Third, he ignores the fact that the authors of the systematic reviews used a scoring tool that already sets a lower bar for evaluating research. In effect, the reviewers (and the Cass team) performed affirmative action for youth-gender-medicine research and still found it wanting.

Quality of Studies v. Quality of Evidence. To evaluate the quality of studies on puberty suppression, the authors of the systematic review used a modified version of the Newcastle–Ottawa Quality Assessment Scale (NOS), a tool for evaluating nonrandomized studies. Studies assessed by the scale receive one of three grades: low, moderate, or high. Of the 50 studies on puberty suppression the authors identified as relevant, 24 (including one by Turban) were excluded for being low quality. Of the remaining 26, one was determined to be high quality, and 25 moderate quality. Turban’s confusion is therefore understandable: wouldn’t the finding that most of the research is moderate quality mean that the evidence overall is moderate quality?

Not exactly. In EBM, “quality of study” refers to a given study’s risk of bias. “Risk of bias” is a technical term, which Cochrane defines as “systematic error, or deviation from the truth, in results.” To give an obvious example, if you want to test the effects of puberty blockers on mental health and give them to patients who are already receiving psychotherapy, any positive outcomes may be attributable to the drugs, the therapy, or some combination of the two. A study design that is incapable of isolating the effects of puberty blockers from confounding variables like psychotherapy is at high risk of bias.

Quality of evidence, on the other hand, refers to the confidence we can have in our estimate of an intervention’s effect, based on the entire body of information. Quality of studies (based on risk of bias) is one factor that determines quality of evidence; others include publication bias (when, for example, a journal declines to publish an unfavorable study); inconsistency (when studies addressing the same question come to significantly different results); indirectness (when the studies do not directly compare interventions of interest in populations of interest, or when they do not report outcomes deemed important for clinical decisions); and imprecision (when studies are subject to random error, often due to small sample sizes).
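
The studies-versus-evidence distinction can be made concrete with a small illustration. The sketch below is not the York reviewers’ method; the starting level and the one-step-per-concern downgrading rule are simplified, GRADE-style assumptions used only to show how a stack of “moderate quality” studies can still amount to low or very low quality evidence once body-of-evidence concerns are counted.

```python
# Illustrative sketch only: a simplified, GRADE-style toy, not the York
# reviewers' actual method.

# Quality of *studies*: a per-study grade. The counts mirror the puberty-blocker
# review described above (24 low, 25 moderate, 1 high).
nos_grades = ["low"] * 24 + ["moderate"] * 25 + ["high"] * 1
included = [g for g in nos_grades if g != "low"]            # low-quality studies excluded
print(f"included {len(included)} of {len(nos_grades)} studies; most are 'moderate' quality")

# Quality of *evidence*: one judgment about the whole body of information.
# Non-randomized evidence starts at "low" and is downgraded for each serious concern.
levels = ["very low", "low", "moderate", "high"]
concerns = {
    "risk_of_bias": True,     # e.g. confounding from concurrent psychotherapy
    "inconsistency": True,    # studies disagree on mental-health outcomes
    "indirectness": True,     # studied cohorts differ from today's patient population
    "imprecision": True,      # small samples
}
grade_index = levels.index("low") - sum(concerns.values())
print("body-of-evidence grade:", levels[max(grade_index, 0)])  # -> "very low"
```

The point of the toy is only the asymmetry: the first block counts grades assigned to individual studies, while the second makes a single judgment about the body of evidence, which can sit well below the typical study grade.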

Gender medicine research, and youth gender medicine research in particular, suffers from these problems. To give one example, inapplicability is a form of indirectness in which the subjects of a study are different from the patients to whom an intervention is being offered. The gold standard of research in youth gender medicine is the Dutch study. That study suffers from high risk of bias, but it is also inapplicable to the majority of minors now seeking “gender-affirming care” because it was done on patients with a different clinical presentation than the group responsible for the sudden and dramatic rise in gender dysphoria diagnoses and referrals: teen girls with no prepubertal history of gender issues and with high rates of psychiatric and/or neurocognitive challenges.

Turban’s claim that the systematic reviews on puberty blockers and cross-sex hormones found “moderate” quality evidence is therefore incorrect. The reviews found moderate and a few high-quality studies, but they did not find moderate quality evidence. In fact, the University of York authors of the systematic reviews did not even evaluate the quality of evidence using widely accepted and standardized tools such as Grading of Recommendations, Assessment, Development, and Evaluations (GRADE). Instead, they summarized their findings in narrative form. “There is a lack of high-quality research assessing puberty suppression in adolescents experiencing gender dysphoria/incongruence,” they wrote. “No conclusions can be drawn about the impact on gender dysphoria, mental and psychosocial health or cognitive development. Bone health and height may be compromised during treatment.”

Quality With Regard to What Outcomes? Turban’s second mistake is to suggest that the “moderate-quality evidence” was about “improvements in mental health.” A look at the chart included in the systematic review on puberty blockers, however, reveals that of the 25 moderate-quality studies, most appear in four columns: puberty suppression (17 studies), physical health (14), bone health (5), and side effects (3) (most studies examine more than one domain). Many of the studies do not examine mental-health outcomes.

It’s not possible for me to give a detailed account here of what each of the moderate-quality studies examined, but a few examples should be enough to show why Turban’s suggestion is misleading. One moderate-quality study included in the “puberty suppression” category tested whether Histrelin implants (a puberty blocker) are still effective at disrupting the puberty-inducing mechanism of the pituitary gland after one year. Another moderate-quality study, in the “physical health” category, was about the effects on body composition (in terms of height and lean mass) from sudden withdrawal of sex hormones in late-pubertal adolescents. Neither study examined participants’ mental-health outcomes.

Lowering the Bar for “Gender-Affirming Care.” To assess the strength of various studies, the University of York systematic review authors used a scoring tool specifically designed for nonrandomized studies. Such studies already face a higher risk of bias, since their proctors do not randomly assign comparable participants into treatment and control groups. The field of youth gender medicine lacks even a single randomized controlled study—the gold standard for testing causal claims about the safety and efficacy of medical interventions.

I asked Yuan Zhang, an assistant clinical professor of health research methods, evidence, and impact at McMaster University, home of EBM, for his impression of the Cass-linked systematic review’s methods. “With regard to the question of the effects of puberty blockers on mental health, even if the University of York team had done a quality of evidence scoring, it would not have been better than low quality.” Zhang is referring to the lowest score on GRADE. “If you want to produce credible evidence of cause and effect, for instance in order to be able to say that puberty blockers are responsible for improvement in mental health, there is no alternative to a randomized controlled trial.”

Advocates of puberty blockers like Turban argue that conducting an RCT in the gender-medicine context would be unethical, as we already know that puberty blockers are “medically necessary” interventions and that withholding them would cause harm. Of course, this claim assumes the very thing that’s in dispute. Proponents also argue that conducting a double-blinded RCT would be impossible, as there is no way to hide from participants (and their physicians) whether puberty blockers or placebos were being administered. This second objection is more reasonable, but it’s possible to design a non-blinded RCT with active comparators. Non-puberty-suppressed participants can be given antidepressants or psychotherapy, for instance. The passage of time alone may have an effect on mental health (a phenomenon known as “regression to the mean”).

As James Cantor, a psychologist and author of important articles and expert reports on gender medicine, told me, “Even if one accepted, for argument’s sake, that RCTs couldn’t be done, it still wouldn’t justify barreling ahead as if they had been done and always showed unmitigated success.” The reason should be obvious: drugs and surgeries pose real and potentially serious risks to a person’s physical and mental health. Because in this case they are being given to adolescents who are physically healthy, the burden is on proponents of hormonal interventions to prove their safety and efficacy.

How do reviewers assess the quality of non-randomized studies, which inherently are more prone to bias? The most common tool is “Risk of Bias in Non-randomized Studies—Interventions” (ROBINS-I). It’s not clear why the authors of the Cass systematic reviews chose not to use this tool, but one possible reason is that ROBINS-I is very rigorous in assessing risk of bias in non-randomized research. Applying it to existing gender-medicine research would likely have resulted in all available studies being found to be at “serious” or “critical” risk of bias.

The NOS, which the Cass researchers used, has separate scoring scales for pre-post, cohort, and cross-sectional studies. Pre-post studies examine the effects of an intervention in a single cohort with no comparator group. Cohort studies follow a group of patients over a period of time but also lack adequate controls. Cross-sectional studies capture data at a single point in time, through methods such as surveys or medical-chart reviews.

The only high-quality study of puberty blockers included in the systematic review was a cross-sectional study from the Netherlands. A cross-sectional design is definitionally incapable of ascertaining causal relationships, so how could this study come out above other types of nonrandomized studies? The answer is that the NOS scale scores each type of study differently. A high-quality cross-sectional study means that it is high quality for cross-sectional design, not high quality for nonrandomized research in general.

Turban’s misperceptions about quality in medical research lead to similarly misguided policy conclusions. He claimed in the Dartmouth debate, for example, that moderate-quality evidence “is not particularly unusual in medicine,” adding, “I can’t think of another example in medicine where you have that quality of evidence, and you ban the care. The report also doesn’t say to ban care.”

Turban is correct that this area of medicine has been singled out for special treatment, but not in the way he thinks. Indeed, Hilary Cass, author of the Cass Review, claims that pediatric gender medicine has been “exceptionalised”—too many clinicians in this field have “abandoned normal clinical approaches to holistic assessment” and instead deferred to their patient’s self-diagnosis and desire for medical intervention. No other area of medicine has been allowed to proceed so quickly, with so little evidence, on such vulnerable patients, and with such little follow-up.

Advocates like Turban point out that many medical treatments and protocols in pediatrics are still used despite low-quality evidence. This fact, they claim, shows that gatekeepers are prejudicially motivated to restrict gender transition. An influential Yale report from 2022, for example, cited the recommendation against giving children aspirin for fevers due to risk of developing Reye’s syndrome—a progressive and potentially fatal neurological disease—despite there being only low-quality evidence linking aspirin to Reye’s.

A rule of thumb in EBM is that strong recommendations require strong evidence. In some cases, however, low-quality evidence can justify strong recommendations. Examples of such “discordant recommendations” are when the alternative to nontreatment is death, and when alternative interventions can achieve the same effects with less risk. The Yale team conveniently neglected to mention that kids can be given Tylenol, which isn’t linked to Reye’s, instead of aspirin.

When Turban says that moderate-quality evidence is “not particularly unusual” in medicine, he is thus misleading his audience on two counts. First, he falsely implies that the quality of evidence (rather than of studies) is moderate, and confuses NOS’s use of “moderate” with the use of this term in GRADE (where quality of evidence is at issue). Second, he suggests that puberty blockers fall under one of the exceptional scenarios in EBM where discordant recommendations are appropriate.

It’s noteworthy that this marks a shift in Turban’s public position, which has been that “the body of research indicates that these interventions result in favorable mental health outcomes.” In his expert witness reports, Turban has claimed that “Existing research shows gender-affirming medical treatments for adolescents with gender dysphoria are consistently linked to improved mental health.” Yet at Dartmouth, he appeared to make a different claim: the evidence is not strong, but it’s common practice in pediatrics to offer medical interventions based on uncertain evidence.

As for banning “care,” Turban is correct that the Cass Review does not recommend a blanket prohibition on puberty blockers. But if Cass’s recommendations were to be implemented in the U.S., most of the kids currently getting them would no longer be eligible, and those who would be eligible would be able to receive them only as part of research. Turban, like other gender clinicians, has conveniently but disingenuously latched on to age restriction laws (“bans”) as a way to avoid acknowledging this important implication.

Advocates of hormonal interventions frame the choice as one between only two alternatives: their own “affirmative” approach or total prohibition. They then use Europeans’ allowance for at least some instances of pubertal suppression as evidence that European countries have rejected the prohibitionist approach, and that, by implication, they agree with advocates’ “affirming” approach.

The only real disagreement between health-care authorities in places like England, Sweden, and Finland, and those in U.S. red states is whether these drugs should be allowed within research settings and administered in exceptional cases. England’s National Health Service has officially ended the routine use of puberty blockers for adolescents with gender dysphoria. Turban, by contrast, has seemed to agree that these drugs should be given out for free, on-demand, without parental consent.

At Dartmouth, Turban warned against “conflating very technical terms from the grading scale, like for medical evidence, with lay terminology saying it’s all low-quality evidence.” I agree. But Turban appears not to understand the technical terms. Perhaps someone should explain them to him in lay terminology.

Leor Sapir is a fellow at the Manhattan Institute.


A Peek Inside the Brains of ‘Super-Agers’

New research explores why some octogenarians have exceptional memories.


By Dana G. Smith

When it comes to aging, we tend to assume that cognition gets worse as we get older. Our thoughts may slow down or become confused, or we may start to forget things, like the name of our high school English teacher or what we meant to buy at the grocery store.

But that’s not the case for everyone.

For a little over a decade, scientists have been studying a subset of people they call “super-agers.” These individuals are age 80 and up, but they have the memory ability of a person 20 to 30 years younger.

Most research on aging and memory focuses on the other side of the equation — people who develop dementia in their later years. But, “if we’re constantly talking about what’s going wrong in aging, it’s not capturing the full spectrum of what’s happening in the older adult population,” said Emily Rogalski, a professor of neurology at the University of Chicago, who published one of the first studies on super-agers in 2012.

A paper published Monday in the Journal of Neuroscience helps shed light on what’s so special about the brains of super-agers. The biggest takeaway, in combination with a companion study that came out last year on the same group of individuals, is that their brains have less atrophy than their peers’ do.

The research was conducted on 119 octogenarians from Spain: 64 super-agers and 55 older adults with normal memory abilities for their age. The participants completed multiple tests assessing their memory, motor and verbal skills; underwent brain scans and blood draws; and answered questions about their lifestyle and behaviors.

The scientists found that the super-agers had more volume in areas of the brain important for memory, most notably the hippocampus and entorhinal cortex. They also had better preserved connectivity between regions in the front of the brain that are involved in cognition. Both the super-agers and the control group showed minimal signs of Alzheimer’s disease in their brains.

“By having two groups that have low levels of Alzheimer’s markers, but striking cognitive differences and striking differences in their brain, then we’re really speaking to a resistance to age-related decline,” said Dr. Bryan Strange, a professor of clinical neuroscience at the Polytechnic University of Madrid, who led the studies.

These findings are backed up by Dr. Rogalski’s research, initially conducted when she was at Northwestern University, which showed that super-agers’ brains looked more like 50- or 60-year-olds’ brains than those of their 80-year-old peers. When followed over several years, the super-agers’ brains atrophied at a slower rate than average.

No precise numbers exist on how many super-agers there are among us, but Dr. Rogalski said they’re “relatively rare,” noting that “far less than 10 percent” of the people she sees end up meeting the criteria.

But when you meet a super-ager, you know it, Dr. Strange said. “They are really quite energetic people, you can see. Motivated, on the ball, elderly individuals.”

Experts don’t know how someone becomes a super-ager, though there were a few differences in health and lifestyle behaviors between the two groups in the Spanish study. Most notably, the super-agers had slightly better physical health, both in terms of blood pressure and glucose metabolism, and they performed better on a test of mobility. The super-agers didn’t report doing more exercise at their current age than the typical older adults, but they were more active in middle age. They also reported better mental health.

But overall, Dr. Strange said, there were a lot of similarities between the super-agers and the regular agers. “There are a lot of things that are not particularly striking about them,” he said. And, he added, “we see some surprising omissions, things that you would expect to be associated with super-agers that weren’t really there.” For example, there were no differences between the groups in terms of their diets, the amount of sleep they got, their professional backgrounds or their alcohol and tobacco use.

The behaviors of some of the Chicago super-agers were similarly a surprise. Some exercised regularly, but some never had; some stuck to a Mediterranean diet, others subsisted off TV dinners; and a few of them still smoked cigarettes. However, one consistency among the group was that they tended to have strong social relationships, Dr. Rogalski said.

“In an ideal world, you’d find out that, like, all the super-agers, you know, ate six tomatoes every day and that was the key,” said Tessa Harrison, an assistant project scientist at the University of California, Berkeley, who collaborated with Dr. Rogalski on the first Chicago super-ager study.

Instead, Dr. Harrison continued, super-agers probably have “some sort of lucky predisposition or some resistance mechanism in the brain that’s on the molecular level that we don’t understand yet,” possibly related to their genes.

While there isn’t a recipe for becoming a super-ager, scientists do know that, in general, eating healthily, staying physically active, getting enough sleep and maintaining social connections are important for healthy brain aging.

Dana G. Smith is a Times reporter covering personal health, particularly aging and brain health.


Research Method


Scope of the Research – Writing Guide and Examples

Scope of the Research

Scope of research refers to the range of topics, areas, and subjects that a research project intends to cover. It defines the extent and limitations of the study, specifying what is included in and excluded from the research.

The scope of a research project depends on various factors, such as the research questions, objectives, methodology, and available resources. It is essential to define the scope of the research project clearly to avoid confusion and ensure that the study addresses the intended research questions.

How to Write Scope of the Research

Writing the scope of the research involves identifying the specific boundaries and limitations of the study. Here are some steps you can follow to write a clear and concise scope of the research:

  • Identify the research question: Start by identifying the specific question that you want to answer through your research. This will help you focus your research and define the scope more clearly.
  • Define the objectives: Once you have identified the research question, define the objectives of your study. What specific goals do you want to achieve through your research?
  • Determine the population and sample: Identify the population or group of people that you will be studying, as well as the sample size and selection criteria. This will help you narrow down the scope of your research and ensure that your findings are applicable to the intended audience.
  • Identify the variables: Determine the variables that will be measured or analyzed in your research. This could include demographic variables, independent variables, dependent variables, or any other relevant factors.
  • Define the timeframe: Determine the timeframe for your study, including the start and end date, as well as any specific time intervals that will be measured.
  • Determine the geographical scope: If your research is location-specific, define the geographical scope of your study. This could include specific regions, cities, or neighborhoods that you will be focusing on.
  • Outline the limitations: Finally, outline any limitations or constraints of your research, such as time, resources, or access to data. This will help readers understand the scope and applicability of your research findings.

Examples of the Scope of the Research

Some Examples of the Scope of the Research are as follows:

Title: “Investigating the impact of artificial intelligence on job automation in the IT industry”

Scope of Research:

This study aims to explore the impact of artificial intelligence on job automation in the IT industry. The research will involve a qualitative analysis of job postings, identifying tasks that can be automated using AI. The study will also assess the potential implications of job automation on the workforce, including job displacement, job creation, and changes in job requirements.

Title: “Developing a machine learning model for predicting cyberattacks on corporate networks”

This study will develop a machine learning model for predicting cyberattacks on corporate networks. The research will involve collecting and analyzing network traffic data, identifying patterns and trends that are indicative of cyberattacks. The study aims to build an accurate and reliable predictive model that can help organizations identify and prevent cyberattacks before they occur.

Title: “Assessing the usability of a mobile app for managing personal finances”

This study will assess the usability of a mobile app for managing personal finances. The research will involve conducting a usability test with a group of participants, evaluating the app’s ease of use, efficiency, and user satisfaction. The study aims to identify areas of the app that need improvement, and to provide recommendations for enhancing its usability and user experience.

Title: “Exploring the effects of mindfulness meditation on stress reduction among college students”

This study aims to investigate the impact of mindfulness meditation on reducing stress levels among college students. The research will involve a randomized controlled trial with two groups: a treatment group that receives mindfulness meditation training and a control group that receives no intervention. The study will examine changes in stress levels, as measured by self-report questionnaires, before and after the intervention.

Title: “Investigating the impact of social media on body image dissatisfaction among young adults”

This study will explore the relationship between social media use and body image dissatisfaction among young adults. The research will involve a cross-sectional survey of participants aged 18-25, assessing their social media use, body image perceptions, and self-esteem. The study aims to identify any correlations between social media use and body image dissatisfaction, and to determine if certain social media platforms or types of content are particularly harmful.

When to Write Scope of the Research

Here is a guide on when to write the scope of the research:

  • Before starting your research project, it’s important to clearly define the scope of your study. This will help you stay focused on your research question and avoid getting sidetracked by irrelevant information.
  • The scope of the research should be determined by the research question or problem statement. It should outline what you intend to investigate and what you will not be investigating.
  • The scope should also take into consideration any limitations of the study, such as time, resources, or access to data. This will help you realistically plan and execute your research.
  • Writing the scope of the research early in the research process can also help you refine your research question and identify any gaps in the existing literature that your study can address.
  • It’s important to revisit the scope of the research throughout the research process to ensure that you stay on track and make any necessary adjustments.
  • The scope of the research should be clearly communicated in the research proposal or study protocol to ensure that all stakeholders are aware of the research objectives and limitations.
  • The scope of the research should also be reflected in the research design, methods, and analysis plan. This will ensure that the research is conducted in a systematic and rigorous manner that is aligned with the research objectives.
  • The scope of the research should be written in a clear and concise manner, using language that is accessible to all stakeholders, including those who may not be familiar with the research topic or methodology.
  • When writing the scope of the research, it’s important to be transparent about any assumptions or biases that may influence the research findings. This will help ensure that the research is conducted in an ethical and responsible manner.
  • The scope of the research should be reviewed and approved by the research supervisor, committee members, or other relevant stakeholders. This will ensure that the research is feasible, relevant, and contributes to the field of study.
  • Finally, the scope of the research should be clearly stated in the research report or dissertation to provide context for the research findings and conclusions. This will help readers understand the significance of the research and its contribution to the field of study.

Purpose of Scope of the Research

The purposes of the scope of the research are as follows:

  • Defines the boundaries and extent of the study.
  • Determines the specific objectives and research questions to be addressed.
  • Provides direction and focus for the research.
  • Helps to identify the relevant theories, concepts, and variables to be studied.
  • Enables the researcher to select the appropriate research methodology and techniques.
  • Allows for the allocation of resources (time, money, personnel) to the research.
  • Establishes the criteria for the selection of the sample and data collection methods.
  • Facilitates the interpretation and generalization of the results.
  • Ensures the ethical considerations and constraints are addressed.
  • Provides a framework for the presentation and dissemination of the research findings.

Advantages of Scope of the Research

Here are some advantages of having a well-defined scope of research:

  • Provides clarity and focus: Defining the scope of research helps to provide clarity and focus to the study. This ensures that the research stays on track and does not deviate from its intended purpose.
  • Helps to manage resources: Knowing the scope of research allows researchers to allocate resources effectively. This includes managing time, budget, and personnel required to conduct the study.
  • Improves the quality of research: A well-defined scope of research helps to ensure that the study is designed to achieve specific objectives. This helps to improve the quality of the research by reducing the likelihood of errors or bias.
  • Facilitates communication: A clear scope of research enables researchers to communicate the goals and objectives of the study to stakeholders, such as funding agencies or participants. This facilitates understanding and enhances cooperation.
  • Enables replication: A well-defined scope of research makes it easier to replicate the study in the future. This allows other researchers to validate the findings and build upon them, leading to the advancement of knowledge in the field.
  • Increases the relevance of research: Defining the scope of research helps to ensure that the study is relevant to the problem or issue being investigated. This increases the likelihood that the findings will be useful and applicable to real-world situations.
  • Reduces the risk of scope creep: Scope creep occurs when the research expands beyond the original scope, leading to an increase in the time, cost, and resources required to complete the study. A clear definition of the scope of research helps to reduce the risk of scope creep by establishing boundaries and limitations.
  • Enhances the credibility of research: A well-defined scope of research helps to enhance the credibility of the study by ensuring that it is designed to achieve specific objectives and answer specific research questions. This makes it easier for others to assess the validity and reliability of the study.
  • Provides a framework for decision-making: A clear scope of research provides a framework for decision-making throughout the research process. This includes decisions related to data collection, analysis, and interpretation.

Scope of the Research vs. Scope of the Project

About the Author

Muhammad Hassan

Researcher, Academic Writer, Web developer



Fresh Air

Author Interviews


Plants can communicate and respond to touch. Does that mean they're intelligent?


Tonya Mosley


In the 1960s and ’70s, a series of questionable experiments claimed to prove that plants could behave like humans, that they had feelings, responded to music and could even take a polygraph test.

Though most of those claims have since been debunked, climate journalist Zoë Schlanger says a new wave of research suggests that plants are indeed "intelligent" in complex ways that challenge our understanding of agency and consciousness.

"Agency is this effect of having ... an active stake in the outcome of your life," Schlanger says. "And when I was looking at plants and speaking to botanists, it became very clear to me that plants have this."

In her new book, The Light Eaters: How the Unseen World of Plant Intelligence Offers a New Understanding of Life on Earth, Schlanger, a staff reporter at The Atlantic, writes about how plants use information from the environment, and from the past, to make "choices" for the future.


Schlanger notes that some tomato plants, when being eaten by caterpillars, fill their leaves with a chemical that makes them so unappetizing that the caterpillars start eating each other instead. Corn plants have been known to sample the saliva of predator caterpillars — and then use that information to emit a chemical to attract a parasitic wasp that will attack the caterpillar.


Schlanger acknowledges that our understanding of plants is still developing — as are the definitions of "intelligence" and "consciousness." "Science is there [for] observation and to experiment, but it can't answer questions about this ineffable, squishy concept of intelligence and consciousness," she says.

But, she adds, "part of me feels like it almost doesn't matter, because what we see plants doing — what we now understand they can do — simply brings them into this realm of alert, active processing beings, which is a huge step from how many of us were raised to view them, which is more like ornaments in our world or this decorative backdrop for our lives."

Interview highlights


On the concept of plant "intelligence"

Intelligence is this thing that's loaded with so much human meaning. It's too muddled up sometimes with academic notions of intelligence. ... Is this even something we want to layer on to plants? And that's something that I hear a lot of plant scientists talk about. They recognize more than anyone that plants are not little humans. They don't want their subjects to be reduced in a way to human tropes or human standards of either of those things.

On the debate over whether plants have nervous systems

I was able to go to a lab in Wisconsin where there [were] plants that had ... been engineered to glow, but only to glow when they've been touched. So I used tweezers to pinch a plant on its vein, ... the kind of mid-rib of a leaf. And I got to watch this glowing green signal emanate from the point where I pinch the plant out to the whole rest of the plant. Within two minutes, the whole plant had received a signal of my touch, of my "assault," so to speak, with these tweezers. And research like that is leading people within the plant sciences, but also people who work on neurobiology in people to question whether or not it's time to expand the notion of a nervous system.

On whether plants feel pain


We have nothing at the moment to suggest that plants feel pain, but do they sense being touched, or sense being eaten, and respond with a flurry of defensive chemicals that suggest that they really want to prevent whatever's going on from continuing? Absolutely. So this is where we get into tricky territory. Do we ascribe human concepts like pain ... to a plant, even though it has no brain? And we can't ask it if it feels pain. We have not found pain receptors in a plant. But then again, I mean, the devil's advocate view here is that we only found the mechanoreceptors for pain in humans, like, fairly recently. But we do know plants are receiving inputs all the time. They know when a caterpillar is chewing on them, and they will respond with aggressive defensiveness. They will do wild things to keep that caterpillar from destroying them further.

On how plants communicate with each other


The primary way plants communicate with each other is through a language, so to speak, of chemical gasses. ... And there's little pores on plants that are microscopic. And under the microscope, they look like little fish lips. ... And they open to release these gasses. And those gasses contain information. So when a plant is being eaten or knocked over by an animal or hit by wind too hard, it will release an alarm call that other plants in the area can pick up on. And this alarm call can travel pretty long distances, and the plants that receive it will prime their immune systems and their defense systems to be ready for this invasion, for this group of chewing animals before they even arrive. So it's a way of saving themselves, and it makes evolutionary sense. If you're a plant, you don't want to be standing out in a field alone, so to speak. It's not good for reproductive fitness. It's not good for attracting pollinators. It's often in the interest of plants to warn their neighbors of attacks like this.

On plant "memory"


There's one concept that I think is very beautiful, called the "memory of winter." And that's this thing where many plants, most of our fruit trees, for example, have to have the "memory," so to speak, of a certain number of days of cold in the winter in order to bloom in the spring. It's not enough that the warm weather comes. They have to get this profound cold period as well, which means to some extent they're counting. They're counting the elapsed days of cold and then the elapsed days of warmth to make sure they're also not necessarily emerging in a freak warm spell in February. This does sometimes happen, of course. We hear stories about farmers losing their crops to freak warm spells. But there is evidence to suggest there's parts of plants physiology that helps them record this information. But much like in people, we don't quite know the substrate of that memory. We can't quite locate where or how it's possibly being recorded.

On not anthropomorphizing plants

What's interesting is that scientists and botany journals will do somersaults to avoid using human language for plants. And I totally get why. But when you go meet them in their labs, they are willing to anthropomorphize the heck out of their study subjects. They'll say things like, "Oh, the plants hate when I do that." Or, "They really like this when I do this or they like this treatment." I once heard a scientist talk about, "We're going to go torture the plant again." So they're perfectly willing to do that in private. And the reason for that is not because they're holding some secret about how plants are actually just little humans. It's that they've already resolved that complexity in their mind. They trust themselves to not be reducing their subjects to simplistic human tropes. And that's going to be a task for all of us to somehow come to that place.

It's a real challenge for me. So much of what I was learning while doing research for this book was super intangible. You can't see a plant communicating, you can't watch a plant priming its immune system or manipulating an insect. A lot of these things are happening in invisible ways. ... Now when I go into a park, I feel totally surrounded by little aliens. I know that there is immense plant drama happening all over the place around me.

Sam Briger and Susan Nyakundi produced and edited this interview for broadcast. Bridget Bentz and Molly Seavy-Nesper adapted it for the web.

