
Research Recommendations – Examples and Writing Guide

Research Recommendations

Definition:

Research recommendations refer to suggestions or advice given to someone who is looking to conduct research on a specific topic or area. These recommendations may include suggestions for research methods, data collection techniques, sources of information, and other factors that can help to ensure that the research is conducted in a rigorous and effective manner. Research recommendations may be provided by experts in the field, such as professors, researchers, or consultants, and are intended to help guide the researcher towards the most appropriate and effective approach to their research project.

Parts of Research Recommendations

Research recommendations can vary depending on the specific project or area of research, but typically they will include some or all of the following parts:

  • Research question or objective: This is the overarching goal or purpose of the research project.
  • Research methods: This includes the specific techniques and strategies that will be used to collect and analyze data. The methods will depend on the research question and the type of data being collected.
  • Data collection: This refers to the process of gathering information or data that will be used to answer the research question. This can involve a range of different methods, including surveys, interviews, observations, or experiments.
  • Data analysis: This involves the process of examining and interpreting the data that has been collected. This can involve statistical analysis, qualitative analysis, or a combination of both.
  • Results and conclusions: This section summarizes the findings of the research and presents any conclusions or recommendations based on those findings.
  • Limitations and future research: This section discusses any limitations of the study and suggests areas for future research that could build on the findings of the current project.

How to Write Research Recommendations

Writing research recommendations involves providing specific suggestions or advice to a researcher on how to conduct their study. Here are some steps to consider when writing research recommendations:

  • Understand the research question: Before writing research recommendations, it is important to have a clear understanding of the research question and the objectives of the study. This will help to ensure that the recommendations are relevant and appropriate.
  • Consider the research methods: Consider the most appropriate research methods that could be used to collect and analyze data that will address the research question. Identify the strengths and weaknesses of the different methods and how they might apply to the specific research question.
  • Provide specific recommendations: Provide specific and actionable recommendations that the researcher can implement in their study. This can include recommendations related to sample size, data collection techniques, research instruments, data analysis methods, or other relevant factors.
  • Justify recommendations: Justify why each recommendation is being made and how it will help to address the research question or objective. It is important to provide a clear rationale for each recommendation to help the researcher understand why it is important.
  • Consider limitations and ethical considerations: Consider any limitations or potential ethical considerations that may arise in conducting the research. Provide recommendations for addressing these issues or mitigating their impact.
  • Summarize recommendations: Provide a summary of the recommendations at the end of the report or document, highlighting the most important points and emphasizing how the recommendations will contribute to the overall success of the research project.

Example of Research Recommendations

Here is a sample set of research recommendations for students:

  • Further investigate the effects of X on Y by conducting a larger-scale randomized controlled trial with a diverse population.
  • Explore the relationship between A and B by conducting qualitative interviews with individuals who have experience with both.
  • Investigate the long-term effects of intervention C by conducting a follow-up study with participants one year after completion.
  • Examine the effectiveness of intervention D in a real-world setting by conducting a field study in a naturalistic environment.
  • Compare and contrast the results of this study with those of previous research on the same topic to identify any discrepancies or inconsistencies in the findings.
  • Expand upon the limitations of this study by addressing potential confounding variables and conducting further analyses to control for them.
  • Investigate the relationship between E and F by conducting a meta-analysis of existing literature on the topic.
  • Explore the potential moderating effects of variable G on the relationship between H and I by conducting subgroup analyses.
  • Identify potential areas for future research based on the gaps in current literature and the findings of this study.
  • Conduct a replication study to validate the results of this study and further establish the generalizability of the findings.

Applications of Research Recommendations

Research recommendations are important as they provide guidance on how to improve or solve a problem. The applications of research recommendations are numerous and can be used in various fields. Some of the applications of research recommendations include:

  • Policy-making: Research recommendations can be used to develop policies that address specific issues. For example, recommendations from research on climate change can be used to develop policies that reduce carbon emissions and promote sustainability.
  • Program development: Research recommendations can guide the development of programs that address specific issues. For example, recommendations from research on education can be used to develop programs that improve student achievement.
  • Product development: Research recommendations can guide the development of products that meet specific needs. For example, recommendations from research on consumer behavior can be used to develop products that appeal to consumers.
  • Marketing strategies: Research recommendations can be used to develop effective marketing strategies. For example, recommendations from research on target audiences can be used to develop marketing strategies that effectively reach specific demographic groups.
  • Medical practice: Research recommendations can guide medical practitioners in providing the best possible care to patients. For example, recommendations from research on treatments for specific conditions can be used to improve patient outcomes.
  • Scientific research: Research recommendations can guide future research in a specific field. For example, recommendations from research on a specific disease can be used to guide future research on treatments and cures for that disease.

Purpose of Research Recommendations

The purpose of research recommendations is to provide guidance on how to improve or solve a problem based on the findings of research. Research recommendations are typically made at the end of a research study and are based on the conclusions drawn from the research data. The purpose of research recommendations is to provide actionable advice to individuals or organizations that can help them make informed decisions, develop effective strategies, or implement changes that address the issues identified in the research.

The main purpose of research recommendations is to facilitate the transfer of knowledge from researchers to practitioners, policymakers, or other stakeholders who can benefit from the research findings. Recommendations can help bridge the gap between research and practice by providing specific actions that can be taken based on the research results. By providing clear and actionable recommendations, researchers can help ensure that their findings are put into practice, leading to improvements in various fields, such as healthcare, education, business, and public policy.

Characteristics of Research Recommendations

Research recommendations are a key component of research studies and are intended to provide practical guidance on how to apply research findings to real-world problems. The following are some of the key characteristics of research recommendations:

  • Actionable: Research recommendations should be specific and actionable, providing clear guidance on what actions should be taken to address the problem identified in the research.
  • Evidence-based: Research recommendations should be based on the findings of the research study, supported by the data collected and analyzed.
  • Contextual: Research recommendations should be tailored to the specific context in which they will be implemented, taking into account the unique circumstances and constraints of the situation.
  • Feasible: Research recommendations should be realistic and feasible, taking into account the available resources, time constraints, and other factors that may impact their implementation.
  • Prioritized: Research recommendations should be prioritized based on their potential impact and feasibility, with the most important recommendations given the highest priority.
  • Communicated effectively: Research recommendations should be communicated clearly and effectively, using language that is understandable to the target audience.
  • Evaluated: Research recommendations should be evaluated to determine their effectiveness in addressing the problem identified in the research, and to identify opportunities for improvement.

Advantages of Research Recommendations

Research recommendations have several advantages, including:

  • Providing practical guidance: Research recommendations provide practical guidance on how to apply research findings to real-world problems, helping to bridge the gap between research and practice.
  • Improving decision-making: Research recommendations help decision-makers make informed decisions based on the findings of research, leading to better outcomes and improved performance.
  • Enhancing accountability: Research recommendations can help enhance accountability by providing clear guidance on what actions should be taken, and by providing a basis for evaluating progress and outcomes.
  • Informing policy development: Research recommendations can inform the development of policies that are evidence-based and tailored to the specific needs of a given situation.
  • Enhancing knowledge transfer: Research recommendations help facilitate the transfer of knowledge from researchers to practitioners, policymakers, or other stakeholders who can benefit from the research findings.
  • Encouraging further research: Research recommendations can help identify gaps in knowledge and areas for further research, encouraging continued exploration and discovery.
  • Promoting innovation: Research recommendations can help identify innovative solutions to complex problems, leading to new ideas and approaches.

Limitations of Research Recommendations

While research recommendations have several advantages, there are also some limitations to consider. These limitations include:

  • Context-specific: Research recommendations may be context-specific and may not be applicable in all situations. Recommendations developed in one context may not be suitable for another context, requiring adaptation or modification.
  • Implementation challenges: Implementation of research recommendations may face challenges, such as lack of resources, resistance to change, or lack of buy-in from stakeholders.
  • Limited scope: Research recommendations may be limited in scope, focusing only on a specific issue or aspect of a problem, while other important factors may be overlooked.
  • Uncertainty: Research recommendations may be uncertain, particularly when the research findings are inconclusive or when the recommendations are based on limited data.
  • Bias: Research recommendations may be influenced by researcher bias or conflicts of interest, leading to recommendations that are not in the best interests of stakeholders.
  • Timing: Research recommendations may be time-sensitive, requiring timely action to be effective. Delayed action may result in missed opportunities or reduced effectiveness.
  • Lack of evaluation: Research recommendations may not be evaluated to determine their effectiveness or impact, making it difficult to assess whether they are successful or not.


Enago Academy

Research Recommendations – Guiding policy-makers for evidence-based decision making


Research recommendations play a crucial role in guiding scholars and researchers toward fruitful avenues of exploration. In an era marked by rapid technological advancements and an ever-expanding knowledge base, refining the process of generating research recommendations becomes imperative.

But what is a research recommendation?

Research recommendations are suggestions or advice provided to researchers to guide their study on a specific topic. They are typically given by experts in the field. Research recommendations are action-oriented and provide specific guidance for decision-makers, unlike implications, which focus on the broader significance and consequences of the research findings. However, both are crucial components of a research study.

Difference Between Research Recommendations and Implications

Although research recommendations and implications are distinct components of a research study, they are closely related. The differences between them are as follows:

[Table: Difference between research recommendation and implication]

Types of Research Recommendations

Recommendations in research can take various forms.

These recommendations aim to assist researchers in navigating the vast landscape of academic knowledge.

Let us dive deeper to know about its key components and the steps to write an impactful research recommendation.

Key Components of Research Recommendations

The key components of research recommendations include defining the research question or objective, specifying research methods, outlining data collection and analysis processes, presenting results and conclusions, addressing limitations, and suggesting areas for future research. Here are some characteristics of research recommendations:

[Figure: Characteristics of research recommendation]

Research recommendations offer various advantages and play a crucial role in ensuring that research findings contribute to positive outcomes in various fields. However, they also have a few limitations, which highlights the importance of a well-crafted research recommendation in delivering the promised advantages.

[Figure: Advantages and limitations of a research recommendation]

The importance of research recommendations extends across various fields, influencing policy-making, program development, product development, marketing strategies, medical practice, and scientific research. Their purpose is to transfer knowledge from researchers to practitioners, policymakers, or stakeholders, facilitating informed decision-making and improving outcomes in different domains.

How to Write Research Recommendations?

Research recommendations can be generated through various means, including algorithmic approaches, expert opinions, or collaborative filtering techniques (a toy sketch of the collaborative filtering idea appears after these steps). Here is a step-wise guide to build your understanding of how research recommendations are developed.

1. Understand the Research Question:

Understand the research question and objectives before writing recommendations. Also, ensure that your recommendations are relevant and directly address the goals of the study.

2. Review Existing Literature:

Familiarize yourself with relevant existing literature to help you identify gaps and offer informed recommendations that contribute to the existing body of research.

3. Consider Research Methods:

Evaluate the appropriateness of different research methods in addressing the research question. Also, consider the nature of the data, the study design, and the specific objectives.

4. Identify Data Collection Techniques:

Gather data from diverse, authentic sources. Include information such as keywords, abstracts, authors, publication dates, and citation metrics to provide a rich foundation for analysis.

5. Propose Data Analysis Methods:

Suggest appropriate data analysis methods based on the type of data collected. Consider whether statistical analysis, qualitative analysis, or a mixed-methods approach is most suitable.

6. Consider Limitations and Ethical Considerations:

Acknowledge any limitations and potential ethical considerations of the study. Furthermore, address these limitations or mitigate ethical concerns to ensure responsible research.

7. Justify Recommendations:

Explain how your recommendation contributes to addressing the research question or objective. Provide a strong rationale to help researchers understand the importance of following your suggestions.

8. Summarize Recommendations:

Provide a concise summary at the end of the report to emphasize how following these recommendations will contribute to the overall success of the research project.

By following these steps, you can create research recommendations that are actionable and contribute meaningfully to the success of the research project.
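
As mentioned above, recommendations can also be generated algorithmically. Below is a minimal, illustrative sketch of the collaborative filtering idea in Python; the topics, the interaction matrix, and the recommend helper are all invented for this example, and a real system would draw on much richer signals (citations, co-authorship, reading history).

```python
# Toy sketch of collaborative filtering for research-topic suggestions.
# Illustrative only: the topic names and interaction matrix are invented.
import numpy as np

topics = ["meta-analysis", "RCT design", "qualitative interviews", "survey methods"]

# Rows = researchers, columns = topics; 1 = has engaged with the topic.
interactions = np.array([
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
])

def cosine_sim(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return (a @ b) / denom if denom else 0.0

def recommend(user_idx, k=2):
    """Score unseen topics by similarity-weighted engagement of other users."""
    target = interactions[user_idx]
    sims = np.array([cosine_sim(target, other) for other in interactions])
    sims[user_idx] = 0.0              # exclude the user themself
    scores = sims @ interactions      # weighted "vote" per topic
    scores[target > 0] = -np.inf      # hide topics the user already knows
    return [topics[i] for i in np.argsort(scores)[::-1][:k]]

print(recommend(0))  # topics favored by researchers with similar profiles
```

The design is deliberately simple: researchers with similar engagement histories "vote" for topics the target researcher has not yet explored.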


Example of a Research Recommendation

Here is an example of a research recommendation based on a hypothetical research to improve your understanding.

Research Recommendation: Enhancing Student Learning through Integrated Learning Platforms

Background:

The research study investigated the impact of an integrated learning platform on student learning outcomes in high school mathematics classes. The findings revealed a statistically significant improvement in student performance and engagement when compared to traditional teaching methods.

Recommendation:

In light of the research findings, it is recommended that educational institutions consider adopting and integrating the identified learning platform into their mathematics curriculum. The following specific recommendations are provided:

  • Implementation of the Integrated Learning Platform:

Schools are encouraged to adopt the integrated learning platform in mathematics classrooms, ensuring proper training for teachers on its effective utilization.

  • Professional Development for Educators:

Develop and implement professional development programs to train educators in the effective use of the integrated learning platform and to address any challenges teachers may face during the transition.

  • Monitoring and Evaluation:

Establish a monitoring and evaluation system to track the impact of the integrated learning platform on student performance over time.

  • Resource Allocation:

Allocate sufficient resources, both financial and technical, to support the widespread implementation of the integrated learning platform.

By implementing these recommendations, educational institutions can harness the potential of the integrated learning platform and enhance student learning experiences and academic achievements in mathematics.

This example covers the components of a research recommendation, providing specific actions based on the research findings, identifying the target audience, and outlining practical steps for implementation.

Using AI in Research Recommendation Writing

Enhancing research recommendations is an ongoing endeavor that requires the integration of cutting-edge technologies, collaborative efforts, and ethical considerations. By embracing data-driven approaches and leveraging advanced technologies, the research community can create more effective and personalized recommendation systems. However, AI-based approaches come with several limitations. Therefore, it is essential to approach the use of AI in research with a critical mindset and to complement its capabilities with human expertise and judgment.

Here are some limitations of integrating AI into writing research recommendations, and some ways to counter them.

1. Data Bias:

AI systems rely heavily on data for training. If the training data is biased or incomplete, the AI model may produce biased results or recommendations.

How to tackle: Regularly audit the model’s performance to identify any discrepancies, and adjust the training data and algorithms accordingly.

2. Lack of Understanding of Context:

AI models may struggle to understand the nuanced context of a particular research problem. They may misinterpret information, leading to inaccurate recommendations.

How to tackle: Use AI to characterize research articles and topics. Employ them to extract features like keywords, authorship patterns and content-based details.

3. Ethical Considerations:

AI models might stereotype certain concepts or generate recommendations that could have negative consequences for certain individuals or groups.

How to tackle: Incorporate user feedback mechanisms to reduce redundancies. Establish an ethics review process for AI models in research recommendation writing.

4. Lack of Creativity and Intuition:

AI may struggle with tasks that require a deep understanding of the underlying principles or the ability to think outside the box.

How to tackle: Employ hybrid approaches, using AI for data analysis and pattern identification to accelerate interpretation, while relying on human researchers for creative leaps.

5. Interpretability:

Many AI models, especially complex deep learning models, lack transparency on how the model arrived at a particular recommendation.

How to tackle: Implement inherently interpretable models such as decision trees or linear models. Provide a clear explanation of the model architecture, training process, and decision-making criteria.
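
To illustrate the first suggestion, here is a small sketch of an inherently interpretable model, assuming scikit-learn is installed. The dataset is a stand-in rather than a research-recommendation corpus; the point is that the learned rules can be printed and inspected.

```python
# Minimal sketch of an interpretable model, assuming scikit-learn.
# The iris dataset is a placeholder for illustration only.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text prints the learned decision rules, so every prediction
# can be traced to explicit thresholds rather than opaque weights.
print(export_text(model))
print(model.feature_importances_)  # relative contribution of each feature
```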

6. Dynamic Nature of Research:

Research fields are dynamic, and new information is constantly emerging. AI models may struggle to keep up with the rapidly changing landscape and may not be able to adapt to new developments.

How to tackle: Establish a feedback loop for continuous improvement. Regularly update the recommendation system based on user feedback and emerging research trends.

The integration of AI in research recommendation writing holds great promise for advancing knowledge and streamlining the research process. However, navigating these concerns is pivotal in ensuring the responsible deployment of these technologies. Researchers must understand the responsible use of AI in research and be aware of the ethical considerations.

Exploring research recommendations plays a critical role in shaping the trajectory of scientific inquiry. It serves as a compass, guiding researchers toward more robust methodologies, collaborative endeavors, and innovative approaches. Embracing these suggestions not only enhances the quality of individual studies but also contributes to the collective advancement of human understanding.

Frequently Asked Questions

What is the purpose of recommendations in research?

The purpose of recommendations in research is to provide practical and actionable suggestions based on the study's findings, guiding future actions, policies, or interventions in a specific field or context. Recommendations bridge the gap between research outcomes and their real-world application.

How do you make a research recommendation?

To make a research recommendation, analyze your findings, identify key insights, and propose specific, evidence-based actions. Explain the relevance of the recommendations to the study's objectives and provide practical steps for implementation.

How do you begin a recommendation?

Begin a recommendation by succinctly summarizing the key findings of the research. Clearly state the purpose of the recommendation and its intended impact. Use direct, actionable language to convey the suggested course of action.


How to formulate research recommendations

  • Polly Brown (pbrown{at}bmjgroup.com), publishing manager 1,
  • Klara Brunnhuber, clinical editor 1,
  • Kalipso Chalkidou, associate director, research and development 2,
  • Iain Chalmers, director 3,
  • Mike Clarke, director 4,
  • Mark Fenton, editor 3,
  • Carol Forbes, reviews manager 5,
  • Julie Glanville, associate director/information service manager 5,
  • Nicholas J Hicks, consultant in public health medicine 6,
  • Janet Moody, identification and prioritisation manager 6,
  • Sara Twaddle, director 7,
  • Hazim Timimi, systems developer 8,
  • Pamela Young, senior programme manager 6
  • 1 BMJ Publishing Group, London WC1H 9JR,
  • 2 National Institute for Health and Clinical Excellence, London WC1V 6NA,
  • 3 Database of Uncertainties about the Effects of Treatments, James Lind Alliance Secretariat, James Lind Initiative, Oxford OX2 7LG,
  • 4 UK Cochrane Centre, Oxford OX2 7LG,
  • 5 Centre for Reviews and Dissemination, University of York, York YO10 5DD,
  • 6 National Coordinating Centre for Health Technology Assessment, University of Southampton, Southampton SO16 7PX,
  • 7 Scottish Intercollegiate Guidelines Network, Edinburgh EH2 1EN,
  • 8 Update Software, Oxford OX2 7LG
  • Correspondence to: PBrown
  • Accepted 22 September 2006

“More research is needed” is a conclusion that fits most systematic reviews. But authors need to be more specific about what exactly is required

Long awaited reports of new research, systematic reviews, and clinical guidelines are too often a disappointing anticlimax for those wishing to use them to direct future research. After many months or years of effort and intellectual energy put into these projects, authors miss the opportunity to identify unanswered questions and outstanding gaps in the evidence. Most reports contain only a less than helpful, general research recommendation. This means that the potential value of these recommendations is lost.

Current recommendations

In 2005, representatives of organisations commissioning and summarising research, including the BMJ Publishing Group, the Centre for Reviews and Dissemination, the National Coordinating Centre for Health Technology Assessment, the National Institute for Health and Clinical Excellence, the Scottish Intercollegiate Guidelines Network, and the UK Cochrane Centre, met as members of the development group for the Database of Uncertainties about the Effects of Treatments (see bmj.com for details on all participating organisations). Our aim was to discuss the state of research recommendations within our organisations and to develop guidelines for improving the presentation of proposals for further research. All organisations had found weaknesses in the way researchers and authors of systematic reviews and clinical guidelines stated the need for further research. As part of the project, a member of the Centre for Reviews and Dissemination undertook a rapid literature search to identify information on research recommendation models, which found some individual methods but no group initiatives to attempt to standardise recommendations.

Suggested format for research recommendations on the effects of treatments

Core elements:

  • E Evidence (What is the current state of the evidence?)
  • P Population (What is the population of interest?)
  • I Intervention (What are the interventions of interest?)
  • C Comparison (What are the comparisons of interest?)
  • O Outcome (What are the outcomes of interest?)
  • T Time stamp (Date of recommendation)

Optional elements:

  • d Disease burden or relevance
  • t Time aspect of core elements of EPICOT
  • s Appropriate study type according to local need
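
As an illustration (ours, not part of the original proposal), an EPICOT+ recommendation can be thought of as a structured record with six required fields and three optional ones. The sketch below uses hypothetical values loosely based on the stroke example discussed below.

```python
# Illustrative only: an EPICOT+ recommendation as a structured record.
# Field values are hypothetical, loosely based on the stroke/TIA example.
from dataclasses import dataclass
from typing import Optional

@dataclass
class EpicotRecommendation:
    evidence: str      # E: current state of the evidence
    population: str    # P: population of interest
    intervention: str  # I: interventions of interest
    comparison: str    # C: comparisons of interest
    outcome: str       # O: outcomes of interest
    time_stamp: str    # T: date of the recommendation
    disease_burden: Optional[str] = None  # d: burden or relevance
    time_aspect: Optional[str] = None     # t: e.g. length of follow-up
    study_type: Optional[str] = None      # s: appropriate study type

rec = EpicotRecommendation(
    evidence="Existing trials recruited younger, hospital-based patients",
    population="Older patients with stroke or TIA managed in primary care",
    intervention="Intensive blood pressure lowering",
    comparison="Usual blood pressure management",
    outcome="Efficacy and adverse effects",
    time_stamp="2006-09-22",
    study_type="Randomised controlled trial",
)
```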

In January 2006, the National Coordinating Centre for Health Technology Assessment presented the findings of an initial comparative analysis of how different organisations currently structure their research recommendations. The National Institute for Health and Clinical Excellence and the National Coordinating Centre for Health Technology Assessment request authors to present recommendations in a four-component format for formulating well-built clinical questions around treatments: population, intervention, comparison, and outcomes (PICO). 1 In addition, the research recommendation is dated and authors are asked to provide the current state of the evidence to support the proposal.

Clinical Evidence, although not directly standardising its sections for research recommendations, presents gaps in the evidence using a slightly extended version of the PICO format: evidence, population, intervention, comparison, outcomes, and time (EPICOT). Clinical Evidence has used this inherent structure to feed research recommendations on interventions categorised as “unknown effectiveness” back to the National Coordinating Centre for Health Technology Assessment and for inclusion in the Database of Uncertainties about the Effects of Treatments (http://www.duets.nhs.uk/).

We decided to propose the EPICOT format as the basis for our statement on formulating research recommendations and tested this proposal through discussion and example. We agreed that this set of components provided enough context for formulating research recommendations without limiting researchers. In order for the proposed framework to be flexible and more widely applicable, the group discussed using several optional components when they seemed relevant or were proposed by one or more of the group members. The final outcome of discussions resulted in the proposed EPICOT+ format (box above).

A recent BMJ article highlighted how lack of research hinders the applicability of existing guidelines to patients in primary care who have had a stroke or transient ischaemic attack. 2 Most research in the area had been conducted in younger patients with a recent episode and in a hospital setting. The authors concluded that “further evidence should be collected on the efficacy and adverse effects of intensive blood pressure lowering in representative populations before we implement this guidance [from national and international guidelines] in primary care.” Table 1 outlines how their recommendations could be formulated using the EPICOT+ format. The decision on whether additional research is indeed clinically and ethically warranted will still lie with the organisation considering commissioning the research.

Research recommendation based on gap in the evidence identified by a cross sectional study of clinical guidelines for management of patients who have had a stroke


Table 2 shows the use of EPICOT+ for an unanswered question on the effectiveness of compliance therapy in people with schizophrenia, identified by the Database of Uncertainties about the Effects of Treatments.

Research recommendation based on a gap in the evidence on treatment of schizophrenia identified by the Database of Uncertainties about the Effects of Treatments

Discussions around optional elements

Although the group agreed that the PICO elements should be core requirements for a research recommendation, intense discussion centred on the inclusion of factors defining a more detailed context, such as current state of evidence (E), appropriate study type (s), disease burden and relevance (d), and timeliness (t).

Initially, group members interpreted E differently. Some viewed it as the supporting evidence for a research recommendation and others as the suggested study type for a research recommendation. After discussion, we agreed that E should be used to refer to the amount and quality of research supporting the recommendation. However, the issue remained contentious as some of us thought that if a systematic review was available, its reference would sufficiently identify the strength of the existing evidence. Others thought that adding evidence to the set of core elements was important as it provided a summary of the supporting evidence, particularly as the recommendation was likely to be abstracted and used separately from the review or research that led to its formulation. In contrast, the suggested study type (s) was left as an optional element.

A research recommendation will rarely have an absolute value in itself. Its relative priority will be influenced by the burden of ill health (d), which is itself dependent on factors such as local prevalence, disease severity, relevant risk factors, and the priorities of the organisation considering commissioning the research.

Similarly, the issue of time (t) could be seen to be relevant to each of the core elements in varying ways—for example, duration of treatment, length of follow-up. The group therefore agreed that time had a subsidiary role within each core item; however, T as the date of the recommendation served to define its shelf life and therefore retained individual importance.

Applicability and usability

The proposed statement on research recommendations applies to uncertainties of the effects of any form of health intervention or treatment and is intended for research in humans rather than basic scientific research. Further investigation is required to assess the applicability of the format for questions around diagnosis, signs and symptoms, prognosis, investigations, and patient preference.

When the proposed format is applied to a specific research recommendation, the emphasis placed on the relevant part(s) of the EPICOT+ format may vary by author, audience, and intended purpose. For example, a recommendation for research into treatments for transient ischaemic attack may or may not define valid outcome measures to assess quality of life or gather data on adverse effects. Among many other factors, its implementation will also depend on the strength of current findings—that is, strong evidence may support a tightly focused recommendation whereas a lack of evidence would result in a more general recommendation.

The controversy within the group, especially around the optional components, reflects the different perspectives of the participating organisations—whether they were involved in commissioning, undertaking, or summarising research. Further issues will arise during the implementation of the proposed format, and we welcome feedback and discussion.

Summary points

No common guidelines exist for the formulation of recommendations for research on the effects of treatments

Major organisations involved in commissioning or summarising research compared their approaches and agreed on core questions

The essential items can be summarised as EPICOT+ (evidence, population, intervention, comparison, outcome, and time)

Further details, such as disease burden and appropriate study type, should be considered as required

We thank Patricia Atkinson and Jeremy Wyatt.

Contributors and sources All authors contributed to manuscript preparation and approved the final draft. NJH is the guarantor.

Competing interests None declared.

References

  1. Richardson WS, Wilson MC, Nishikawa J, Hayward RSA. The well-built clinical question: a key to evidence-based decisions. ACP J Club 1995;123:A12.

The remaining reference entries are truncated in the source to author names only: McManus RJ, Leonardi-Bee J, PROGRESS Collaborative Group, Warburton E, Rothwell P, McIntosh AM, Lawrie SM, Stanfield AC, O'Donnell C, Donohoe G, Sharkey L, Jablensky A, Sartorius N, Ernberg G.



How to Write Discussions and Conclusions

The discussion section contains the results and outcomes of a study. An effective discussion informs readers what can be learned from your experiment and provides context for the results.

What makes an effective discussion?

When you’re ready to write your discussion, you’ve already introduced the purpose of your study and provided an in-depth description of the methodology. The discussion informs readers about the larger implications of your study based on the results. Highlighting these implications while not overstating the findings can be challenging, especially when you’re submitting to a journal that selects articles based on novelty or potential impact. Regardless of what journal you are submitting to, the discussion section always serves the same purpose: concluding what your study results actually mean.

A successful discussion section puts your findings in context. It should include:

  • the results of your research,
  • a discussion of related research, and
  • a comparison between your results and initial hypothesis.

Tip: Not all journals share the same naming conventions.

You can apply the advice in this article to the conclusion, results or discussion sections of your manuscript.

Our Early Career Researcher community tells us that the conclusion is often considered the most difficult aspect of a manuscript to write. To help, this guide provides questions to ask yourself, a basic structure on which to model your discussion, and examples from published manuscripts.


Questions to ask yourself:

  • Was my hypothesis correct?
  • If my hypothesis is partially correct or entirely different, what can be learned from the results? 
  • How do the conclusions reshape or add onto the existing knowledge in the field? What does previous research say about the topic? 
  • Why are the results important or relevant to your audience? Do they add further evidence to a scientific consensus or disprove prior studies? 
  • How can future research build on these observations? What are the key experiments that must be done? 
  • What is the “take-home” message you want your reader to leave with?

How to structure a discussion

Trying to fit a complete discussion into a single paragraph can add unnecessary stress to the writing process. If possible, you’ll want to give yourself two or three paragraphs to give the reader a comprehensive understanding of your study as a whole. Here’s one way to structure an effective discussion:

[Figure: one way to structure an effective discussion]

Writing Tips

While the above sections can help you brainstorm and structure your discussion, there are many common mistakes that writers revert to when having difficulties with their paper. Writing a discussion can be a delicate balance between summarizing your results, providing proper context for your research and avoiding introducing new information. Remember that your paper should be both confident and honest about the results! 

What to do

  • Read the journal’s guidelines on the discussion and conclusion sections. If possible, learn about the guidelines before writing the discussion to ensure you’re writing to meet their expectations. 
  • Begin with a clear statement of the principal findings. This will reinforce the main take-away for the reader and set up the rest of the discussion. 
  • Explain why the outcomes of your study are important to the reader. Discuss the implications of your findings realistically based on previous literature, highlighting both the strengths and limitations of the research. 
  • State whether the results prove or disprove your hypothesis. If your hypothesis was disproved, what might be the reasons? 
  • Introduce new or expanded ways to think about the research question. Indicate what next steps can be taken to further pursue any unresolved questions. 
  • If dealing with a contemporary or ongoing problem, such as climate change, discuss possible consequences if the problem is avoided. 
  • Be concise. Adding unnecessary detail can distract from the main findings. 

What not to do

Don’t

  • Rewrite your abstract. Statements with “we investigated” or “we studied” generally do not belong in the discussion. 
  • Include new arguments or evidence not previously discussed. Necessary information and evidence should be introduced in the main body of the paper. 
  • Apologize. Even if your research contains significant limitations, don’t undermine your authority by including statements that doubt your methodology or execution. 
  • Shy away from speaking on limitations or negative results. Including limitations and negative results will give readers a complete understanding of the presented research. Potential limitations include sources of potential bias, threats to internal or external validity, barriers to implementing an intervention and other issues inherent to the study design. 
  • Overstate the importance of your findings. Making grand statements about how a study will fully resolve large questions can lead readers to doubt the success of the research. 

Snippets of Effective Discussions:

Consumer-based actions to reduce plastic pollution in rivers: A multi-criteria decision analysis approach

Identifying reliable indicators of fitness in polar bears



Implications or Recommendations in Research: What's the Difference?


High-quality research articles that get many citations contain both implications and recommendations. Implications are the impact your research makes, whereas recommendations are specific actions that can then be taken based on your findings, such as for more research or for policymaking.

Updated on August 23, 2022


That seems clear enough, but the two are commonly confused.

This confusion is especially true if you come from a so-called high-context culture in which information is often implied based on the situation, as in many Asian cultures. High-context cultures are different from low-context cultures where information is more direct and explicit (as in North America and many European cultures).

Let's set these two straight in a low-context way; i.e., we'll be specific and direct! This is the best way to be in English academic writing because you're writing for the world.

Implications and recommendations in a research article

The standard format of STEM research articles is what's called IMRaD:

  • Introduction
  • Methods
  • Results
  • Discussion/conclusions

Some journals call for a separate conclusions section, while others have the conclusions as the last part of the discussion. You'll write these four (or five) sections in the same sequence, though, no matter the journal.

The discussion section is typically where you restate your results and how well they confirmed your hypotheses. Give readers the answers to the questions that brought them to your paper.

At this point, many researchers assume their paper is finished. After all, aren't the results the most important part? As you might have guessed, no, you're not quite done yet.

The discussion/conclusions section is where to say what happened and what should now happen

The discussion/conclusions section of every good scientific article should contain the implications and recommendations.

The implications, first of all, are the impact your results have on your specific field. A high-impact, highly cited article will also broaden the scope here and provide implications to other fields. This is what makes research cross-disciplinary.

Recommendations, however, are suggestions to improve your field based on your results.

These two aspects help the reader understand your broader content: How and why your work is important to the world. They also tell the reader what can be changed in the future based on your results.

These aspects are what editors are looking for when selecting papers for peer review.


Implications and recommendations are, thus, written at the end of the discussion section, and before the concluding paragraph. They help to “wrap up” your paper. Once your reader understands what you found, the next logical step is what those results mean and what should come next.

Then they can take the baton, in the form of your work, and run with it. That gets you cited and extends your impact!

The order of implications and recommendations also matters. Both are written after you've summarized your main findings in the discussion section. Then, those results are interpreted based on ongoing work in the field. After this, the implications are stated, followed by the recommendations.

Writing an academic research paper is a bit like running a race. Finish strong, with your most important conclusion (recommendation) at the end. Leave readers with an understanding of your work's importance. Avoid generic, obvious phrases like "more research is needed to fully address this issue." Be specific.

The main differences between implications and recommendations (table)

[Table: the differences between implications and recommendations]

Now let's dig a bit deeper into actually how to write these parts.

What are implications?

Research implications tell us how and why your results are important for the field at large. They help answer the question of “what does it mean?” Implications tell us how your work contributes to your field and what it adds to it. They're used when you want to tell your peers why your research is important for ongoing theory, practice, policymaking, and for future research.

Crucially, your implications must be evidence-based. This means they must be derived from the results in the paper.

Implications are written after you've summarized your main findings in the discussion section. They come before the recommendations and before the concluding paragraph. There is no specific section dedicated to implications. They must be integrated into your discussion so that the reader understands why the results are meaningful and what they add to the field.

A good strategy is to separate your implications into types. Implications can be social, political, technological, related to policies, or others, depending on your topic. The most frequently used types are theoretical and practical. Theoretical implications relate to how your findings connect to other theories or ideas in your field, while practical implications are related to what we can do with the results.

Key features of implications

  • State the impact your research makes
  • Helps us understand why your results are important
  • Must be evidence-based
  • Written in the discussion, before recommendations
  • Can be theoretical, practical, or other (social, political, etc.)

Examples of implications

Let's take a look at some examples of research results below with their implications.

The result: one study found that learning items over time improves memory more than cramming a bunch of information in at once.

The implications: This result suggests memory is better when studying is spread out over time, which could be due to memory consolidation processes.

The result: an intervention study found that mindfulness helps improve mental health if you have anxiety.

The implications: This result has implications for the role of executive functions in anxiety.

The result: a study found that musical learning helps language learning in children.

The implications: these findings suggest that language and music may work together to aid development.

What are recommendations?

As noted above, explaining how your results contribute to the real world is an important part of a successful article.

Likewise, stating how your findings can be used to improve something in future research is equally important. This brings us to the recommendations.

Research recommendations are suggestions and solutions you give for certain situations based on your results. Once the reader understands what your results mean with the implications, the next question they need to know is "what's next?"

Recommendations are calls to action on ways certain things in the field can be improved in the future based on your results. Recommendations are used when you want to convey that something different should be done based on what your analyses revealed.

Similar to implications, recommendations are also evidence-based. This means that your recommendations to the field must be drawn directly from your results.

The goal of the recommendations is to make clear, specific, and realistic suggestions to future researchers before they conduct a similar experiment. No matter what area your research is in, there will always be further research to do. Try to think about what would be helpful for other researchers to know before starting their work.

Recommendations are also written in the discussion section. They come after the implications and before the concluding paragraphs. Similar to the implications, there is usually no specific section dedicated to the recommendations. However, depending on how many solutions you want to suggest to the field, they may be written as a subsection.

Key features of recommendations

  • Statements about what can be done differently in the field based on your findings
  • Must be realistic and specific
  • Written in the discussion, after implications and before conclusions
  • Related to both your field and, preferably, a wider context to the research

Examples of recommendations

Here are some research results and their recommendations.

A meta-analysis found that actively recalling material from your memory is better than simply re-reading it.

  • The recommendation: Based on these findings, teachers and other educators should encourage students to practice active recall strategies.

A medical intervention found that daily exercise helps prevent cardiovascular disease.

  • The recommendation: Based on these results, physicians are recommended to encourage patients to exercise and walk regularly. Also recommended is to encourage more walking through public health offices in communities.

A study found that many research articles do not contain the sample sizes needed to statistically confirm their findings.

  • The recommendation: To improve the current state of the field, researchers should consider doing a power analysis based on their experiment's design (see the sketch below).
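
As a hedged illustration of that last recommendation, the snippet below shows an a priori power analysis, assuming the statsmodels package is installed; the effect size, alpha, and power values are conventional placeholders, not prescriptions for any particular study.

```python
# Sketch of an a priori power analysis, assuming statsmodels is available.
# The parameter values are conventional placeholders, not prescriptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,  # expected Cohen's d
    alpha=0.05,       # significance level
    power=0.8,        # desired probability of detecting the effect
)
print(f"Required sample size per group: {n_per_group:.0f}")  # ~64
```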

What else is important about implications and recommendations?

When writing recommendations and implications, be careful not to overstate the impact of your results. It can be tempting for researchers to inflate the importance of their findings and make grandiose statements about what their work means.

Remember that implications and recommendations must come directly from your results. Therefore, they must be straightforward, realistic, and plausible.

Another good thing to remember is to make sure the implications and recommendations are stated clearly and separately. Do not attach them to the endings of other paragraphs just to add them in. Use similar example phrases as those listed in the table when starting your sentences to clearly indicate when it's an implication and when it's a recommendation.

When your peers, or brand-new readers, read your paper, they shouldn't have to hunt through your discussion to find the implications and recommendations. They should be clear, visible, and understandable on their own.

That'll get you cited more, and you'll make a greater contribution to your area of science while extending the life and impact of your work.

The AJE Team

How to write recommendations in a research paper

Many students put in a lot of effort and write a good report; however, they are not able to give proper recommendations. Recommendations should be included in your research paper. As a researcher, you display a deep understanding of the topic of research, and therefore you should be able to give recommendations. Here are a few tips that will help you give appropriate recommendations.

Recommendations in the research paper should follow from the objectives of the research. Therefore, at least one of the objectives of your paper should be to provide recommendations to the parties associated with your research or the parties that will benefit from it. For example, to encourage higher employee engagement, the HR department should make strategies that invest in the well-being of employees. Additionally, the HR department should also collect regular feedback through online surveys.

Recommendations in the research paper should come from your review and analysis. For example: it was observed that the coaches interviewed had been working with the club for only the past 2-3 years. This shows that the attrition rate of coaches is high, so clubs should work on reducing coach turnover.

Recommendations in the research paper should also come from the data you have analysed. For example, the research found that people over 65 years of age are at greater risk of social isolation. Therefore, it is recommended that policies designed to combat social isolation should target this specific group.

Recommendations in the research paper should also come from observation. For example: it is observed that Lenovo's income is stable while its gross revenue has taken a negative turn. Therefore, the company should analyse its marketing and branding strategy.

Recommendations in the research paper should be written in order of priority. The most important recommendations for decision-makers should come first. However, if the recommendations are of equal importance, they should follow the sequence in which the topics are approached in the research.

If the recommendations in a research paper fall into different categories, you should categorize them. For example, if you have separate recommendations for policymakers, educators, and administrators, group them accordingly.

Recommendations in the research paper should come purely from your research. For example, suppose you have written research on the impact of HR strategies on motivation, but nowhere have you discussed reward and recognition. Then you should not recommend reward and recognition measures to boost employee motivation.

The use of bullet points offers better clarity than long paragraphs. For example, the paragraph "It is recommended that Britannia Biscuit should launch and promote sugar-free options apart from the existing product range. Promotion efforts should be directed at creating a fresh and healthy image. A campaign that conveys a sense of health and vitality to the consumer while enjoying a biscuit is recommended" can be written as:

  • The company should launch and promote sugar-free options
  • The company should work towards creating a fresh and healthy image
  • The company should run a campaign to convey its healthy image

Including an action plan along with a recommendation adds more weight to it. Recommendations should be clear and concise and written using actionable words. They should display a solution-oriented approach and, in some cases, highlight the scope for further research.


RecSOI: recommending research directions using statements of ignorance

Adrien Bibal, Nourah M. Salem, Rémi Cardon, Elizabeth K. White, Daniel E. Acuna, Robin Burke & Lawrence E. Hunter

Journal of Biomedical Semantics, volume 15, Article number: 2 (2024). Open access, published 22 April 2024.


The more science advances, the more questions are asked. This compounding growth can make it difficult to keep up with current research directions. Furthermore, this difficulty is exacerbated for junior researchers who enter fields with already large bases of potentially fruitful research avenues. In this paper, we propose a novel task and a recommender system for research directions, RecSOI, that draws from statements of ignorance (SOIs) found in the research literature. By building researchers’ profiles based on textual elements, RecSOI generates personalized recommendations of potential research directions tailored to their interests. In addition, RecSOI provides context for the recommended SOIs, so that users can quickly evaluate how relevant the research direction is for them. In this paper, we provide an overview of RecSOI’s functioning, implementation, and evaluation, demonstrating its effectiveness in guiding researchers through the vast landscape of potential research directions.

Finding new research topics is a task that researchers must handle very often, especially when starting a PhD. However, navigating the increasingly vast expanse of scientific knowledge, which sees a doubling of publication output every 17.3 years [1], is an arduous task for even the most experienced academics. Amid the many papers published each year and the surge of scientists joining the workforce, pinpointing the most suitable research direction becomes increasingly challenging. Some argue that this phenomenon could be one of the reasons behind the seeming slowdown of novel scientific progress [2, 3, 4]. This observation underlines the importance of managing the vast and rapidly increasing volume of existing knowledge and being able to discern gaps and opportunities for innovation. It stands to reason that researchers in science, and especially newcomers, would therefore benefit from a recommender system that provides them with research directions that align with their profile or the profiles of their collaborators or supervisor. While this paper focuses on helping new researchers find research directions that are relevant to them, many other use cases exist for our recommender system (see, e.g., Boguslav et al. [5] for some ideas). For instance, another use case could be to help principal investigators (PIs) navigate the literature to find the crucial state-of-the-art problems that match the expertise of their lab. Not only would this help PIs target suitable grant funding, but it would also help society, as difficult problems would be matched to researchers with the corresponding skills.

Such a recommender system is only possible if new research directions can be extracted from papers in the literature. To accomplish that, Boguslav et al. [5, 6] recently provided ways to identify sentences in papers stating a lack of knowledge, or ignorance, that can then be used to discover possible research directions. Therefore, starting from the premise that identifying such statements of ignorance (SOIs) is possible, we propose a novel task and a new system, RecSOI (Recommender of research directions using Statements Of Ignorance), to recommend to researchers, based on their profile, SOIs that they would be interested in investigating. Furthermore, RecSOI's pipeline provides a module for extracting the SOI's context from the paper. With this background information, researchers may be able to get the gist of most recommended research directions without needing to read the papers that mention them. A user evaluation is proposed in this paper to assess the importance of extracting context. The overall RecSOI pipeline can be seen in Fig. 1. Our main contributions are the following:

  • A description of a way to recommend research directions based on statements of ignorance in papers;
  • An estimation of the difficulty of the task;
  • A system, called RecSOI, for recommending research directions to researchers;
  • A user evaluation of the context that can be provided alongside recommended research directions;
  • A detailed discussion about the task, including potential fairness issues.

Figure 1. RecSOI pipeline, from an author name provided as input to a list of recommended directions and their context.

In order to introduce our novel task and RecSOI, we begin by covering work related to ours in the Related work section. Then, we provide some background about statements of ignorance in the Statements of ignorance section. We introduce the problem of identifying relevant research directions from these statements in the Research directions using SOIs section. RecSOI and its evaluation are then presented in the Methods section, and the results of our evaluation in the Results section. An analysis of the extraction of context to better understand the recommendations is provided in the Extracting ignorance context section. We close the paper with a detailed discussion in the Discussion section and our conclusion in the Conclusion section.

Related work

Our work is closely related to Boguslav et al.’s [ 5 , 6 ], which detects SOIs. However, we extend that work by recommending new research directions based on these SOIs to researchers.

There are many components of scientific discourse that can help us navigate the current state of knowledge. One such type is claims, which are related to finding the current knowledge, or answers, in the literature. Achakulvisut et al. [7] propose to extract scientific claims from the literature. Another type of discourse is arguments, which are logical and evidence-based processes that seek to establish or support a particular scientific claim or hypothesis. The work of Stab et al. [8] aims to develop argument-mining methods in the context of scientific papers and persuasive essays and draws conclusions about these tasks. Another type, and the focus of our paper, concerns discourses about the known unknowns, which Boguslav et al. [5] call statements of ignorance (SOIs). In contrast to claims and arguments, these SOIs have been much less studied.

Close to our work, Lahav et al. [ 9 ] propose a search engine for research directions given a certain topic (e.g., COVID-19). Like Boguslav et al. [ 5 ], who identify sentences with SOIs, Lahav et al. identify sentences that contain mentions of challenges or research directions and then index these sentences based on the entities contained in them. One of the main differences between these two works is that Boguslav et al. focus on analyzing, describing and categorizing the SOIs, while Lahav et al. focus on providing a search engine on top of the detected challenges/directions. Like Lahav et al., we focus on recommending research directions, but our reliance on researcher profiles, rather than keywords, allows us to tailor searches to the interests and expertise of the researcher, so that, for example, instead of finding new research directions related to “COVID-19”, we can provide directions that are more specific to the interests of the researcher (e.g., “the impact of COVID-19 on the heart”).

The field of recommender systems is rich with studies aiming to select research papers from various perspectives (for more information, see Bai et al. [ 10 ]), and many papers in the literature tackle this problem, each through a different lens (see, e.g., [ 11 , 12 , 13 , 14 , 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 ]). Our work is unique in recommending research directions mentioned in these research papers. Although one can argue that by recommending research directions, we are also implicitly recommending papers from the literature (i.e., the papers that contain the directions), we also justify why each paper is recommended based on the potentially relevant research directions that it mentions. This innovation takes us a step beyond traditional paper recommendation to create a richer, more helpful guidance system for researchers.

Statements of ignorance

Boguslav et al. recently coined the term "statements of ignorance" (SOIs) [5], inspired by the work of Firestein [23], and defined it as "statements about knowledge that does not exist yet" [5]. In their work, Boguslav et al. thoroughly studied the concept by identifying the different categories of SOIs in the literature. The authors also analyzed the different lexical cues that are often present in each category of SOIs.

In this work, we are interested in the SOIs that can indicate a possible new research direction for a researcher. Indeed, if a paper states that something is still unknown and deserves more investigation, then this lead can probably be used to start a new research project. This, therefore, means that we are interested in the subset of the SOI categories that indicate possible new research directions. All of the categories highlighted by Boguslav et al. [ 5 ] (see Table  1 ) pertain to a lack of knowledge. Some categories are related to how the lack of knowledge is expressed (e.g., “explicit questions” and “future work”). Other categories relate to the intent of the statements (e.g., “question answered by this work” serves the purpose of motivating the paper stating it). The SOI categories that are relevant for our recommendation of research directions are “full unknown”, “explicit question”, “problem or complication”, “future work” and “future prediction”.

Research directions using SOIs

As SOIs state a certain lack of knowledge, one can investigate this lack of knowledge to pursue new research directions. For instance, a sentence that mentions that “the relation between X and Y is unexpected and requires further investigations” indicates that a new research direction would be to investigate this relation between X and Y more deeply.

However, one issue with SOIs, which Boguslav et al. [5] found in their study, is that many sentences in papers can be considered SOIs. In fact, as we have also discovered in our research, approximately half of the sentences in papers can be perceived to contain some form of ignorance. As a consequence, parsing all papers in the literature (or in a certain field) and extracting the SOIs would leave researchers with hundreds of thousands, if not millions, of SOIs to explore, an impractical number for finding new research directions.

This motivates the need to build a recommender system on top of these SOIs in order to rank them with respect to the researcher's interests and expertise. This entails (1) building a researcher profile that can be used for recommendation, (2) vectorizing the SOIs in the database to make them candidates for the recommender, and (3) linking researcher profiles to research directions that are relevant for them. Please note that for (3), we assume that research directions are more interesting for a researcher if they relate to research subjects close to the researcher's own work. One can argue that some researchers may be interested in research directions that deviate from their own work; we leave the recommendation of that kind of research direction as future work. In the next section, we show how RecSOI, our proposed method, can recommend new research directions to researchers, leveraging a database of SOIs.

Methods

This section introduces RecSOI in two steps: building the profile of the researcher and then recommending research directions based on that profile.

Researcher profile embedding

The first step to recommending research directions is summarizing researchers’ work in a certain vector, or embedding, space. Two strategies can be used to summarize researcher profiles. First, to achieve good recommendation performance, it can be important to consider specific combinations of concepts that are often invoked by the researcher, rather than full texts. This is not effective for all researchers, as it ignores the big picture. Second, having a comprehensive view of what and how the researcher wrote can also be important for knowing what to recommend. However, this strategy is weaker when important concepts are buried in many irrelevant texts. Although these two strategies tend to work for different subsets of researchers, neither one provides a “one size fits all” solution.

In our recommender system of statements of ignorance (RecSOI), we propose in this section to combine the best of the two worlds to embed researcher profiles. For a particular abstract a (without the title of the article) from researcher r, a sentence-BERT model [24] (more precisely, "sentence-transformers/all-MiniLM-L6-v2") is first run on each sentence \(s_i\) of this abstract to obtain the corresponding embedding \(e_i\).

Different versions of BERT (e.g., BioBERT [25]) were tested in our preliminary experiments. sentence-BERT was chosen as the embedding model for four reasons. First, we observed during our preliminary experiments that the recommendation results were not better when using more specific versions of BERT. Second, due to the sensitivity of our recommender to overfitting (see the Discussion section), we decided to opt for the most generic version of BERT. Third, by not using very specific versions of BERT like BioBERT and BlueBERT [26], we also want to show that the task can be extended to fields other than the biomedical one. Finally, as our most important elements (SOIs) are expressed as sentences, we decided to work with a version of BERT that is fine-tuned to embed sentences.

For the next step, a logistic regression model (LR) is run on the same sentences to get the probability that each sentence was written by this author. In order to do so, the LR is trained on a dataset of abstracts in a binary classification setup: 1 if the first author of the abstract is r and 0 otherwise. Then, in a similar fashion to the Rocchio algorithm [27], the final embedding of the abstract a is the average of the embeddings of the sentences in a, positively or negatively weighted by the LR predictions. The LR predictions are therefore used to estimate the relevance of the sentences in the average. The objective is to obtain a representation of the abstract in the embedding space that is as close as possible to the sentences that are representative of the author. More formally, the weight for each sentence \(s_i\) is given by

\(\text{weight}(s_i) = 2\,\text{LR}(s_i) - 1,\)

with \(\text{LR}(s_i)\) being the probability given by the LR prediction for \(s_i\) when a TF-IDF vectorization of \(s_i\) is considered. (Note that any affine map of the probability centered at 0.5 yields the same normalized embedding below.) The use of a TF-IDF vectorization of the researcher's papers allows us to put an emphasis on the concepts that are specifically used by the researcher. The abstract embedding \(\text{embedding}(a)\) is then given by

\(\text{embedding}(a) = \frac{\sum_i \text{weight}(s_i)\, e_i}{\sum_i |\text{weight}(s_i)|},\)

where \(e_i\) is the sentence-BERT embedding of \(s_i\) and \(|\text{weight}(s_i)|\) is the absolute value of \(\text{weight}(s_i)\).

The final profile of researcher r is then built by keeping a list of each \(\text {embedding}(a_j)\) for all available abstracts \(a_j\) of researcher r . In our case, as we consider the use case in which junior researchers look for research directions, the number of abstracts for each researcher is between 1 and 5.
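To make this step concrete, here is a minimal sketch of the profile-embedding computation in Python, assuming the sentence-transformers and scikit-learn packages. The helper names (embed_abstract, build_profile), the pre-fitted lr and tfidf objects, and the mapping of LR probabilities to signed weights are illustrative reconstructions, not the authors' released code.

```python
# Illustrative sketch of a RecSOI-style researcher profile embedding.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

def embed_abstract(sentences, lr, tfidf):
    """Weighted average of sentence embeddings for one abstract.

    lr: logistic regression trained to predict whether a sentence was
        written by the researcher (on TF-IDF vectors of sentences).
    tfidf: the fitted TF-IDF vectorizer used to train lr.
    """
    embeddings = model.encode(sentences)                      # (n_sentences, dim)
    probs = lr.predict_proba(tfidf.transform(sentences))[:, 1]
    weights = 2 * probs - 1                                   # assumed mapping to [-1, 1]
    weighted = (weights[:, None] * embeddings).sum(axis=0)
    return weighted / np.abs(weights).sum()

def build_profile(abstracts, lr, tfidf):
    """A researcher profile is the list of their abstract embeddings."""
    return [embed_abstract(sentences, lr, tfidf) for sentences in abstracts]
```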

As our profile embedding is designed to work on previous abstracts, brand-new researchers (i.e., with no previous papers) may be unable to use our system directly. Three solutions can help bootstrap the system in such a case. First, if the new researcher has at least one paper but is not first author on any of them, these papers can be used if they are close enough to the general research direction the new researcher will take in their main research. Second, one can use the profile of another researcher with similar research interests (e.g., their supervisor or a PhD student working on a similar question). Third, one can use methods relying on keywords, such as that of Lahav et al. [9], until at least one abstract is available to build a profile.

Recommending research directions

During our preliminary experiments, we found that using metric learning methods to learn the best distance between user profiles and SOIs did not perform well. Based on other results discussed later in this paper, our assumption is that such a metric learning model tends to overfit on our task. On the other hand, classic metrics like the Euclidean distance and the cosine similarity perform quite well. As overcomplicating the solution tends to lower performance (because of the overfitting effect), our best solution was to simply compute the Euclidean distance between the author profile and each SOI candidate. The candidates that are recommended are then the ones for which the distance to at least one abstract in the author's profile is the smallest.

Note that one important advantage of RecSOI is that, given the definition of the profile embedding in the Researcher profile embedding section and the use of a Euclidean distance for matching profiles to research directions, RecSOI has no numerical hyperparameters to tune. The only components of RecSOI that can be investigated and improved in future work are (1) the model used to weight the abstract's sentences in the user profiles, and (2) the distance measure between the user profiles and the research directions. The choices made in this paper for these components are the ones that provided the best results during our preliminary experiments. Another interesting feature of RecSOI's recommendations is that they are deterministic: for a given user profile and a given database of SOIs, the recommendations will always be the same.
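As a sketch of this recommendation step, reusing the hypothetical build_profile above: SOIs are embedded with the same sentence-BERT model, and each SOI is scored by its smallest Euclidean distance to any abstract embedding in the profile. Since the ranking is a plain argsort over distances, the determinism noted above falls out directly.

```python
import numpy as np

def recommend(profile, soi_embeddings, soi_texts, k=5):
    """Return the k SOIs closest (Euclidean) to any abstract embedding in the profile."""
    closest = np.full(len(soi_texts), np.inf)
    for abstract_emb in profile:
        # Distance from this abstract embedding to every SOI embedding.
        d = np.linalg.norm(soi_embeddings - abstract_emb, axis=1)
        closest = np.minimum(closest, d)
    top = np.argsort(closest)[:k]        # deterministic ranking
    return [(soi_texts[i], float(closest[i])) for i in top]
```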

In order to evaluate RecSOI, we propose a quantitative experiment followed by a qualitative analysis of the errors to better understand the results. In the quantitative evaluation, three heuristics are used to assess the quality of the recommendations. In the qualitative evaluation, particular SOIs are studied to better understand the difficulty of the problem.

Experimental setup

In order to explain our experimental setup, three elements need to be presented. First, we base our evaluation on a uniquely annotated dataset from the biomedical literature, but we needed to expand it further. The dataset and the process used to augment it are described in the Dataset  section. Then, as it is not realistic to gather experts to evaluate 500 recommendations from a very specific field of science, three heuristics are proposed in the Evaluation heuristics  section to assess the quality of the recommendation. Finally, we present in the Baseline methods  section the baseline methods we use to compare to RecSOI.

Dataset

Boguslav et al. developed high-performing classifiers for determining whether a sentence is a SOI [5]. The testing F1-score that they report is 0.85 when the positive class contains SOIs of all categories and the negative class contains the other, regular sentences [5]. These classifiers came alongside a dataset of papers on prenatal nutrition. This dataset is the only one in the literature that contains hand-crafted annotations of SOIs and their categories. Indeed, its main feature is that it went through a thorough annotation campaign with experts in the domain of prenatal nutrition. During the annotation campaign, the sentences containing a certain lack of knowledge were annotated alongside their corresponding category of ignorance.

For our study, we consider Boguslav et al.'s dataset of SOIs as potential research directions because of its unique expert annotations and well-performing classifiers [5, 6]. However, for three reasons, we needed to extend the dataset to make the evaluation of our recommendations possible: (1) there were only 60 papers in the dataset, (2) each first author in the dataset has only one paper as first author, and (3) the dataset provides no previous papers, abstracts or other information that could be used to build a profile for each author and make recommendations based on it.

In order to augment Boguslav et al.'s dataset, we proceed in three steps. In the first step, we gather the PubMed IDs (PMIDs) of the 10,000 papers that are closest to the "prenatal nutrition" subject using the PubMed API called Entrez [28]. The query was performed using prenatal nutrition as a free-text keyword (without quotation marks) in order to take into account the multiple combinations of MeSH terms referring to the subject. Among these 10,000 PMIDs, 2,818 openly accessible papers could be fetched using the BioC API [29]. The focus on "prenatal nutrition" in this augmentation procedure is motivated by the second step, where Boguslav et al.'s classifiers, trained on prenatal nutrition papers, are used.
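This first augmentation step can be reproduced with, for example, Biopython's Entrez wrapper; the email address is a placeholder, and the BioC URL in the comment is indicative of the API used rather than the authors' exact fetching script.

```python
from Bio import Entrez  # Biopython

Entrez.email = "you@example.org"  # NCBI requires a contact address; placeholder

# PMIDs of the 10,000 papers closest to the "prenatal nutrition" subject.
handle = Entrez.esearch(db="pubmed", term="prenatal nutrition", retmax=10000)
pmids = Entrez.read(handle)["IdList"]
handle.close()

# The openly accessible subset of full texts would then be fetched through
# the BioC API, e.g. (one request per identifier):
#   https://www.ncbi.nlm.nih.gov/research/bionlp/RESTful/pmcoa.cgi/BioC_json/{id}/unicode
```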

In the second step, Boguslav et al.'s classifiers are used to annotate the SOIs in our new papers. Classifiers are used instead of human annotators because (1) annotating 2,818 papers (i.e., 715,545 sentences) with experts is unrealistic, and (2) Boguslav et al. showed that the classifiers perform very well on such data. In order to ensure that the classifiers keep their good performance, we stay as close as possible to the scientific field of Boguslav et al.'s dataset (i.e., prenatal nutrition). Since not all ignorance categories are interesting for our recommendation setup (e.g., "question answered by this work" indicates that the research direction is already tackled in the study in question), a specific subset of ignorance categories is selected (as presented in the Statements of ignorance section): "full unknown", "explicit question", "problem or complication", "future work" and "future prediction".

In the third step, we gather, for each author in our augmented dataset, the abstract of all papers for which they are first author prior to their oldest paper in our augmented dataset using the OpenAlex API [ 30 ]. The rationale is that we want to be able to leverage these abstracts to build a profile of each author prior to what they published in the augmented dataset. Abstracts have the advantage of being generally openly accessible, even when the full papers are not. This renders our technique independent of the open-access status of the author’s papers. The full dataset of abstracts contains 85,342 abstracts.

However, our experiments involve vectorizing our augmented dataset with TF-IDF, and such a large dataset does not fit in memory with a reasonable amount of RAM (it would require more than 20 GB). Subsampling was therefore used to make it possible to evaluate the recommendations in different setups. We subsampled, at random, the augmented dataset containing the full papers down to 500 unique first authors. This corresponds to 152,189 sentences, among which 61,511 were annotated as SOIs and were therefore considered as recommendation candidates.

After subsampling our dataset of full papers, our dataset of abstracts was also subsampled so that it contains the same 500 first authors. In addition, the number of abstracts per author is limited to 5, chosen at random to avoid any biases. The rationale for selecting 5 abstracts is that authors with many abstracts (30, 50, or 100, but probably also 10) are likely to have a well-developed sense of their field and its potential research directions. Through this constraint, we therefore limited our scope to new researchers. In a real setting, outside the experiment, we would have considered all of an author's abstracts, even if there are more than 5. In the end, the subsampled version of our dataset of abstracts contained 1,923 abstracts, distributed as follows: 72 authors have 1 abstract, 57 have 2, 38 have 3, 42 have 4 and 291 have 5.

Note that our augmented dataset of papers (and its 61,511 SOIs) is exclusively used for testing the recommendations. Indeed, in order to train the methods in our experiments, only the dataset of abstracts is used.

Evaluation heuristics

As we cannot easily gather the evaluations of the 500 authors in our dataset to determine the ground truth related to the interestingness of SOIs, we defined heuristics that would allow us to assess the quality of our recommendations. We propose 3 heuristics that are summarized in Table  2 . Each of these 3 heuristics (the first-author heuristic, the co-authors heuristic and the concepts heuristic) has pros and cons, so considering them together can provide a more realistic assessment of the recommendation quality. We observed in preliminary experiments that author concepts from tools like OpenAlex are often (1) noisy (i.e., they contain irrelevant concepts for the author) and (2) generic (with concepts such as “computer science”). Because of that, we rely on concepts that can be extracted from the abstracts of the author using a named entity recognition tool. For the concepts heuristic, the concepts in the abstracts and in the SOIs are therefore retrieved using the named entity recognition tool from the work of Raza et al. [ 31 ].

Baseline methods

The recommendation of research directions based on researcher profiles has, to the best of our knowledge, not been investigated in the literature. Indeed, the literature on recommender systems for researchers is mainly focused on recommending papers [10, 32], not specific research directions inside these papers. Furthermore, contrary to our approach, modeling the user profile is generally not performed, as keyword search is proposed instead [32]. As the literature lacks a task like ours, as well as methods and baselines that would come with it, we propose two baselines in this study. The first one is based on BERT and the second one relies on classic machine learning models.

The first baseline sums up researcher profiles using sentence-BERT embeddings [24] (more precisely, "sentence-transformers/all-MiniLM-L6-v2") on our dataset of abstracts. For this baseline, the embedding of a researcher is the average of the embeddings of the abstracts for which they are the first author, where an abstract embedding is defined as the average of all sentence embeddings (given by sentence-BERT) in that abstract. We then also embed the SOIs with sentence-BERT, and the recommendation is provided by the Euclidean distance between the researcher embedding and the SOI embedding (the closer the two embeddings are in the space, the better). This baseline was the simplest well-performing method we could find during our preliminary experiments. In fact, because of the pervasive overfitting issues in this task (briefly discussed in the Discussion section), this simple model was one of the best and outperformed more complex approaches.
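A sketch of this baseline's profile construction, with the function name and data layout being our assumptions (each abstract given as a list of sentence strings):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

def baseline_profile(abstracts):
    """Single profile vector: mean over abstracts of the mean sentence embedding."""
    abstract_embs = [model.encode(sentences).mean(axis=0) for sentences in abstracts]
    return np.mean(abstract_embs, axis=0)
```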

The second baseline makes use of more classic machine learning models. In order to have a different representation of the features than the first baseline, we use TF-IDF to vectorize each abstract in the dataset of abstracts. Then, the training phase consists of learning the specific features of each author. In order to do that, we use, for a particular researcher, a classification setup with two classes: whether the abstract belongs to the researcher (i.e., the researcher is the first author) or not. By doing so, the model learns what is specific to the researcher in their abstracts. For the recommendation phase, the trained model is then used on all SOIs and the ones recommended to the author are the ones for which the probability of belonging to the researcher is the highest. The rationale behind this is that if a SOI is considered very close to what the researcher writes in their paper, then it may be a SOI of interest for them.

We noted during our preliminary experiments that, like the first strategy based on sentence-BERT, this last strategy was highly prone to overfitting (see the Discussion section). Because of that, more complex models (e.g., neural networks or random forests) yielded worse results in the recommendation phase. Simpler models were systematically better in this setup, as they seem to discard the noisy elements in the abstracts (i.e., the textual elements that are not necessary for the recommendation). One model that outperformed the others, because of this overfitting issue, was a logistic regression with a Ridge penalty and C=1. The hyperparameters of all models were optimized by cross-validation on an external dataset, namely that of Boguslav et al. [5, 6].
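A minimal sketch of this second baseline, under the assumption that abstracts and SOIs are available as raw strings; the function name and data layout are ours:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def tfidf_lr_baseline(own_abstracts, other_abstracts, sois, k=5):
    """Recommend the k SOIs most likely to have been written by the researcher."""
    texts = own_abstracts + other_abstracts
    labels = [1] * len(own_abstracts) + [0] * len(other_abstracts)

    tfidf = TfidfVectorizer()
    X = tfidf.fit_transform(texts)

    # Ridge (L2) penalty with C=1, as reported above.
    lr = LogisticRegression(penalty="l2", C=1.0).fit(X, labels)

    # Score every SOI by the probability that it "belongs" to the researcher.
    probs = lr.predict_proba(tfidf.transform(sois))[:, 1]
    top = np.argsort(-probs)[:k]
    return [(sois[i], float(probs[i])) for i in top]
```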

Note that we do not include in the baseline models solutions that rely on keyword searches (like, e.g., the one in the work of Lahav et al. [ 9 ]). This is because (1) searching by keywords does not correspond to our setup of modeling users and (2) it is not clear what keywords would best correspond to a researcher to model them. Also note that the proposed heuristics and baseline methods are as much to evaluate RecSOI as to assess the difficulty of our novel task.

Results

The results for the two baselines and RecSOI, given the three heuristics explained in the Evaluation heuristics section and Table 2, are presented in Table 3. Note that, for each heuristic, a different number of researchers is considered: 500 for the first-author heuristic, 59 for the co-authors heuristic and 496 for the concepts heuristic. First, the 500 authors in the first-author heuristic are set by design (we subsampled to have 500 unique first authors in our dataset). Second, concerning the co-authors heuristic, there are only 59 co-authors who are themselves first authors among the 500 first authors. Finally, there are 4 first authors among the 500 for whom there are no concepts in common between the concepts in the SOIs to recommend and the concepts in their abstracts. For these 4 authors, the evaluation of the recommendation quality (according to the concepts heuristic) cannot be computed, so they are not considered for this heuristic, which leaves 496 authors. Note that this has no impact on the use of the recommendation techniques in practice; it only means that, for the evaluation in our paper, the concepts heuristic cannot be computed for 4 authors.

Concerning the evaluation metric, as we want to know the proportion of researchers for which such a recommendation works, we use a score derived from the mean average precision at k (MAP@k). Indeed, while MAP@k is defined as

\(\text{MAP@}k = \frac{1}{n}\sum_{r \in R} \text{AP@}k(r),\)

where \(R\) is the set of researchers and \(n\) is the number of researchers in \(R\), we instead use

\(\text{MAP}_\exists\text{@}k = \frac{1}{n}\sum_{r \in R} \mathbb{1}\big[\exists\ \text{a relevant SOI in the top}\ k\ \text{recommendations for}\ r\big].\)

The percentage given by MAP\(_\exists\)@k therefore corresponds to the percentage of researchers for which at least one relevant research direction was given in their top k recommendations. In practice, we use MAP\(_\exists\)@5, MAP\(_\exists\)@10 and MAP\(_\exists\)@20 in our experiments. Note that the notion of a "good" or "relevant" recommendation is defined by our three heuristics from the Evaluation heuristics section.
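Computed directly from this definition, MAP∃@k is simply the fraction of researchers whose top-k list contains at least one relevant recommendation; a sketch with hypothetical relevance flags:

```python
def map_exists_at_k(ranked_relevance, k):
    """ranked_relevance: researcher -> list of booleans, one per ranked recommendation."""
    hits = sum(any(flags[:k]) for flags in ranked_relevance.values())
    return 100.0 * hits / len(ranked_relevance)

# Toy example: one researcher has a relevant SOI in their top 5, the other does not.
ranked = {"r1": [False, True, False, False, False], "r2": [False] * 5}
print(map_exists_at_k(ranked, k=5))  # 50.0
```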

Confidence intervals are also provided in Table  3 . As each percentage in the table is the mean of binary trials (i.e., “Did the researcher get at least one relevant SOI recommended, yes or no?”), the percentages follow a Binomial distribution. The intervals provided in the table are therefore defined accordingly.

In order to assess the quality of the solution when SOIs are drawn at random in the database, the expected random results are also shown in Table  3 . Picking at random, in our setup, corresponds to a hypergeometric distribution, as the question is: how many relevant SOIs would I get if I draw k SOIs from a large pool of SOIs from which a certain number are relevant for the researcher (depending on the chosen heuristic)? If k is the number of SOIs drawn (5, 10, or 20 in our experiments), n is the total number of SOIs in the dataset and \(n_R\) is the number of SOIs relevant for the researcher, the expected number of relevant SOIs that can be retrieved at random is defined for a hypergeometric distribution as \(k*(n_R/n)\) . As, in MAP \(_\exists\) @k, we consider for each author whether at least one relevant SOI has been found in the top k recommendations, “Random” in Table  3 corresponds to the percentage of authors for which \(k*(n_R/n) \ge 1\) .
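As a sanity check on the "Random" row, the expected number of relevant SOIs under random picking can be computed directly; the numbers below reproduce the worked example given in the next section (n_R = 99 for the median researcher):

```python
n = 61_511   # total SOIs in the dataset
n_r = 99     # SOIs relevant to the median researcher (first-author heuristic)

for k in (5, 10, 20):
    print(k, k * n_r / n)   # expected relevant SOIs: ~0.008, ~0.016, ~0.032
```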

Analysis of the results

The first thing to note when looking at Table 3 is that the problem of finding a relevant SOI, according to the first-author and co-authors heuristics, is very hard. One can see that the percentage of authors for which \(k*(n_R/n) \ge 1\), for \(k = 5, 10 \text{ and } 20\), is equal to 0% for these two heuristics. This means that for none of the authors does picking k SOIs at random lead to an expected number of relevant retrieved SOIs greater than or equal to 1. To provide a concrete example, let us consider the author with the median number of SOIs belonging to them in the dataset (i.e., relevance defined by the first-author heuristic), which is \(n_R = 99\). Given that the number of SOIs in our dataset is \(n = 61{,}511\), the expected number of relevant SOIs (according to the first-author heuristic) when \(k = 5\) SOIs are picked at random is 0.008 out of a maximum of 5. For \(k = 20\) SOIs picked at random, the expected number of relevant SOIs is 0.032 out of a maximum of 20. We are therefore far from having at least 1 relevant SOI for this "median researcher" when 5 or 20 statements are picked at random. It is therefore not possible to solve the problem by picking SOIs at random.

On another note, looking at the co-authors heuristic, recommending SOIs from co-authors' papers appears to be a very difficult task. Indeed, the best result is a MAP\(_\exists\)@20 of 23.7% for sentence-BERT and RecSOI. This may be explained by the fact that, in general, co-authors write very different papers when they are first authors themselves. A potential improvement of this heuristic could therefore be to weight the closeness between author r and the papers of their co-authors \(r'\) based on the order of the co-authors \(r'\) in the papers of r. We leave this challenging refinement of the co-authors heuristic as future work. But while the scores under the current definition may indicate that this heuristic is not the best for assessing recommendation quality, its results shed some light on the difficulty of our task.

Another element that is interesting to note is that increasing the number k of recommendations mainly benefits the methods that do not perform well to begin with. For instance, for MAP \(_\exists\) @20 and the first-author heuristic, the performance increases by only 4.8% for RecSOI, while it increases by 10% for TF-IDF+LR, with respect to the performance for MAP \(_\exists\) @10. A similar observation can be made for the concepts heuristic. This seems to indicate that there is a performance saturation for each heuristic. In other words, while methods performing poorly can always do better, correctly recommending starts to become extremely difficult (for a given heuristic) for the remaining percentage of authors.

Finally, we note that if we consider MAP\(_\exists\)@5 for the concepts heuristic, the problem can be solved with RecSOI for 75.6% of the authors. In other words, 75.6% of the authors have, in their top 5 recommendations, at least one SOI that shares one or several concepts found in their abstracts. We also note that this problem is not trivial, as if the SOIs were picked at random, 0% of the authors would get recommendations with a relevant topic.

While the results are high enough for the concepts heuristic and explicably low for the co-authors heuristic, it is not clear without further analysis why the results are not higher for the first-author heuristic. In the next section, we aim to clarify these results and better understand the errors.

Analysis of the first-author heuristic errors

This section aims to analyze why it is hard to obtain better results with the first-author heuristic. Given the description of the first-author heuristic in Table  2 , a recommended SOI is relevant for researcher r , in the context of our evaluation, if the SOI was in fact written by r .

Let’s now consider the worst recommendations according to this heuristic. In order to find them, we consider the authors for which the 5 best-ranked SOIs written by them have the worst ranks for them. This means that, while, ideally, these 5 SOIs written by r should be in the top 5 for r , they are, for instance, ranked \(\sim\) 10,000 or worse.

One pattern identified through this analysis is the "generic SOI issue". Examples of such issues are shown in Table 4. These SOIs are very generic and do not contain any specific concepts. Because of that, the SOIs cannot be recommended to the authors (e.g., to N. Liu in the table), despite being written by the authors themselves. This kind of SOI can be observed for several authors for whom the recommendation results were poor (according to the first-author heuristic). The poor performance under the first-author heuristic can thus be partly explained by the tendency of the recommender to discard generic SOIs (sentences written by the first author but containing few useful concepts) in favor of SOIs that contain more concepts relevant to the author but were written by someone else. One example of a SOI containing many concepts is the following statement from Monk et al. [33], which is recommended to Wei Wu (an author given as an example in Table 4): "In addition to these structural abnormalities, biochemical effects include reduced oxidative metabolism in the hippocampus and frontal cortex and altered fatty acid and myelin profiles throughout the brain have been observed."

An insight that can be highlighted by these examples of “generic SOIs” is that SOIs may, by nature, be more frequently generic than claims. While it is very difficult to automatically assess how generic a sentence is, one can argue that scientific claims, by nature, more often state their findings in detail. However, stating something that is unknown or unexpected inherently restricts the possibility of going into details. If this is true, this adds to the intrinsic complexity of the task of recommending research directions based on SOIs, as many SOIs would in fact be written in generic terms, such as in the cases of the authors in Table  4 .

One solution to this issue is to consider multi-sentence SOIs. Indeed, thanks to additional sentences, the “generic SOIs” could be contextualized, which could solve the issue. However, this solution suffers from a major drawback: the longer the text representing the statement is, the more difficult it is to adequately embed the concepts inside it. As a result, recommendation performance could suffer. Because of this, embedding statements at the sentence level, as is done in this paper, may be preferable.

This “generic SOI” issue lowers the probability for the author, in the experiment, to be recommended SOIs that they wrote themselves. However, this does not explain why the few specific SOIs written by that author are not recommended to them. We propose two reasons for this. First, it may be that the few SOIs written by the author do not directly relate to their work (e.g., when proposing future works within another field). Second, the few specific SOIs may relate to the current work of the author, but not the previous work used to build their profile. Indeed, let us recall that the abstracts used to build the researchers’ profiles are strictly prior (by construction) to their papers in the dataset of SOIs used for the experiment. This issue should however not be frequent with junior researchers, as their few papers are generally closely related.

If this second hypothesis is true, then this would suggest the relevance of analyzing the changes in researchers’ interests when recommending research directions. To solve this issue, one may try to combine, for instance, RecSOI (our contribution), which is focused on the past, with keyword-based search engines (such as the one of Lahav et al. [ 9 ]). While these search engines cannot provide recommendations that match the profile of the researchers based on their past work, as RecSOI does, they can help find interesting directions that are not aligned with the researcher’s past profile.

Extracting ignorance context

Because RecSOI is based on the extractor of SOIs from Boguslav et al. [ 5 ], one strong limitation of our recommender system is that it recommends single sentences only. While this is not an issue for the recommendation algorithm itself (as the context of the sentence is embedded in the sentence-BERT embedding), it can be very difficult for users to know if the recommendations are relevant based on a single sentence only. Indeed, contextual information outside the sentence may be important to understand the future work. This means that, in most cases, users would have to read the paper for each recommended SOI to really know if the statement is relevant for them to pursue or not. In this section, we show and evaluate different ways to provide context to the user.

In order to solve the issue presented above, the Insights on context extraction section will first present our findings on how to provide context to the user. Then, the Evaluation of the usefulness of context  section will describe our user study evaluating the usefulness of different ways of providing context.

Insights on context extraction

When research directions are recommended, the researcher must read the paper containing the research direction to get more information. In some cases, the researcher might realize that they are not interested in pursuing a particular direction. To avoid reading papers of uninteresting directions and to save time for researchers, we propose an analysis related to the extraction of contextual information about SOIs. This task is close to the extractive summarization task.

Providing contextual information about SOIs is not an easy task. Indeed, in many cases, SOIs are not connected to explicit pieces of information in the paper. For instance, some SOIs refer to information that is implicitly absent from, e.g., the experiment. An example of that from Qiu et al. [ 39 ] is: “Though consistent with studies of men and non-pregnant women, larger studies that include objective measures of sleep duration, quality and apnea are needed to obtain more precise estimates of observed associations.” The implicit information behind this SOI is that apnea was not really measured in the authors’ experiments (only if the participants were snoring), making the association of apnea with other measures difficult to objectively establish. This information, however, is not explicitly present in the paper and is implicitly inferred by the reader after reading the paper and the SOI.

When information about a SOI is explicitly provided, however, the relevant pieces of information are generally in the vicinity of the SOI. Indeed, more often than not, the sentences that immediately precede the SOI provide the necessary context to understand the statement. The problem, therefore, becomes “what are the passages in the SOI’s paragraph that contain enough contextual information?”

During our preliminary experiments, the most powerful methods to solve this problem were large language models (LLMs). More specifically, we observed that the results of available open-source LLMs were not comparable to those of LLMs such as GPT. In the next section, we show some results from GPT on this problem and propose an experiment to quantitatively assess the usefulness of LLMs with respect to naive heuristics.

Evaluation of the usefulness of context

One way to quickly grasp the context of a SOI is to provide the paragraph that contains that statement. This section evaluates how often it is the case that providing the paragraph is useful to understand the SOI. On top of that, because paragraphs can sometimes be lengthy, we also evaluate when highlighting shorter passages within the paragraph is helpful. In order to evaluate the usefulness of highlighting, we use both a simple heuristic (highlighting the sentence before the SOI) and a more complex solution (using prompt engineering to tune GPT-3.5 [ 40 ] to provide relevant highlights in the paragraph).

To present our evaluation and the corresponding results, this section comprises three parts. The Experimental setup  section first explains the overall experimental setup. Then, the GPT prompt engineering and other LLMs  section digs deeper into the prompt engineering phase that led to the results of GPT-3.5 in the experiment. This section also extends to other LLMs and their results. Finally, our results are reported and analyzed in the Results and analysis  section.

The dataset used for this experiment is a subset of Boguslav et al.’s dataset [ 5 , 6 ]. Because the purpose of this experiment is to assess if the additional context is useful to better understand the SOI, and not to assess if the SOI is indeed about ignorance, a manually annotated dataset is used. Furthermore, to focus the evaluator’s attention on the context rather than on the SOI itself, only SOIs that explicitly stated future work were selected. With this selection, we expect that the evaluators will focus on whether the context helps understand what the future direction is about and not how to use the SOI as a research direction.

The interface is composed of two panels: a main panel to gather the evaluation of the evaluators and a secondary panel to get some optional comments. The main panel of the interface used for the evaluation can be seen in Fig.  2 . In this main panel, the evaluator can see the SOI (highlighted in yellow), the paragraph surrounding this statement, and some blue highlights depending on the strategy. The question for each SOI was: “Is this paragraph and its potentially highlighted part(s) useful for you to understand what this statement of ignorance is about?” While waiting for an answer (“Useful” or “Not Useful”), the interface records the time it takes for the evaluator to give their answer. In this main panel, the abstract of the paper containing the SOI is accessible by clicking on a button. Each time a decision, “Useful” or “Not Useful”, is made by the evaluator, the secondary panel opens, asking whether the evaluator has any comment regarding the decision they just made. For each evaluator, the evaluation ends when a “Useful” or “Not Useful” decision has been provided for all SOIs.

Figure 2. Main panel of the interface used for the experiment about context. The paragraph (from McGrath et al. [41], in this example) in which the future work sentence (in yellow) is mentioned is provided. In this example, the blue highlight corresponds to the important contextual sentences according to GPT-3.5.

The experiments are built upon 30 SOIs selected at random. Each SOI is presented three times during the experiment: once with its contextual paragraph but no highlight, once with the previous sentence highlighted in the paragraph, and once with highlights provided by GPT in the paragraph. This resulted in 90 “Useful”/“Not Useful” trials to perform per participant. The strategies behind the highlights were not provided to the participants (i.e., they only saw highlights, without knowing what generated them). Each participant received the trials in a different order. The randomization was designed by block: for each SOI, each of the three highlight strategies is randomly assigned to one of three blocks. The SOIs inside each block are then further shuffled. This ensures that the same statements with two highlight strategies are not close to each other in the experiment. A further condition was added to filter and only keep the randomized orders where at least five trials separate two highlight strategies on the same SOIs. This condition is necessary because, in some rare cases, two highlight strategies for the same SOI can be assigned at the end of a block and the beginning of the next block. In that case, the same SOI (but with different highlight strategies) would be seen twice in a row, which would bias the experiment as the evaluator would remember their previous judgment when making the second one.
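The blocked randomization with the minimum-separation constraint can be implemented by rejection sampling. This sketch follows the design described above (one strategy per block per SOI, shuffled blocks, resampled until same-SOI trials are sufficiently far apart); all names are ours, and "five trials separate" is interpreted as an index difference of at least five:

```python
import random

STRATEGIES = ("no_highlight", "previous_sentence", "gpt")

def randomize_trials(n_sois=30, min_gap=5):
    """Return a list of (soi_index, strategy) trials satisfying the separation constraint."""
    while True:
        # Randomly assign each SOI's three strategies to the three blocks.
        blocks = [[], [], []]
        for soi in range(n_sois):
            for block, strategy in zip(blocks, random.sample(STRATEGIES, 3)):
                block.append((soi, strategy))
        for block in blocks:
            random.shuffle(block)
        trials = [t for block in blocks for t in block]

        # Accept only orders where repeats of the same SOI are >= min_gap apart.
        last_pos, ok = {}, True
        for i, (soi, _) in enumerate(trials):
            if soi in last_pos and i - last_pos[soi] < min_gap:
                ok = False
                break
            last_pos[soi] = i
        if ok:
            return trials
```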

The eight evaluators are all researchers in bioinformatics or have a strong knowledge of biology. This ensures that (1) as researchers, they have a good understanding of what constitutes a future work statement in a research paper, and (2) they have sufficient background to understand the biomedical papers in our dataset. The experiment took around 60 to 90 minutes, depending on the participant.

In the next section, we discuss in greater length the use of LLMs to provide the highlighted parts of our experiment.

GPT prompt engineering and other LLMs

After some prompt engineering, we discovered that developing a very complex prompt was not necessary to obtain good results on our extraction task. The prompt that worked the best with GPT-3.5 was the following:

Given the following paragraph from a scientific paper: "{PARAGRAPH}" Please provide the most relevant passage(s) from this paragraph that can help a researcher understand "{STATEMENT OF IGNORANCE}" in the paragraph and that is not "{STATEMENT OF IGNORANCE}" itself. Please do not add anything other than the passage(s) in your response.

with {PARAGRAPH} being the paragraph that contains the SOI referred to by {STATEMENT OF IGNORANCE}. GPT-4 [42] offered similar results on our task at a much greater cost. Given the lack of difference in the results, GPT-3.5 is used for the whole experiment.
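For illustration, the extraction can be run with the OpenAI Python client roughly as follows; the model identifier and the wrapper function are our assumptions, not the authors' exact script:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_context(paragraph: str, soi: str) -> str:
    """Ask the model for the passage(s) in `paragraph` that contextualize the SOI."""
    prompt = (
        f'Given the following paragraph from a scientific paper: "{paragraph}" '
        f'Please provide the most relevant passage(s) from this paragraph that can '
        f'help a researcher understand "{soi}" in the paragraph and that is not '
        f'"{soi}" itself. Please do not add anything other than the passage(s) in '
        f'your response.'
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```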

Other LLMs, particularly open-source ones like BLOOM [ 43 ], have been tested on our task with different prompts. Unfortunately, none could rival the performance of GPT-3.5. Common issues were (1) not sticking exactly to (i.e., modifying) the text of the paper and (2) adding additional, non-requested information. For the worst LLMs on our task, the outputs were not relevant.

Results and analysis

Table  5 contains the preferences of each participant, where a preference of a given highlight strategy A over B is defined by the fact that the participant found A useful but not B for the same future work statement and the same paragraph. Each row in the table provides the percentage of time, for the same paragraph, a participant preferred a certain combination of methods (with GPT highlights (“GPT” in the table), with the previous sentence highlighted (“PS” in the table), and the paragraph without any highlight (“Paragraph” in the table)). For instance, P1 having 10% for “Both GPT and PS” means that for 10% of the provided future work statements, P1 considered that highlighting using GPT and highlighting using the previous sentence in the paragraph was useful, but having the paragraph only without highlights was not useful.

A paired t-test analysis over each pair of methods and all participants shows that there is no one-size-fits-all solution. Indeed, the participants can be clustered into preference groups, which cancel out when considered all together. This signals that a more detailed analysis must be performed, in particular to identify clusters among participants.

The first thing to note is that, while the preferences for methods are spread differently among the participants, considering everything useful (the first row, "Everything", in Table 5) is always the most frequent option. In the most extreme cases, P1 and P3 consider that everything is useful (all the possibilities, i.e., the paragraph without highlights, with the previous sentence highlighted and with GPT's highlights) more than half of the time.

A second thing to note is that the case where nothing is useful (the last row, "Nothing", in Table 5) always has a low percentage of preference (except for P4). In other words, presenting something alongside the future work statement (the paragraph with or without highlights) was almost always useful for the participants (100% of the time for P1, 83.3% for P2 and P3, 56.7% for P4, 90% for P5 and P7, 76.7% for P6 and 96.7% for P8).

Another trend is that participants tend to prefer GPT highlights ("GPT Only" line in the table), no highlights ("Paragraph Only" line in the table), or both ("Both GPT & Paragraph" line in the table) over naively highlighting the previous sentence ("PS Only" line in the table). Indeed, it can be seen in Table 6 that P2 and P4 consider GPT significantly more useful than the previous sentence ("PS"), P5 considers that providing the paragraph without highlights ("Paragraph") is more useful than the other options, and P3 considers that both GPT and no highlights ("Paragraph") are better than highlighting the previous sentence ("PS") only. This result is expected, as it means the sentence before the future work statement is not always related to the future work in question. In these cases, it is better to either use a smarter strategy (e.g., GPT) or not provide anything at all.

As the most frequent case, for all participants, is when everything is useful, the usefulness of highlights when the paragraph without highlights is considered useful was analyzed (see Table  7 ). The purpose of this analysis is to assess the usefulness of highlights when the participants detected useful information in the paragraph when there were no highlights. In that case, when the paragraph without highlights is considered useful, all participants considered that highlighting the previous sentence was not useful. This indicates that when the participants can identify the useful information in the paragraph, highlighting the previous sentence is not useful for them. This is more rarely the case for GPT, where only P5, P7, and P8 significantly considered that GPT’s highlights were not useful when the paragraph alone is useful.

Another interesting insight comes from the opposite case, i.e., when the paragraph is considered not useful by the participants (see Table 8). A paragraph without highlights can be considered not useful for two reasons: (1) the paragraph does not contain useful information for understanding the future work statement, or (2) the useful information is hidden in noise, so the participant did not see it. Five participants considered highlights significantly useful in this context: P1, P2, and P6 consider any kind of highlighting useful in that case, while P7 prefers highlights provided by GPT and P8 prefers highlighting the previous sentence.

Several things can be concluded from this analysis. First, the future work statement should always be shown embedded in the paragraph in which it appears, as it is very rare that this is not useful (see the last row of Table 5). Second, as the researcher may have missed useful information in the paragraph, and as providing highlights rarely hurts (see the “Paragraph Only” row of Table 5 for the percentage of the time the paragraph was considered useful but the highlights were not), some highlights should be proposed along with the paragraph containing the future work statement. Finally, these highlights should come from an advanced method (such as GPT in our study) rather than a naive one. However, our study also shows that even a very advanced highlighting method (such as one of the best-performing LLMs) can have difficulty competing with providing no highlights at all. To be useful, highlights must therefore come from a high-performing method that can identify the pieces of information that may be hidden in the paragraph and that may help understand the future work statement.

Several elements of discussion arise from our study of recommending research directions. First, we discuss the different ways to embed researcher profiles. Second, we show how using interpretable models highlighted interesting insights about the task. Third, we discuss the difficulty of recommending research directions based on researcher profiles. Fourth, we mention fairness issues that can arise from such recommendations. Fifth, as no study is free of limitations, we discuss the limitations of our study in order to suggest future work. Finally, we sum up the significance of our work for the scientific community as a whole.

On the different ways to embed researcher profiles

Many different elements can be considered when embedding researcher profiles for recommending research directions. In addition to a summary of the previous abstracts (which we compute with sentence-BERT), other elements can be taken into account: embeddings summarizing full previous papers, concepts retrieved from previous abstracts or papers by a concept recognizer, concepts in co-authors’ abstracts or papers, concepts related to the papers cited by the author’s papers, etc.

Each of these strategies has pros and cons. For instance, considering whole papers to represent a researcher (instead of abstracts only) provides more information but can also bury the important information in a mass of irrelevant text. Furthermore, alternative strategies can be of interest in setups other than the one considered in this paper. For instance, if no abstract or paper is available for a researcher (because they are very new to research), recent information (abstracts, papers, and/or concepts) about the researcher’s supervisor can be used instead.
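To make the profile-building discussion concrete, here is a minimal sketch of the abstract-averaging strategy, assuming the sentence-transformers library and a generic pre-trained model; the exact model, weighting scheme, and ranking procedure used by RecSOI may differ.

```python
# Hedged sketch: build a researcher profile by averaging sentence-BERT
# embeddings of past abstracts, then rank candidate SOIs by cosine similarity.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumption: any sentence-BERT model

def researcher_profile(abstracts: list[str]) -> np.ndarray:
    """Embed each past abstract and average the embeddings into one profile vector."""
    embeddings = model.encode(abstracts)  # shape: (n_abstracts, dim)
    return embeddings.mean(axis=0)        # shape: (dim,)

def rank_sois(profile: np.ndarray, sois: list[str], top_k: int = 5) -> list[str]:
    """Rank candidate SOIs by cosine similarity to the researcher profile."""
    soi_emb = model.encode(sois)
    sims = soi_emb @ profile / (
        np.linalg.norm(soi_emb, axis=1) * np.linalg.norm(profile) + 1e-12
    )
    order = np.argsort(-sims)[:top_k]
    return [sois[i] for i in order]
```

Averaging keeps the profile cheap to update when new abstracts arrive; the alternatives listed above (full papers, concepts, co-author or supervisor signals) would replace or enrich the `abstracts` input.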

What can be learned from the interpretable models?

Interpretable models are models that give users access to their inner workings [44, 45]. Examples are sparse linear models, whose weights can be extracted and studied, and decision trees, with their human-friendly representation. As we use interpretable models in our experiments, such as linear models over TF-IDF features, we can leverage the information they provide about how they model the data and the task. In fact, our interpretable models show that models easily overfit when performing the recommendation. Despite this issue, the weights of the linear models provide important clues about the causes of this overfitting.

An analysis of the interpretable models shows that recommender systems can latch onto spurious features. For instance, if a researcher often makes a certain typo or refers to a specific city, then a SOI containing this typo or city may be used by the model as an important feature for recommendations to this researcher. This, of course, leads to poor performance at recommendation time. Therefore, the simpler the model, the less likely it is to overfit author-specific terms that are in fact irrelevant to the recommendation. The sketch below illustrates how such spurious features can be spotted.
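The following is a hedged sketch of how linear-model weights over TF-IDF features can be inspected for spurious terms; the texts and labels are hypothetical toy data, not the paper's training set.

```python
# Hedged sketch: train a linear model on TF-IDF features and list the terms
# with the largest absolute weights. Author-specific artifacts (typos, city
# names) appearing here are a sign of overfitting.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "future work is needed on maternal vitamin d intake",  # hypothetical relevant text
    "we thank the city of denver for its support",         # hypothetical irrelevant text
]
labels = [1, 0]  # hypothetical relevance labels

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

terms = vectorizer.get_feature_names_out()
top = sorted(zip(clf.coef_[0], terms), key=lambda wt: -abs(wt[0]))[:10]
for weight, term in top:
    print(f"{term}: {weight:+.3f}")
```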

However, these interpretable models also show that recommendations can sometimes be correctly performed with a combination of a few concepts that would otherwise be buried in long texts.

On the difficulty of the task

The results from our experiment in the Evaluation section show that the task at hand is in fact very difficult. This is partly due to SOIs that are very generic.

Indeed, consider the first-author heuristic: in our experiments, a recommendation is considered good for a researcher r if the recommended SOI was in fact written by r. However, if a particular SOI written by r is so generic that it is of no interest to researchers in the field, then the probability of it appearing in the top 5 recommendations for r themselves is very low. Furthermore, if r tends to write all of their SOIs in such a way, then none of the SOIs written by r may be recommendable to r, which contributes to a bad result according to the first-author heuristic. The heuristic based on concepts alleviates this issue, as any SOI that contains concepts also present in the abstracts of r is considered a relevant recommendation candidate.

Another solution to this issue, which we leave as future work, is to gather experts in the field covered by the dataset of SOIs (in our case, experts in prenatal nutrition) and ask them, “would this researcher be interested in working on at least one of these 5 SOIs?”. This solution, however, requires the gathered experts to study the researchers in the dataset in order to know their work and judge whether the recommendations are relevant to them.

Possible fairness issues in the recommendation of research directions

While recommending research directions can make it easier for junior researchers to navigate their field, fairness issues can also arise. In this section, we highlight these potential issues to raise awareness and inspire future work on the subject.

There are three categories of persons usually considered targets of fairness issues [46]: consumers, producers, and subjects. Consumers are the users of the recommender system: in our case, the researchers using our system to obtain recommendations of research directions. Producers are the persons producing the items that are recommended: in our case, the authors of the SOIs and, therefore, of the papers containing them. Finally, subjects are the persons concerned by the studies in these SOIs. For instance, if a SOI states that additional studies are required about a certain disease in a certain population, this population can also be the target of unfairness.

A first consumer fairness issue relates to researchers who are non-native English speakers: the more the sentences in their abstracts deviate from conventional English phrasing, the harder it can be to match the researcher’s embedding to the SOI embeddings. A second consumer population that can be the target of unfairness is the most junior researchers, who may not yet use, in their few papers, the vocabulary of the field in the way that is well established among more senior researchers. For these two issues, both related to under-represented uses of the language, fine-tuning the sentence-BERT embedding model on examples from non-native and junior researchers can be a solution. Finally, authors working on niche subjects can also suffer from unfairness. This last point, while it also deserves attention, is more easily tackled, and our proposed method RecSOI already does so: a niche research direction will be recommended by RecSOI as long as the SOI about this niche subject is close to at least one abstract of the researcher. However, RecSOI relies on an embedding model trained on data that may not contain many documents about the niche subject. Because of that, the resulting embeddings of sentences about niche subjects may be of lower quality, which can lower the recommendation performance for those subjects.

The fairness issues related to producers mirror those related to consumers. Some SOIs may be recommended less often, and therefore less often proposed as research directions, if they were written by non-native English speakers or junior researchers, or if they state ignorance about a niche subject. This means that the work of these researchers is less likely to be used as a basis for future work. As for the consumer issues, fine-tuning the sentence-BERT model so that its embeddings are equally good for unconventionally phrased research sentences in English can be a solution; a sketch of such fine-tuning is given below.
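The following is a minimal, hedged sketch of the suggested mitigation: fine-tuning a sentence-BERT model so that unconventionally phrased sentences embed close to conventional paraphrases. The training pairs and the model name are hypothetical, and the actual mitigation would require a carefully curated dataset.

```python
# Hedged sketch: fine-tune sentence-BERT on (unconventional, conventional)
# sentence pairs so that both phrasings land close together in embedding space.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

pairs = [
    ("we not yet know the effect of this diet",     # hypothetical non-native phrasing
     "the effect of this diet is not yet known"),   # conventional paraphrase
    ("more researches are needed on infant sleep",
     "more research is needed on infant sleep"),
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumption: any sentence-BERT model
examples = [InputExample(texts=[a, b]) for a, b in pairs]
loader = DataLoader(examples, shuffle=True, batch_size=16)
loss = losses.MultipleNegativesRankingLoss(model)  # pulls paired sentences together

model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=10)
```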

However, the populations that may be most impacted by some SOIs being recommended less often are the subjects of the related studies. If, for instance, the medical aspects of a population from a certain non-English-speaking country are almost exclusively studied by researchers from that country, then medical abnormalities and other research directions related to this population will be recommended less often. However, if the producer fairness issues mentioned above are solved, and all SOIs become equally likely to be recommended, then this fairness issue may be solved at the same time.

Limitations of this work

Like all studies, our work comes with a set of limitations that are important to consider. First of all, our dataset is focused on prenatal nutrition papers. While additional work on generalization is needed, it is important to note that it is difficult to gather 500 or more authors from the literature to assess recommendations made for them. Likewise, gathering senior researchers to read researcher profiles and assess recommendations is also very difficult. Focusing on a specific field is what makes it feasible to develop classifiers with good performance for automatically annotating the SOIs in a large set of papers.

In addition to its focus on a specific field, our work also focuses on sentences: SOIs are assumed to be contained in single sentences, both in our work and in that of Boguslav et al. [5, 6]. In some cases, however, multiple sentences are needed to describe the ignorance comprehensively. While we leave the detection and recommendation of multi-sentence SOIs as future work, it is worth noting that this can make the task even more difficult, as more words and concepts would have to be encoded into one embedding.

Another limitation of our work, which would require further studies and a dedicated solution, is that we consider SOIs without knowing whether the statements have already been answered in recent papers. Determining whether a solution to the problem has been provided is very difficult for many reasons, among them: (1) the vocabulary and the level of formalization in the paper stating the problem and in the paper providing the solution may differ, and (2) proposed solutions are generally not complete answers to the question: they make specific hypotheses, have limitations, etc. This makes it very hard to automatically answer the question “is this lack of knowledge no longer a lack of knowledge?”.

Next comes a limitation specific to our novel task and its solution: we implicitly assume that the papers containing the research directions to recommend are freely and openly accessible. This is certainly not always the case, but research directions inside papers can hardly be extracted if the papers are not accessible. One solution is to propose research directions, as we do, for openly accessible papers, and to recommend whole papers based on metadata when the papers are not accessible. See Haruna et al. [20] for a solution for recommending papers when only metadata are available.

Significance of this work

Despite the inherent limitations and the need for future exploration, our study’s findings can prove useful far beyond our dataset on prenatal nutrition and the broader scope of biomedical research. Our work belongs to the larger question of how we keep track of what we know and recognize what we have yet to discover. Such an approach is particularly crucial given the accelerating pace of scientific output [1]. We believe that our study is one of the first to propose a systematic method to counter the decline in innovation, disruptiveness, and return on scientific investment [2, 3, 4]. By providing a structured approach to understanding and organizing existing knowledge, the task and the system we propose could be of great utility to other scientific fields, promoting efficient navigation through extensive literature and assisting in the identification of under-explored areas.

This paper introduced a new task, recommending research directions based on statements of ignorance (SOIs), and a system to solve it. While many papers in the literature focus on recommending scientific papers, our work goes further by recommending specific sentences in those papers that can lead to new research directions. As the mass of scientific papers grows ever larger, we believe it is important to develop solutions for navigating it. This is especially true for junior researchers who do not yet know all the potential directions in their field.

Our solution, RecSOI (Recommender of research directions using Statements Of Ignorance), leverages weighted BERT-like embeddings of previous abstracts to build researcher profiles. These profiles can then be used to find SOIs that are relevant to them. Different heuristics are used to estimate the relevance of our recommendations. For the concepts heuristic, we show that RecSOI achieves a \(\text{MAP}_\exists@5\) of 77.2%, meaning that 77.2% of the authors have, in their top 5, at least one SOI containing at least one concept present in their abstracts.
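To make this reading of the metric concrete, here is a minimal sketch under the simplifying assumption that a recommendation is relevant exactly when the SOI shares at least one extracted concept with the researcher's abstracts; the paper's concept extraction and exact metric computation may be more involved, and all names are illustrative.

```python
# Hedged sketch: fraction of authors with at least one concept-relevant SOI
# in their top-k recommendations. Concept extraction is assumed done upstream.
def hit_at_k(ranked_soi_concepts: list[set[str]], author_concepts: set[str], k: int = 5) -> bool:
    """True if any of the top-k recommended SOIs shares a concept with the author's abstracts."""
    return any(soi & author_concepts for soi in ranked_soi_concepts[:k])

def fraction_of_authors_with_hit(per_author: dict) -> float:
    """per_author maps author id -> (ranked per-SOI concept sets, author's concept set)."""
    hits = sum(hit_at_k(sois, concepts) for sois, concepts in per_author.values())
    return hits / len(per_author)

# Toy example: one of two authors has a concept hit in their top 5.
toy = {
    "a1": ([{"vitamin d"}, {"sleep"}], {"vitamin d", "pregnancy"}),
    "a2": ([{"snoring"}], {"breastfeeding"}),
}
print(fraction_of_authors_with_hit(toy))  # 0.5
```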

Furthermore, as one of the contributions of this paper is the task itself, we also provide a detailed discussion of its uses and limitations. Among the important elements discussed, we enumerate potential fairness issues that can arise when dealing with this task.

Our work opens the door to many avenues of future work. One of the most important is to detect whether a stated lack of knowledge is no longer a lack of knowledge in the literature. This requires, for a specific SOI, browsing the literature to find whether a paper fully answers the stated ignorance. While this is a very hard problem, we believe it is one of the most important in the field of the “science of science”.

Other future work relates more closely to the solution brought in this paper. First, multi-sentence extraction of SOIs can be developed and then used for recommendation. Second, a more sophisticated metric learning procedure that does not fall into the overfitting trap can be developed to build researcher profiles. This can be achieved in two ways: (1) by defining a metric that considers the different sentence-BERT dimensions without sticking too closely to the training data; and/or (2) by defining a new way to embed researcher profiles, so that metrics applied to these profiles do not overfit. Another interesting direction would be to add a name disambiguation module when extracting past abstracts, to make sure that the author of the abstracts is indeed the author for whom recommendations are requested. Yet another way to build researcher profiles would be to determine which of a researcher’s past abstracts are still relevant to characterizing their current research. Finally, a cross-field recommender system can be developed; one can argue that the expertise and interests of some researchers cross scientific domains (e.g., a machine learning researcher following a research direction from the field of AI law).

Through the contributions of this paper, we aim to help science overcome one of its largest current challenges: helping researchers find research directions that are relevant to them.

Availability of data and materials

The annotated dataset of Boguslav et al. is available at https://github.com/UCDenver-ccp/Ignorance-Question-Work-Full-Corpus . The RecSOI code, as well as the related data and resources, are available at https://github.com/AdrienBibal/RecSOI . Additional papers can be freely accessed with the PubMed Entrez API, using the query “prenatal nutrition” (without quotes); a minimal sketch is given below. Information about authors, their co-authors, and their papers can be freely obtained from the OpenAlex API.
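As an illustration of the data access described above, here is a minimal sketch of querying the NCBI Entrez E-utilities for PubMed IDs and abstracts matching the query; the retmax value is illustrative and error handling is omitted.

```python
# Hedged sketch: fetch PubMed IDs for "prenatal nutrition" via esearch,
# then retrieve the corresponding abstracts via efetch.
import requests

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

search = requests.get(
    f"{BASE}/esearch.fcgi",
    params={"db": "pubmed", "term": "prenatal nutrition", "retmax": 100, "retmode": "json"},
).json()
pmids = search["esearchresult"]["idlist"]

abstracts = requests.get(
    f"{BASE}/efetch.fcgi",
    params={"db": "pubmed", "id": ",".join(pmids), "rettype": "abstract", "retmode": "text"},
).text
print(abstracts[:500])
```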

Abbreviations

LLM: Large language model

MAP@k: Mean average precision at k

MAP∃@k: Mean average precision at k with at least one relevant recommendation

PS: Previous sentence

RecSOI: Recommender of research directions using statements of ignorance

SOI: Statement of ignorance

References

1. Bornmann L, Haunschild R, Mutz R. Growth rates of modern science: A latent piecewise growth curve approach to model publication numbers from established and new literature databases. Humanit Soc Sci Commun. 2021;8(1):1–15.

2. Park M, Leahey E, Funk RJ. Papers and patents are becoming less disruptive over time. Nature. 2023;613(7942):138–44.

3. Cowen T, Southwood B. Is the rate of scientific progress slowing down? GMU Work Pap Econ. 2019;21–13:1–46.

4. Boeing P, Hünermund P. A global decline in research productivity? Evidence from China and Germany. Econ Lett. 2020;197:109646.

5. Boguslav MR, Salem NM, White EK, Leach SM, Hunter LE. Identifying and classifying goals for scientific knowledge. Bioinforma Adv. 2021;1(1):vbab012.

6. Boguslav MR, Salem NM, White EK, Sullivan KJ, Bada M, Hernandez TL, et al. Creating an ignorance-base: Exploring known unknowns in the scientific literature. J Biomed Inform. 2023;143:104405.

7. Achakulvisut T, Bhagavatula C, Acuna DE, Kording K. Claim extraction in biomedical publications using deep discourse model and transfer learning. 2019. arXiv:1907.00962.

8. Stab C, Kirschner C, Eckle-Kohler J, Gurevych I. Argumentation mining in persuasive essays and scientific articles from the discourse structure perspective. In: Proceedings of the Workshop on Frontiers and Connections between Argumentation Theory and Natural Language Processing (ArgNLP). CEUR-WS; 2014. pp. 21–5.

9. Lahav D, Falcon JS, Kuehl B, Johnson S, Parasa S, Shomron N, et al. A search engine for discovery of scientific challenges and directions. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 36. AAAI: Washington, DC, United States; 2022. pp. 11982–90.

10. Bai X, Wang M, Lee I, Yang Z, Kong X, Xia F. Scientific paper recommendation: A survey. IEEE Access. 2019;7:9324–39.

11. Pan C, Li W. Research paper recommendation with topic analysis. In: Proceedings of the International Conference on Computer Design and Applications. vol. 4. 2010. pp. V4–264.

12. Sugiyama K, Kan MY. Scholarly paper recommendation via user's recent research interests. In: Proceedings of the Annual Joint Conference on Digital Libraries. ACM: New York, NY, United States; 2010. pp. 29–38.

13. Nascimento C, Laender AH, da Silva AS, Gonçalves MA. A source independent framework for research paper recommendation. In: Proceedings of the Annual International ACM/IEEE Joint Conference on Digital Libraries. ACM: New York, NY, United States; 2011. pp. 297–306.

14. Jiang Y, Jia A, Feng Y, Zhao D. Recommending academic papers via users' reading purposes. In: Proceedings of the ACM Conference on Recommender Systems. ACM: New York, NY, United States; 2012. pp. 241–4.

15. Winoto P, Tang TY, McCalla G. Contexts in a paper recommendation system with collaborative filtering. Int Rev Res Open Distrib Learn. 2012;13(5):56–75.

16. Lee J, Lee K, Kim JG. Personalized academic research paper recommendation system. 2013. arXiv:1304.5457.

17. Achakulvisut T, Acuna DE, Ruangrong T, Kording K. Science Concierge: A fast content-based recommendation system for scientific publications. PLoS ONE. 2016;11(7):e0158423.

18. Zhao W, Wu R, Liu H. Paper recommendation based on the knowledge gap between a researcher's background knowledge and research target. Inf Process Manag. 2016;52(5):976–88.

19. Hassan HAM. Personalized research paper recommendation using deep learning. In: Proceedings of the Conference on User Modeling, Adaptation and Personalization. ACM: New York, NY, United States; 2017. pp. 327–30.

20. Haruna K, Akmar Ismail M, Damiasih D, Sutopo J, Herawan T. A collaborative approach for research paper recommender system. PLoS ONE. 2017;12(10):e0184516.

21. Acuna DE, Nagre K, Matnani P. EILEEN: A recommendation system for scientific publications and grants. 2021. arXiv:2110.09663.

22. Zhu Y, Lin Q, Lu H, Shi K, Qiu P, Niu Z. Recommending scientific paper via heterogeneous knowledge embedding based attentive recurrent neural networks. Knowl-Based Syst. 2021;215:106744.

23. Firestein S. Ignorance: How it drives science. New York, NY, United States: OUP; 2012.

24. Reimers N, Gurevych I. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In: Proceedings of the Conference on Empirical Methods in Natural Language Processing and the International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). ACL: Stroudsburg, PA, United States; 2019. pp. 3982–92.

25. Lee J, Yoon W, Kim S, Kim D, Kim S, So CH, et al. BioBERT: A pre-trained biomedical language representation model for biomedical text mining. Bioinformatics. 2020;36(4):1234–40.

26. Peng Y, Yan S, Lu Z. Transfer learning in biomedical natural language processing: An evaluation of BERT and ELMo on ten benchmarking datasets. In: Proceedings of the BioNLP Workshop and Shared Task. ACL: Stroudsburg, PA, United States; 2019. pp. 58–65.

27. Rocchio JJ. Relevance feedback in information retrieval. In: The SMART Retrieval System: Experiments in Automatic Document Processing. New York: Prentice Hall; 1971. pp. 313–23.

28. National Center for Biotechnology Information. Entrez programming utilities help. 2010. http://www.ncbi.nlm.nih.gov/books/NBK25501 . Accessed 12 May 2023.

29. Comeau DC, Wei CH, Islamaj Doğan R, Lu Z. PMC text mining subset in BioC: About three million full-text articles and growing. Bioinformatics. 2019;35(18):3533–5.

30. Priem J, Piwowar H, Orr R. OpenAlex: A fully-open index of scholarly works, authors, venues, institutions, and concepts. 2022. arXiv:2205.01833.

31. Raza S, Reji DJ, Shajan F, Bashir SR. Large-scale application of named entity recognition to biomedicine and epidemiology. PLoS Digit Health. 2022;1(12):e0000152.

32. Beel J, Gipp B, Langer S, Breitinger C. Paper recommender systems: A literature survey. Int J Digit Libr. 2016;17:305–38.

33. Monk C, Georgieff MK, Osterholm EA. Maternal prenatal distress and poor nutrition - Mutually influencing risk factors affecting infant neurocognitive development. J Child Psychol Psychiatry. 2013;54(2):115–30.

34. Liu N, Mao L, Sun X, Liu L, Yao P, Chen B. The effect of health and nutrition education intervention on women's postpartum beliefs and practices: A randomized controlled trial. BMC Public Health. 2009;9:1–9.

35. Wu Q, Huang Y, van Velthoven MH, Wang W, Chang S, Zhang Y. The effectiveness of using a WeChat account to improve exclusive breastfeeding in Huzhu County Qinghai Province, China: Protocol for a randomized control trial. BMC Public Health. 2019;19:1–10.

36. Harris MA, Reece MS, McGregor JA, Wilson JW, Burke SM, Wheeler M, et al. The effect of omega-3 docosahexaenoic acid supplementation on gestational length: Randomized trial of supplementation compared to nutrition education for increasing n-3 intake from foods. BioMed Res Int. 2015;2015.

37. Li J, Liu J, Zhang C, Liu G, Leng J, Wang L, et al. Effects of lifestyle intervention of maternal gestational diabetes mellitus on offspring growth pattern before two years of age. Diabetes Care. 2021;44(3):e42–4.

38. Lin HW, Feng HX, Chen L, Yuan XJ, Tan Z. Maternal exposure to environmental endocrine disruptors during pregnancy is associated with pediatric germ cell tumors. Nagoya J Med Sci. 2020;82(2):323.

39. Qiu C, Enquobahrie D, Frederick IO, Abetew D, Williams MA. Glucose intolerance and gestational diabetes risk in relation to sleep duration and snoring during pregnancy: A pilot study. BMC Women's Health. 2010;10:1–9.

40. OpenAI. Introducing ChatGPT. 2022. https://openai.com/blog/chatgpt . Accessed 4 Apr 2023.

41. McGrath J, Iwazaki T, Eyles D, Burne T, Cui X, Ko P, et al. Protein expression in the nucleus accumbens of rats exposed to developmental vitamin D deficiency. PLoS ONE. 2008;3(6):e2383.

42. OpenAI. GPT-4. Technical report. 2023. arXiv:2303.08774.

43. Scao TL, Fan A, Akiki C, Pavlick E, Ilić S, Hesslow D, et al. BLOOM: A 176B-parameter open-access multilingual language model. 2022. arXiv:2211.05100.

44. Bibal A, Frénay B. Interpretability of machine learning models and representations: An introduction. In: Proceedings of the European Symposium on Artificial Neural Networks. i6doc.com; 2016. pp. 77–82.

45. Guidotti R, Monreale A, Ruggieri S, Turini F, Giannotti F, Pedreschi D. A survey of methods for explaining black box models. ACM Comput Surv. 2018;51(5):1–42.

46. Ekstrand MD, Das A, Burke R, Diaz F, et al. Fairness in information access systems. Found Trends Inf Retr. 2022;16(1–2):1–177.


Acknowledgements

Adrien Bibal would like to thank the Belgian American Educational Foundation (BAEF) and Nourah E. Salem would like to thank the National Institutes of Health for their support. The authors would like to thank Dr. Rebecca Marion for her precious insights on the subject of this paper and Dr. Mayla R. Boguslav for sharing various pieces of information related to the dataset. The authors would also like to thank (in alphabetical order) Dr. Mayla R. Boguslav, Sajjad Daneshgar, David Gamero del Castillo, Lucas Gillenwater, Dr. Mélanie Henry, Scott Mongold, Brook Santangelo and Anastasia Theodosiadou for their participation in the evaluation of the importance of context.

Adrien Bibal was supported by a Fellowship of the Belgian American Educational Foundation (BAEF) and Nourah E. Salem by the National Institutes of Health (grant number R01LM013400).

Author information

Daniel E. Acuna, Robin Burke and Lawrence E. Hunter are co-last authors.

Authors and Affiliations

University of Colorado Anschutz Medical Campus, Aurora, Colorado, USA

Adrien Bibal, Nourah M. Salem & Elizabeth K. White

University of Louvain, Louvain-la-Neuve, Belgium

Rémi Cardon

University of Colorado Boulder, Boulder, Colorado, USA

Daniel E. Acuna & Robin Burke

University of Chicago, Chicago, Illinois, USA

Lawrence E. Hunter


Contributions

A.B. studied the problem, and conceptualized and implemented the solution. A.B., N.M.S., D.E.A., R.B. and L.E.H studied the solution for the recommender system and its evaluation. A.B. and N.M.S. prepared the data and developed the evaluation for the recommender system. A.B., R.C., E.K.W. and L.E.H. studied the solution for the extraction of context and its evaluation. A.B. conducted and implemented the evaluation for the extraction of context. L.E.H., R.B. and D.E.A. supervised the research. A.B. wrote the main manuscript. All authors reviewed the manuscript.

Corresponding author

Correspondence to Adrien Bibal .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

The authors declare no competing interests.



About this article

Cite this article

Bibal, A., Salem, N., Cardon, R. et al. RecSOI: recommending research directions using statements of ignorance. J Biomed Semant 15, 2 (2024). https://doi.org/10.1186/s13326-024-00304-3


Received: 08 November 2023

Accepted: 23 March 2024

Published: 22 April 2024

DOI: https://doi.org/10.1186/s13326-024-00304-3


Keywords

  • Recommender systems
  • Biomedical literature
  • Natural Language Processing
  • Science of science

Journal of Biomedical Semantics

ISSN: 2041-1480



Recommendation Reports

© 2023 by Joseph M. Moxley, University of South Florida, and Julie Staggers, Washington State University

Recommendation reports are texts that advise audiences about the best ways to solve a problem. They are a type of formal report that is widely used across disciplines and professions. Subject matter experts aim to make recommendations based on the best available theory, research, and practice.

Different disciplines and professions have different research methods for assessing knowledge claims and defining knowledge. Thus, there is no one perfect way to write a recommendation report.

As always, when composing, and especially when planning your report, it is strategic to focus on your audience, rhetorical analysis, and rhetorical reasoning. Keep the focus on what you want your audience to feel, think, and do.

While writers, speakers, and knowledge workers may choose a variety of ways to organize their reports, below are some fairly traditional sections of formal recommendation reports:

  • Letter of transmittal
  • Problem Definition
  • Potential solutions to the problem
  • Empirical Research Methods used to investigate the problem
  • Recommendations
  • List of Illustrations

Report Body

Note: your specific rhetorical context will determine what headings you use in your Recommendation Report. That said, the following sections are fairly typical for this genre, and they are required, as appropriate, for this assignment.

Report back matter

Collect material for the appendices as you go. The report back matter will include:

  • Bibliography, sometimes referred to as Works Cited or References (use a citation format appropriate for your field: APA, MLA, Chicago, IEEE, etc.)
  • Appendices, if necessary (e.g., letters of support, financial projections)

Formatting and design

Employ a professional writing style throughout, including:

  • Page layout: appropriate to audience, purpose, and context; 8.5 x 11 inches with 1-inch margins is a fail-safe default.
  • Typography: choose business-friendly fonts appropriate to your audience, purpose, and context; Arial for headings and Times New Roman for body text is a safe, neutral default.
  • Headings and subheadings: use a numbered heading and subheading system, formatted using the Styles function of your word processor.
  • Bulleted and numbered lists: use lists formatted with the list buttons of your word processor, with a blank line before the first item and after the last item.
  • Graphics and figures: support data findings and arguments with appropriate visuals (charts, tables, graphics); include numbered titles and captions.
  • Page numbering: use lower-case Roman numerals for pages before the table of contents and Arabic numerals for the report body; no page number on the table of contents.

Additional Resources

  • Final Reports by Angela Eward-Mangione and Katherine McGee
  • Professional Writing Style


Quinquennial review of The Alan Turing Institute


This report presents the conclusions and recommendations of the quinquennial review panel to Engineering and Physical Sciences Research Council (EPSRC) on The Alan Turing Institute.

The quinquennial review was conducted by a panel of independent experts and has provided advice to both EPSRC and the institute on how to strengthen successful delivery of its strategy and help shape its future direction in a rapidly changing AI landscape.

It outlines the value of the institute’s activities and outputs during the first five years of its operation, and assesses its future strategy to continue providing value as a national institute.



Revisions to the NIH Fellowship Application and Review Process

NIH is revising the application and review criteria for fellowship applications, beginning with those submitted on or after January 25, 2025. This page describes the goals of the change and the implications for those writing and reviewing fellowship applications, and provides links to training and other resources.

Fellowship applications submitted on or after January 25, 2025 will follow revised application and review criteria. The goal of the changes is to improve the chances that the most promising fellowship candidates will be consistently identified by scientific review panels. The changes will:

  • Better focus reviewer attention on three key assessments: the fellowship candidate’s preparedness and potential, research training plan, and commitment to the candidate
  • Ensure a broad range of candidates and research training contexts can be recognized as meritorious by clarifying and simplifying the language in the application and review criteria
  • Reduce bias in review by emphasizing the commitment to the candidate without undue consideration of sponsor and institutional reputation

The following resources describe the revisions in more detail:

  • Background: more about the peer review process and why the application and review process is being revised for fellowship applications submitted for due dates on or after January 25, 2025.
  • Changes to Fellowship Applications: the changes being made to fellowship application forms and instructions.
  • Changes to the Fellowship Review Criteria: the changes being made to the review criteria for fellowship applications.
  • Candidate Guidance: guidance for candidates applying to fellowships.
  • Reviewer Guidance (coming in 2025): reviewers will be provided training and guidance materials in spring 2025, in time for the first review meetings using the revised criteria in the summer of 2025.
  • FAQs: answers to questions about the revisions to the fellowship application and review criteria.
  • Training and Resources: presentations, webinars, and other resources to help you understand the revised fellowship application and peer review process.
  • Notices, Statements, and Reports: links to Notices, blog posts, press releases, and other background reports on the revised criteria.



NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Kane RL, Guise JM, Hartman K, et al. Presentation of Future Research Needs [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2012 Apr. (Methods Future Research Needs Reports, No. 9.)


Recommendations

  • General Recommendations

The FRN, whether methodological or topic-specific in nature, should be presented as a top tier rather than a numerical list. The level of detail of the FRN description will depend on the state of the science, and EPCs should use their judgment based on their understanding of the topic and field.

Basic principles include:

  • Rationale for prioritization if possible.
  • Research design considerations for the FRN should be offered as suggestions only to avoid appearing overly prescriptive.

The workgroup recommended separating the presentation of two elements of potential future research: methods issues and specific topics. Methods issues tend to transcend specific topics. They should be ranked separately.

  • FRN Methods Framework

The workgroup identified a number of potential methodological issues that an FRN might address. Table 1 identifies elements that should be considered when addressing methods issues.

Table 1. Potential issues for methodological future research needs.


For each relevant issue, the FRN should address the elements and level of detail involved and explain how addressing the issue fills the evidence gap.

An example of a methodological gap identified in an FRN relates to treatments for localized prostate cancer. Because of the lengthy course of this disease, some randomized controlled trials have been published with high crossover rates, in which patients took the initiative to receive the treatment to which they were not randomized. This is understandable from the patient’s perspective but may greatly reduce the ability to draw conclusions from the trial. Research was therefore recommended on “Exploring methods to increase patient adherence with randomization scheme.” This might include surveys to help understand participants’ decision-making, and measurements of the effectiveness of approaches intended to reduce unplanned crossover to another arm. Research was also recommended to increase the use of statistical modeling and other advanced methods in studies on localized prostate cancer.

  • FRN Topics Framework

The steps in this process may be summarized as follows:

  • Include reason why the FRN is prioritized as high. May include criteria used (burden, feasibility, impact).
  • Organize by PICOTS.
  • Use analytic framework if possible, and adapt if needed. Consider including relevant issues such as subgroups, settings, and other contextual issues.
  • Level of detail of FRN description depends on the state of the science.

The presentation of specific research topics should include a rationale as well as an organized presentation of each topic. Provide a text description of why the prioritized questions are particularly urgent to answer, and make sure the criteria for choosing topics reflect that urgency. Proposed criteria include:

  • Feasibility of research
  • Likelihood results will affect practice/policy (for patients as well as others)

The PICOTS formulation should be used to present each recommended topical research question, with a separate table for each question. An example of using the PICOTS framework to structure future research recommendations comes from the report Future Research Needs for Attention Deficit Hyperactivity Disorder: Effectiveness of Treatment in At-Risk Preschoolers; Long-Term Effectiveness in All Ages; and Variability in Prevalence, Diagnosis, and Treatment 24 (see Table 2).

Table 2. Excerpt from “Future Research Needs for Attention Deficit Hyperactivity Disorder”.


Graphical frameworks are often used in grants to clearly communicate ideas, linkages, and assumptions to demonstrate that the research proposed is well-integrated, well-reasoned, and appropriately designed to advance a field of research. Analytic frameworks have been used to structure comparative and systematic reviews but were not intended to guide discussions of future research, although work is underway to adapt them to FRNs when feasible. However, analytic frameworks depict the population, interventions, comparators, outcomes, timing, and settings (PICOTS) which are often key elements in research study designs. Future research chapters of CE reviews often mention the need for more research on special populations (including racial, ethnic, and genetic variations), settings (e.g., community or geographic), contextual features such as patient-provider communication and decisionmaking, and influencing factors as important topics for future research. Thus an analytic framework may be an effective method to display these considerations and their linkages to interventions and their outcomes. The report should employ a conceptual model or logic diagram when appropriate; not all FRN topics may be suitable for this. For example, research questions may address prevalence in subgroups. The model should be based on current thinking and not limited to what was in the parent report. 16

An example of the use of a framework adapted for an FRN document comes from Future Research Needs To Reduce the Risk of Primary Breast Cancer in Women , 13 which used an analytic framework that incorporated the priority research area, research needs, and potential study designs ( Figure 2 ). This flowchart depicts an enhanced conceptual framework to illustrate priorities for future research to reduce the risk of primary breast cancer in women. The chart emphasizes high priority research domains and depicts “influencing factors” important to stakeholders and integral to patient-centered care: health system/organization, social, educational, economic, and environmental factors. A series of research questions are applied to these high priority research domains, with the overall goal of understanding which interventions are most effective to reduce risk of breast cancer for which patients under what circumstances.

Figure 2. Analytic framework from “Future Research Needs to Reduce the Risk of Primary Breast Cancer in Women”.

As part of the process of identifying the most salient FRNs, EPCs engage a wide variety of stakeholders, who may identify a broad list of new potential research areas. It should be noted that these new areas of research will likely not be based on an assessment of the evidence (or lack of evidence), because they fall outside the scope of the parent evidence report.

The level of detail in presenting recommendations will vary with the topic. Research in some areas may be sufficiently developed that the gap can be precisely defined (e.g., testing a specific intervention or comparing two specific interventions). In other areas, the suggestions may be couched more broadly in terms of types of questions or interventions. Likewise, the specificity of research design considerations may vary with the circumstance. In some cases, but not all, the appropriate design will be evident; there may be design tradeoffs or specific issues to consider. Include any research design considerations or comments if relevant. For example, it would be inappropriate to recommend that an RCT must be conducted with X number of people, but it would be fitting to suggest (as opposed to recommend) that future research should be appropriately powered to study X subpopulation. EPCs will need to decide when the research design issues are sufficiently clear that they can be urged.

  • Considerations for Research Designs

In looking at the entirety of the literature, evidence reviews may uncover important insights regarding study designs that would help advance the science. For example, in a review of the treatment of hip fracture, it became clear that the studies conducted by epidemiologists emphasized patient characteristics, and those by orthopedic surgeons emphasized treatments, but neither captured the whole terrain. FRN authors may want to consider including appropriate research considerations. FRN documents aim to delineate where there is an absence of studies and also to describe limitations of existing studies to the extent that researchers could improve upon those limitations. It can be a delicate balance to provide sufficient detail to be helpful to researchers while not being so prescriptive that research creativity and discovery are stifled. As opposed to identifying gaps in research, there may be important design issues to consider. When there are fatal flaws in prior study designs, future research needs documents should describe the flaws and potential design remedies in sufficient detail that interested researchers could improve their study designs accordingly. The amount of detail that should be shared in FRN documents will depend on the topic and specifics of the report. In fields with relatively little evidence, a broad translational table presenting the spectrum of study designs that would be acceptable to inform certain research gaps may be most useful. In other areas, where there is a substantial body of literature, a deeper description of important flaws in existing studies that are hampering the strength of certainty in results is appropriate.

A common issue that future research documents can inform across topics addresses the role for observational studies and comments about the context in which observational studies may be suitable or even preferable for certain needs. For example, while there may be randomized controlled trials of screening, the question about the adverse consequences of screening (or the long-term effects) may be best answered through an observational study. While each report will differ on the extent to which details about study designs can be discussed, it is the general intent to describe important flaws and provide insights into possible solutions while promoting the creativity that advances discovery.

An example of how study design can be addressed while leaving reasonable latitude can be found in the future study recommendations in Future Research Needs for Angiotensin-Converting Enzyme Inhibitors (ACEIs), Angiotensin II Receptor Antagonists (ARBs), or Direct Renin Inhibitors (DRI) for Treating Hypertension 25 ( Table 3 ).

Table 3. Excerpt from “Future Research Needs for Angiotensin-Converting Enzyme Inhibitors (ACEIs), Angiotensin II Receptor Antagonists (ARBs), or Direct Renin Inhibitors (DRI) for Treating Hypertension”.


For specific details related to considerations of research designs in FRN documents, please refer to the RTI EPC methods paper on Advantages and Disadvantages of Different Study Designs for Future Research Needs. 17


Commission receives scientific advice on Artificial Intelligence uptake in research and innovation

Today, the Scientific Advice Mechanism (SAM) released its independent policy recommendations on how to facilitate the uptake of Artificial Intelligence (AI) in research and innovation across the EU. The advice is non-binding but may feed into the overall Commission strategy for AI in research and innovation. It is underpinned by an evidence review report, also published today.

The Chair of the Group of Chief Scientific Advisors handed over the opinion to Margrethe Vestager, Executive Vice-President for a Europe Fit for the Digital Age, and Iliana Ivanova, Commissioner for Innovation, Research, Culture, Education and Youth.

Executive Vice-President Vestager said:

“There is no better way to boost the uptake of AI in scientific research than asking scientists about what they need the most. Not only are these recommendations concrete. Also they look at multiple aspects which AI and science need to serve us best: significant funding, skills, high quality data, computing power, and of course, guardrails to ensure we keep by the values we believe in.”

Commissioner Ivanova said:

“Artificial Intelligence means a revolution in research and innovation and will drive our future competitiveness. We need to ensure its responsible uptake by our researchers and innovators for the benefit of science but also of the economy and society as a whole. The work of the scientific advisors provides us with a wealth of solid evidence and practical advice to inform our future actions.”

The opinion addresses both the opportunities and challenges of using Artificial Intelligence in science. AI has the potential to revolutionise scientific discovery, accelerate research progress, boost innovation and improve researchers’ productivity. It can strengthen the EU’s position in science and ultimately contribute to solving global societal challenges. On the other hand, it also presents obstacles and risks, for example with obtaining transparent, reproducible results that are essential to robust science in an open society. Furthermore, the efficacy of many existing AI models is regarded as compromised by the quality of data used for their training.

The recommendations of the independent scientific advisors include:

  • Establishment of a European institute for AI in science: to counter the dominance of a limited number of corporations over AI infrastructure and to empower public research across diverse disciplines, the scientists advise the creation of a new institute. This facility would offer extensive computational resources, a sustainable cloud infrastructure, and specialised AI training for scientists.
  • High quality standards for AI systems (i.e., data, computing, code): AI-powered scientific research requires a vast amount of data. That data should be of high quality, responsibly collected, and meticulously curated, ensuring fair access for European researchers and innovators.
  • Transparency of public models: the EU should support transparent public AI models, helping, among other things, to increase the trustworthiness of AI and reinforce the reproducibility of research results.
  • AI tools and technologies specialised for scientific work: to help scientists enhance their overall efficiency, SAM advises the EU to support the development of AI tools and technologies specialised for scientific work (e.g., foundation models for science, scientific large language models, AI research assistants, and other ways to use AI technologies).
  • AI-powered research with major benefits for EU citizens: according to the advice, prioritising AI-powered research in areas like personalised healthcare and social cohesion, where data is abundant but difficult to interpret, would maximise benefits for EU citizens.
  • A human- and community-centric approach: the advisors recommend that the EU promote research into the philosophical, legal, and ethical dimensions of AI in science, ensuring respect for human rights, transparency, and accountability. Promoting “AI literacy” would not only enable everyone to enjoy the benefits of this technology, but also strengthen future European research by nurturing and retaining the best talent.

The SAM opinion was requested by Executive Vice-President Vestager in July 2023. It complements a range of material the Commission has developed on the use of AI in research and innovation, including the living Guidelines on the responsible use of generative AI, released on 20 March; the policy brief on AI in Science, released in December 2023; the foresight survey among ERC grantees using AI in their research, also released in December 2023; and the portfolio analysis of ERC projects using and developing AI, published in March 2024.

The Scientific Advice Mechanism provides independent scientific evidence and policy recommendations to the European institutions at the request of the College of Commissioners. It includes the Science Advice for Policy by European Academies (SAPEA) consortium, which gathers expertise from more than 100 institutions across Europe, and the Group of Chief Scientific Advisors (GCSA), who provide independent guidance informed by the evidence.

More Information

  • Scientific Advice Mechanism, Group of Chief Scientific Advisors, Successful and timely uptake of Artificial Intelligence in science in the EU, Scientific Opinion No. 15
  • Scientific Advice Mechanism, Science Advice for Policy by European Academies, Successful and timely uptake of Artificial Intelligence in science in the EU, Evidence Review Report
  • Living guidelines on the responsible use of generative AI in research
  • Successful and timely uptake of artificial intelligence in science in the EU – Publications Office of the EU (europa.eu)
  • AI in science – Harnessing the power of AI to accelerate discovery and foster innovation
  • Mapping ERC frontier research artificial intelligence
  • Use and impact of artificial intelligence in the scientific process – Foresight
