
Research Paper – Structure, Examples and Writing Guide

Research Paper

Definition:

A research paper is a written document that presents the author’s original research, analysis, and interpretation of a specific topic or issue.

It is typically based on empirical evidence and may involve qualitative or quantitative research methods, or a combination of both. The purpose of a research paper is to contribute new knowledge or insights to a particular field of study and to demonstrate the author’s understanding of the existing literature and theories related to the topic.

Structure of Research Paper

The structure of a research paper typically follows a standard format, consisting of several sections that convey specific information about the research study. The following is a detailed explanation of the structure of a research paper:

Title Page

The title page contains the title of the paper, the name(s) of the author(s), and the affiliation(s) of the author(s). It also includes the date of submission and, possibly, the name of the journal or conference where the paper is to be published.

Abstract

The abstract is a brief summary of the research paper, typically ranging from 100 to 250 words. It should include the research question, the methods used, the key findings, and the implications of the results. The abstract should be written in a concise and clear manner to allow readers to quickly grasp the essence of the research.

Introduction

The introduction section of a research paper provides background information about the research problem, the research question, and the research objectives. It also outlines the significance of the research, the research gap that it aims to fill, and the approach taken to address the research question. Finally, the introduction section ends with a clear statement of the research hypothesis or research question.

Literature Review

The literature review section of a research paper provides an overview of the existing literature on the topic of study. It includes a critical analysis and synthesis of the literature, highlighting the key concepts, themes, and debates. The literature review should also demonstrate the research gap and how the current study seeks to address it.

Methods

The methods section of a research paper describes the research design, the sample selection, the data collection and analysis procedures, and the statistical methods used to analyze the data. This section should provide sufficient detail for other researchers to replicate the study.

Results

The results section presents the findings of the research, using tables, graphs, and figures to illustrate the data. The findings should be presented in a clear and concise manner, with reference to the research question and hypothesis.

Discussion

The discussion section of a research paper interprets the findings and discusses their implications for the research question, the literature review, and the field of study. It should also address the limitations of the study and suggest future research directions.

Conclusion

The conclusion section summarizes the main findings of the study, restates the research question and hypothesis, and provides a final reflection on the significance of the research.

References

The references section provides a list of all the sources cited in the paper, following a specific citation style such as APA, MLA, or Chicago.

How to Write a Research Paper

You can write a research paper by following this guide:

  • Choose a Topic: The first step is to select a topic that interests you and is relevant to your field of study. Brainstorm ideas and narrow down to a research question that is specific and researchable.
  • Conduct a Literature Review: The literature review helps you identify the gap in the existing research and provides a basis for your research question. It also helps you to develop a theoretical framework and research hypothesis.
  • Develop a Thesis Statement: The thesis statement is the main argument of your research paper. It should be clear, concise and specific to your research question.
  • Plan your Research: Develop a research plan that outlines the methods, data sources, and data analysis procedures. This will help you to collect and analyze data effectively.
  • Collect and Analyze Data: Collect data using various methods such as surveys, interviews, observations, or experiments. Analyze data using statistical tools or other qualitative methods.
  • Organize your Paper: Organize your paper into sections such as Introduction, Literature Review, Methods, Results, Discussion, and Conclusion. Ensure that each section is coherent and follows a logical flow.
  • Write your Paper: Start by writing the introduction, followed by the literature review, methods, results, discussion, and conclusion. Ensure that your writing is clear, concise, and follows the required formatting and citation styles.
  • Edit and Proofread your Paper: Review your paper for grammar and spelling errors, and ensure that it is well-structured and easy to read. Ask someone else to review your paper to get feedback and suggestions for improvement.
  • Cite your Sources: Ensure that you properly cite all sources used in your research paper. This is essential for giving credit to the original authors and avoiding plagiarism.

Research Paper Example

Note : The below example research paper is for illustrative purposes only and is not an actual research paper. Actual research papers may have different structures, contents, and formats depending on the field of study, research question, data collection and analysis methods, and other factors. Students should always consult with their professors or supervisors for specific guidelines and expectations for their research papers.

Research paper example for students:

Title: The Impact of Social Media on Mental Health among Young Adults

Abstract: This study aims to investigate the impact of social media use on the mental health of young adults. A literature review was conducted to examine the existing research on the topic. A survey was then administered to 200 university students to collect data on their social media use, mental health status, and perceived impact of social media on their mental health. The results showed that social media use is positively associated with depression, anxiety, and stress. The study also found that social comparison, cyberbullying, and FOMO (Fear of Missing Out) are significant predictors of mental health problems among young adults.

Introduction: Social media has become an integral part of modern life, particularly among young adults. While social media has many benefits, including increased communication and social connectivity, it has also been associated with negative outcomes, such as addiction, cyberbullying, and mental health problems. This study aims to investigate the impact of social media use on the mental health of young adults.

Literature Review: The literature review highlights the existing research on the impact of social media use on mental health. The review shows that social media use is associated with depression, anxiety, stress, and other mental health problems. The review also identifies the factors that contribute to the negative impact of social media, including social comparison, cyberbullying, and FOMO.

Methods: A survey was administered to 200 university students to collect data on their social media use, mental health status, and perceived impact of social media on their mental health. The survey included questions on social media use, mental health status (measured using the DASS-21), and perceived impact of social media on their mental health. Data were analyzed using descriptive statistics and regression analysis.
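As an aside: if you are wondering how an analysis like the one described above might be run in practice, here is a minimal sketch in Python using pandas and statsmodels. The file name, column names, and DASS-21 subscale variables are hypothetical placeholders, not part of the example study.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data: one row per participant, with columns for daily
# social media use, the three predictors, and DASS-21 subscale scores.
df = pd.read_csv("social_media_survey.csv")

# Descriptive statistics for all numeric variables.
print(df.describe())

# Regress each DASS-21 subscale on social media use and the three predictors.
for outcome in ["depression", "anxiety", "stress"]:
    model = smf.ols(
        f"{outcome} ~ daily_use_hours + social_comparison + cyberbullying + fomo",
        data=df,
    ).fit()
    print(f"\n--- {outcome} ---")
    print(model.summary())
```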

Results: The results showed that social media use is positively associated with depression, anxiety, and stress. The study also found that social comparison, cyberbullying, and FOMO are significant predictors of mental health problems among young adults.

Discussion: The study’s findings suggest that social media use has a negative impact on the mental health of young adults. The study highlights the need for interventions that address the factors contributing to the negative impact of social media, such as social comparison, cyberbullying, and FOMO.

Conclusion: In conclusion, social media use has a significant impact on the mental health of young adults. The study’s findings underscore the need for interventions that promote healthy social media use and address the negative outcomes associated with social media use. Future research can explore the effectiveness of interventions aimed at reducing the negative impact of social media on mental health. Additionally, longitudinal studies can investigate the long-term effects of social media use on mental health.

Limitations: The study has some limitations, including the use of self-report measures and a cross-sectional design. The use of self-report measures may result in biased responses, and a cross-sectional design limits the ability to establish causality.

Implications: The study’s findings have implications for mental health professionals, educators, and policymakers. Mental health professionals can use the findings to develop interventions that address the negative impact of social media use on mental health. Educators can incorporate social media literacy into their curriculum to promote healthy social media use among young adults. Policymakers can use the findings to develop policies that protect young adults from the negative outcomes associated with social media use.

References:

  • Twenge, J. M., & Campbell, W. K. (2019). Associations between screen time and lower psychological well-being among children and adolescents: Evidence from a population-based study. Preventive medicine reports, 15, 100918.
  • Primack, B. A., Shensa, A., Escobar-Viera, C. G., Barrett, E. L., Sidani, J. E., Colditz, J. B., … & James, A. E. (2017). Use of multiple social media platforms and symptoms of depression and anxiety: A nationally-representative study among US young adults. Computers in Human Behavior, 69, 1-9.
  • Van der Meer, T. G., & Verhoeven, J. W. (2017). Social media and its impact on academic performance of students. Journal of Information Technology Education: Research, 16, 383-398.

Appendix: The survey used in this study is provided below.

Social Media and Mental Health Survey

  • How often do you use social media per day?
      • Less than 30 minutes
      • 30 minutes to 1 hour
      • 1 to 2 hours
      • 2 to 4 hours
      • More than 4 hours
  • Which social media platforms do you use?
      • Others (Please specify)
  • How often do you experience the following on social media?
      • Social comparison (comparing yourself to others)
      • Cyberbullying
      • Fear of Missing Out (FOMO)
  • Have you ever experienced any of the following mental health problems in the past month?
  • Do you think social media use has a positive or negative impact on your mental health?
      • Very positive
      • Somewhat positive
      • Somewhat negative
      • Very negative
  • In your opinion, which factors contribute to the negative impact of social media on mental health?
      • Social comparison
  • In your opinion, what interventions could be effective in reducing the negative impact of social media on mental health?
      • Education on healthy social media use
      • Counseling for mental health problems caused by social media
      • Social media detox programs
      • Regulation of social media use

Thank you for your participation!

Applications of Research Paper

Research papers have several applications in various fields, including:

  • Advancing knowledge: Research papers contribute to the advancement of knowledge by generating new insights, theories, and findings that can inform future research and practice. They help to answer important questions, clarify existing knowledge, and identify areas that require further investigation.
  • Informing policy: Research papers can inform policy decisions by providing evidence-based recommendations for policymakers. They can help to identify gaps in current policies, evaluate the effectiveness of interventions, and inform the development of new policies and regulations.
  • Improving practice: Research papers can improve practice by providing evidence-based guidance for professionals in various fields, including medicine, education, business, and psychology. They can inform the development of best practices, guidelines, and standards of care that can improve outcomes for individuals and organizations.
  • Educating students: Research papers are often used as teaching tools in universities and colleges to educate students about research methods, data analysis, and academic writing. They help students to develop critical thinking skills, research skills, and communication skills that are essential for success in many careers.
  • Fostering collaboration: Research papers can foster collaboration among researchers, practitioners, and policymakers by providing a platform for sharing knowledge and ideas. They can facilitate interdisciplinary collaborations and partnerships that can lead to innovative solutions to complex problems.

When to Write Research Paper

Research papers are typically written when a person has completed a research project or when they have conducted a study and have obtained data or findings that they want to share with the academic or professional community. Research papers are usually written in academic settings, such as universities, but they can also be written in professional settings, such as research organizations, government agencies, or private companies.

Here are some common situations where a person might need to write a research paper:

  • For academic purposes: Students in universities and colleges are often required to write research papers as part of their coursework, particularly in the social sciences, natural sciences, and humanities. Writing research papers helps students to develop research skills, critical thinking skills, and academic writing skills.
  • For publication: Researchers often write research papers to publish their findings in academic journals or to present their work at academic conferences. Publishing research papers is an important way to disseminate research findings to the academic community and to establish oneself as an expert in a particular field.
  • To inform policy or practice: Researchers may write research papers to inform policy decisions or to improve practice in various fields. Research findings can be used to inform the development of policies, guidelines, and best practices that can improve outcomes for individuals and organizations.
  • To share new insights or ideas: Researchers may write research papers to share new insights or ideas with the academic or professional community. They may present new theories, propose new research methods, or challenge existing paradigms in their field.

Purpose of Research Paper

The purpose of a research paper is to present the results of a study or investigation in a clear, concise, and structured manner. Research papers are written to communicate new knowledge, ideas, or findings to a specific audience, such as researchers, scholars, practitioners, or policymakers. The primary purposes of a research paper are:

  • To contribute to the body of knowledge: Research papers aim to add new knowledge or insights to a particular field or discipline. They do this by reporting the results of empirical studies, reviewing and synthesizing existing literature, proposing new theories, or providing new perspectives on a topic.
  • To inform or persuade: Research papers are written to inform or persuade the reader about a particular issue, topic, or phenomenon. They present evidence and arguments to support their claims and seek to persuade the reader of the validity of their findings or recommendations.
  • To advance the field: Research papers seek to advance the field or discipline by identifying gaps in knowledge, proposing new research questions or approaches, or challenging existing assumptions or paradigms. They aim to contribute to ongoing debates and discussions within a field and to stimulate further research and inquiry.
  • To demonstrate research skills: Research papers demonstrate the author’s research skills, including their ability to design and conduct a study, collect and analyze data, and interpret and communicate findings. They also demonstrate the author’s ability to critically evaluate existing literature, synthesize information from multiple sources, and write in a clear and structured manner.

Characteristics of Research Paper

Research papers have several characteristics that distinguish them from other forms of academic or professional writing. Here are some common characteristics of research papers:

  • Evidence-based: Research papers are based on empirical evidence, which is collected through rigorous research methods such as experiments, surveys, observations, or interviews. They rely on objective data and facts to support their claims and conclusions.
  • Structured and organized: Research papers have a clear and logical structure, with sections such as introduction, literature review, methods, results, discussion, and conclusion. They are organized in a way that helps the reader to follow the argument and understand the findings.
  • Formal and objective: Research papers are written in a formal and objective tone, with an emphasis on clarity, precision, and accuracy. They avoid subjective language or personal opinions and instead rely on objective data and analysis to support their arguments.
  • Citations and references: Research papers include citations and references to acknowledge the sources of information and ideas used in the paper. They use a specific citation style, such as APA, MLA, or Chicago, to ensure consistency and accuracy.
  • Peer-reviewed: Research papers are often peer-reviewed, which means they are evaluated by other experts in the field before they are published. Peer-review ensures that the research is of high quality, meets ethical standards, and contributes to the advancement of knowledge in the field.
  • Objective and unbiased: Research papers strive to be objective and unbiased in their presentation of the findings. They avoid personal biases or preconceptions and instead rely on the data and analysis to draw conclusions.

Advantages of Research Paper

Research papers have many advantages, both for the individual researcher and for the broader academic and professional community. Here are some advantages of research papers:

  • Contribution to knowledge: Research papers contribute to the body of knowledge in a particular field or discipline. They add new information, insights, and perspectives to existing literature and help advance the understanding of a particular phenomenon or issue.
  • Opportunity for intellectual growth: Research papers provide an opportunity for intellectual growth for the researcher. They require critical thinking, problem-solving, and creativity, which can help develop the researcher’s skills and knowledge.
  • Career advancement: Research papers can help advance the researcher’s career by demonstrating their expertise and contributions to the field. They can also lead to new research opportunities, collaborations, and funding.
  • Academic recognition: Research papers can lead to academic recognition in the form of awards, grants, or invitations to speak at conferences or events. They can also contribute to the researcher’s reputation and standing in the field.
  • Impact on policy and practice: Research papers can have a significant impact on policy and practice. They can inform policy decisions, guide practice, and lead to changes in laws, regulations, or procedures.
  • Advancement of society: Research papers can contribute to the advancement of society by addressing important issues, identifying solutions to problems, and promoting social justice and equality.

Limitations of Research Paper

Research papers also have some limitations that should be considered when interpreting their findings or implications. Here are some common limitations of research papers:

  • Limited generalizability: Research findings may not be generalizable to other populations, settings, or contexts. Studies often use specific samples or conditions that may not reflect the broader population or real-world situations.
  • Potential for bias: Research papers may be biased due to factors such as sample selection, measurement errors, or researcher biases. It is important to evaluate the quality of the research design and methods used to ensure that the findings are valid and reliable.
  • Ethical concerns: Research papers may raise ethical concerns, such as the use of vulnerable populations or invasive procedures. Researchers must adhere to ethical guidelines and obtain informed consent from participants to ensure that the research is conducted in a responsible and respectful manner.
  • Limitations of methodology: Research papers may be limited by the methodology used to collect and analyze data. For example, certain research methods may not capture the complexity or nuance of a particular phenomenon, or may not be appropriate for certain research questions.
  • Publication bias: Research papers may be subject to publication bias, where positive or significant findings are more likely to be published than negative or non-significant findings. This can skew the overall findings of a particular area of research.
  • Time and resource constraints: Research papers may be limited by time and resource constraints, which can affect the quality and scope of the research. Researchers may not have access to certain data or resources, or may be unable to conduct long-term studies due to practical limitations.


Types of research papers


There are multiple different types of research papers. It is important to know which type of research paper is required for your assignment, as each type of research paper requires different preparation. Below is a list of the most common types of research papers.


Analytical research paper

In an analytical research paper you:

  • pose a question
  • collect relevant data from other researchers
  • analyze their different viewpoints

You focus on the findings and conclusions of other researchers and then make a personal conclusion about the topic. It is important to stay neutral and not show your own negative or positive position on the matter.

Argumentative or persuasive paper

The argumentative paper presents two sides of a controversial issue in one paper. It aims to win the reader over to your point of view.

You should include and cite the findings and arguments of different researchers on both sides of the issue, but then favor one side over the other and try to persuade the reader of your position. Your arguments should not be too emotional, though; they still need to be supported with logical facts and statistical data.

Tip: Avoid expressing too much emotion in a persuasive paper.

Definition paper

The definition paper solely describes facts or objective arguments without using any personal emotion or opinion of the author. Its only purpose is to provide information. You should include facts from a variety of sources, but leave those facts unanalyzed.

Compare and contrast paper

Compare and contrast papers are used to analyze the difference between two subjects.

Make sure to sufficiently describe both sides in the paper, and then move on to comparing and contrasting them and supporting one thesis over the other.

Cause and effect paper

Cause and effect papers are usually the first types of research papers that high school and college students write. They trace probable or expected results from a specific action and answer the main questions "Why?" and "What?", which reflect causes and effects respectively.

In business and education fields, cause and effect papers will help trace a range of results that could arise from a particular action or situation.

Interpretative paper

An interpretative paper requires you to use knowledge that you have gained from a particular case study, for example, a legal situation in law studies. You need to write the paper based on an established theoretical framework and use valid supporting data to back up your statement and conclusion.

Experimental research paper

This type of research paper describes a particular experiment in detail. It is common in fields like biology, chemistry, or physics.

Experiments are aimed at explaining a certain outcome or phenomenon with certain actions. You need to describe your experiment with supporting data and then analyze it sufficiently.

Survey research paper

This type of research paper requires conducting a survey that involves asking questions of respondents. The researcher then collects all the information from the survey and analyzes it to present in the research paper.




How To Write A Research Paper


For many students, crafting a strong research paper from scratch can feel like a daunting task – and rightly so! In this post, we’ll unpack what a research paper is, what it needs to do, and how to write one – in three easy steps. 🙂

Overview: Writing A Research Paper

  • What (exactly) is a research paper?
  • How to write a research paper
  • Stage 1: Topic & literature search
  • Stage 2: Structure & outline
  • Stage 3: Iterative writing
  • Key takeaways

Let’s start by asking the most important question: “What is a research paper?”

Simply put, a research paper is a scholarly written work where the writer (that’s you!) answers a specific question (this is called a research question) through evidence-based arguments. Evidence-based is the keyword here. In other words, a research paper is different from an essay or other writing assignments that draw from the writer’s personal opinions or experiences. With a research paper, it’s all about building your arguments based on evidence (we’ll talk more about that evidence a little later).

Now, it’s worth noting that there are many different types of research papers, including analytical papers (the type I just described), argumentative papers, and interpretative papers. Here, we’ll focus on analytical papers, as these are some of the most common – but if you’re keen to learn about other types of research papers, be sure to check out the rest of the blog.

With that basic foundation laid, let’s get down to business and look at how to write a research paper.


Overview: The 3-Stage Process

While there are, of course, many potential approaches you can take to write a research paper, there are typically three stages to the writing process. So, in this tutorial, we’ll present a straightforward three-step process that we use when working with students at Grad Coach.

These three steps are:

  • Finding a research topic and reviewing the existing literature
  • Developing a provisional structure and outline for your paper, and
  • Writing up your initial draft and then refining it iteratively

Let’s dig into each of these.


Step 1: Find a topic and review the literature

As we mentioned earlier, in a research paper, you, as the researcher, will try to answer a question. More specifically, that’s called a research question, and it sets the direction of your entire paper. What’s important to understand though is that you’ll need to answer that research question with the help of high-quality sources – for example, journal articles, government reports, case studies, and so on. We’ll circle back to this in a minute.

The first stage of the research process is deciding on what your research question will be and then reviewing the existing literature (in other words, past studies and papers) to see what they say about that specific research question. In some cases, your professor may provide you with a predetermined research question (or set of questions). However, in many cases, you’ll need to find your own research question within a certain topic area.

Finding a strong research question hinges on identifying a meaningful research gap – in other words, an area that’s lacking in existing research. There’s a lot to unpack here, so if you wanna learn more, check out the plain-language explainer video below.

Once you’ve figured out which question (or questions) you’ll attempt to answer in your research paper, you’ll need to do a deep dive into the existing literature – this is called a “literature search”. Again, there are many ways to go about this, but your most likely starting point will be Google Scholar.

If you’re new to Google Scholar, think of it as Google for the academic world. You can start by simply entering a few different keywords that are relevant to your research question and it will then present a host of articles for you to review. What you want to pay close attention to here is the number of citations for each paper – the more citations a paper has, the more credible it is (generally speaking – there are some exceptions, of course).


Ideally, what you’re looking for are well-cited papers that are highly relevant to your topic. That said, keep in mind that citations are a cumulative metric, so older papers will often have more citations than newer papers – just because they’ve been around for longer. So, don’t fixate on this metric in isolation – relevance and recency are also very important.

Beyond Google Scholar, you’ll also definitely want to check out academic databases and aggregators such as ScienceDirect, PubMed, JSTOR and so on. These will often overlap with the results that you find in Google Scholar, but they can also reveal some hidden gems – so, be sure to check them out.

Once you’ve worked your way through all the literature, you’ll want to catalogue all this information in some sort of spreadsheet so that you can easily recall who said what, when and within what context. If you’d like, we’ve got a free literature spreadsheet that helps you do exactly that.
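If you prefer to build that catalogue programmatically rather than in a spreadsheet app, here is a minimal sketch in Python using pandas. The entry and column names are purely illustrative placeholders, not a reference to any particular template.

```python
import pandas as pd

# Illustrative literature catalogue: one row per source, capturing
# who said what, when, and in what context.
catalogue = pd.DataFrame([
    {
        "author": "Smith & Lee",  # hypothetical entry
        "year": 2021,
        "title": "Example study title",
        "method": "Survey, n=300",
        "key_finding": "X is positively associated with Y",
        "relevance": "Directly addresses my research question",
    },
])

# Sort so the most recent work is easy to find, then save to CSV,
# which opens cleanly in Excel or Google Sheets.
catalogue = catalogue.sort_values("year", ascending=False)
catalogue.to_csv("literature_catalogue.csv", index=False)
print(catalogue.head())
```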


Step 2: Develop a structure and outline

With your research question pinned down and your literature digested and catalogued, it’s time to move on to planning your actual research paper.

It might sound obvious, but it’s really important to have some sort of rough outline in place before you start writing your paper. So often, we see students eagerly rushing into the writing phase, only to land up with a disjointed research paper that rambles on in multiple directions.

Now, the secret here is to not get caught up in the fine details. Realistically, all you need at this stage is a bullet-point list that describes (in broad strokes) what you’ll discuss and in what order. It’s also useful to remember that you’re not glued to this outline – in all likelihood, you’ll chop and change some sections once you start writing, and that’s perfectly okay. What’s important is that you have some sort of roadmap in place from the start.


At this stage you might be wondering, “But how should I structure my research paper?”. Well, there’s no one-size-fits-all solution here, but in general, a research paper will consist of a few relatively standardised components:

  • Introduction
  • Literature review
  • Methodology
  • Results/analysis
  • Discussion
  • Conclusion

Let’s take a look at each of these.

First up is the introduction section. As the name suggests, the purpose of the introduction is to set the scene for your research paper. There are usually (at least) four ingredients that go into this section – these are the background to the topic, the research problem and resultant research question, and the justification or rationale. If you’re interested, the video below unpacks the introduction section in more detail.

The next section of your research paper will typically be your literature review. Remember all that literature you worked through earlier? Well, this is where you’ll present your interpretation of all that content. You’ll do this by writing about recent trends, developments, and arguments within the literature – but more specifically, those that are relevant to your research question. The literature review can oftentimes seem a little daunting, even to seasoned researchers, so be sure to check out our extensive collection of literature review content here.

With the introduction and lit review out of the way, the next section of your paper is the research methodology. In a nutshell, the methodology section should describe to your reader what you did (beyond just reviewing the existing literature) to answer your research question. For example, what data did you collect, how did you collect that data, how did you analyse that data and so on? For each choice, you’ll also need to justify why you chose to do it that way, and what the strengths and weaknesses of your approach were.

Now, it’s worth mentioning that for some research papers, this aspect of the project may be a lot simpler. For example, you may only need to draw on secondary sources (in other words, existing data sets). In some cases, you may just be asked to draw your conclusions from the literature search itself (in other words, there may be no data analysis at all). But, if you are required to collect and analyse data, you’ll need to pay a lot of attention to the methodology section. The video below provides an example of what the methodology section might look like.

By this stage of your paper, you will have explained what your research question is, what the existing literature has to say about that question, and how you analysed additional data to try to answer your question. So, the natural next step is to present your analysis of that data. This section is usually called the “results” or “analysis” section and this is where you’ll showcase your findings.

Depending on your school’s requirements, you may need to present and interpret the data in one section – or you might split the presentation and the interpretation into two sections. In the latter case, your “results” section will just describe the data, and the “discussion” is where you’ll interpret that data and explicitly link your analysis back to your research question. If you’re not sure which approach to take, check in with your professor or take a look at past papers to see what the norms are for your programme.

Alright – once you’ve presented and discussed your results, it’s time to wrap it up. This usually takes the form of the “conclusion” section. In the conclusion, you’ll need to highlight the key takeaways from your study and close the loop by explicitly answering your research question. Again, the exact requirements here will vary depending on your programme (and you may not even need a conclusion section at all) – so be sure to check with your professor if you’re unsure.

Step 3: Write and refine

Finally, it’s time to get writing. All too often though, students hit a brick wall right about here… So, how do you avoid this happening to you?

Well, there’s a lot to be said when it comes to writing a research paper (or any sort of academic piece), but we’ll share three practical tips to help you get started.

First and foremost, it’s essential to approach your writing as an iterative process. In other words, you need to start with a really messy first draft and then polish it over multiple rounds of editing. Don’t waste your time trying to write a perfect research paper in one go. Instead, take the pressure off yourself by adopting an iterative approach.

Secondly, it’s important to always lean towards critical writing, rather than descriptive writing. What does this mean? Well, at the simplest level, descriptive writing focuses on the “what”, while critical writing digs into the “so what” – in other words, the implications. If you’re not familiar with these two types of writing, don’t worry! You can find a plain-language explanation here.

Last but not least, you’ll need to get your referencing right. Specifically, you’ll need to provide credible, correctly formatted citations for the statements you make. We see students making referencing mistakes all the time and it costs them dearly. The good news is that you can easily avoid this by using a simple reference manager. If you don’t have one, check out our video about Mendeley, an easy (and free) reference management tool that you can start using today.

Recap: Key Takeaways

We’ve covered a lot of ground here. To recap, the three steps to writing a high-quality research paper are:

  • To choose a research question and review the literature
  • To plan your paper structure and draft an outline
  • To take an iterative approach to writing, focusing on critical writing and strong referencing

Remember, this is just a big-picture overview of the research paper development process and there’s a lot more nuance to unpack. So, be sure to grab a copy of our free research paper template to learn more about how to write a research paper.

How to write a research paper


With proper planning, knowledge, and framework, completing a research paper can be a fulfilling and exciting experience. 

Though it might initially sound slightly intimidating, this guide will help you embrace the challenge. 

By documenting your findings, you can inspire others and make a difference in your field. Here's how you can make your research paper unique and comprehensive.

What is a research paper?

Research papers allow you to demonstrate your knowledge and understanding of a particular topic. These papers are usually lengthier and more detailed than typical essays, requiring deeper insight into the chosen topic.

To write a research paper, you must first choose a topic that interests you and is relevant to the field of study. Once you’ve selected your topic, gathering as many relevant resources as possible, including books, scholarly articles, credible websites, and other academic materials, is essential. You must then read and analyze these sources, summarizing their key points and identifying gaps in the current research.

You can formulate your ideas and opinions once you thoroughly understand the existing research. To get there might involve conducting original research, gathering data, or analyzing existing data sets. It could also involve presenting an original argument or interpretation of the existing research.

Writing a successful research paper involves presenting your findings clearly and engagingly, which might involve using charts, graphs, or other visual aids to present your data and using concise language to explain your findings. You must also ensure your paper adheres to relevant academic formatting guidelines, including proper citations and references.

Overall, writing a research paper requires a significant amount of time, effort, and attention to detail. However, it is also an enriching experience that allows you to delve deeply into a subject that interests you and contribute to the existing body of knowledge in your chosen field.

How long should a research paper be?

Research papers are deep dives into a topic. Therefore, they tend to be longer pieces of work than essays or opinion pieces. 

However, a suitable length depends on the complexity of the topic and your level of expertise. For instance, are you a first-year college student or an experienced professional? 

Also, remember that the best research papers provide valuable information for the benefit of others. Therefore, the quality of information matters most, not necessarily the length. Being concise is valuable.

Following these best practice steps will help keep your process simple and productive:

1. Gaining a deep understanding of any expectations

Before diving into your intended topic or beginning the research phase, take some time to orient yourself. Suppose there’s a specific topic assigned to you. In that case, it’s essential to deeply understand the question and organize your planning and approach in response. Pay attention to the key requirements and ensure you align your writing accordingly. 

This preparation step entails:

  • Deeply understanding the task or assignment
  • Being clear about the expected format and length
  • Familiarizing yourself with the citation and referencing requirements
  • Understanding any defined limits for your research contribution
  • Where applicable, speaking to your professor or research supervisor for further clarification

2. Choose your research topic

Select a research topic that aligns with both your interests and available resources. Ideally, focus on a field where you possess significant experience and analytical skills. In crafting your research paper, it's crucial to go beyond summarizing existing data and contribute fresh insights to the chosen area.

Consider narrowing your focus to a specific aspect of the topic. For example, if exploring the link between technology and mental health, delve into how social media use during the pandemic impacts the well-being of college students. Conducting interviews and surveys with students could provide firsthand data and unique perspectives, adding substantial value to the existing knowledge.

When finalizing your topic, adhere to legal and ethical norms in the relevant area (this ensures the integrity of your research, protects participants' rights, upholds intellectual property standards, and ensures transparency and accountability). Following these principles not only maintains the credibility of your work but also builds trust within your academic or professional community.

For instance, in writing about medical research, consider legal and ethical norms, including patient confidentiality laws and informed consent requirements. Similarly, if analyzing user data on social media platforms, be mindful of data privacy regulations, ensuring compliance with laws governing personal information collection and use. Aligning with legal and ethical standards not only avoids potential issues but also underscores the responsible conduct of your research.

3. Gather preliminary research

Once you’ve landed on your topic, it’s time to explore it further. You’ll want to discover more about available resources and existing research relevant to your assignment at this stage. 

This exploratory phase is vital as you may discover issues with your original idea or realize you have insufficient resources to explore the topic effectively. This key bit of groundwork allows you to redirect your research topic in a different, more feasible, or more relevant direction if necessary. 

Spending ample time at this stage ensures you gather everything you need, learn as much as you can about the topic, and discover gaps where the topic has yet to be sufficiently covered, offering an opportunity to research it further. 

4. Define your research question

To produce a well-structured and focused paper, it is imperative to formulate a clear and precise research question that will guide your work. Your research question must be informed by the existing literature and tailored to the scope and objectives of your project. By refining your focus, you can produce a thoughtful and engaging paper that effectively communicates your ideas to your readers.

5. Write a thesis statement

A thesis statement is a one-to-two-sentence summary of your research paper's main argument or direction. It serves as an overall guide to summarize the overall intent of the research paper for you and anyone wanting to know more about the research.

A strong thesis statement is:

  • Concise and clear: Explain your case in simple sentences (avoid covering multiple ideas). It might help to think of this section as an elevator pitch.
  • Specific: Ensure that there is no ambiguity in your statement and that your summary covers the points argued in the paper.
  • Debatable: A thesis statement puts forward a specific argument––it is not merely a statement but a debatable point that can be analyzed and discussed.

Here are three thesis statement examples from different disciplines:

Psychology thesis example: "We're studying adults aged 25-40 to see if taking short breaks for mindfulness can help with stress. Our goal is to find practical ways to manage anxiety better."

Environmental science thesis example: "This research paper looks into how having more city parks might make the air cleaner and keep people healthier. I want to find out if more green spaces means breathing fewer carcinogens in big cities."

UX research thesis example: "This study focuses on improving mobile banking for older adults using ethnographic research, eye-tracking analysis, and interactive prototyping. We investigate the usefulness of eye-tracking analysis with older individuals, aiming to spark debate and offer fresh perspectives on UX design and digital inclusivity for the aging population."

6. Conduct in-depth research

A research paper doesn’t just include research that you’ve uncovered from other papers and studies but your fresh insights, too. You will seek to become an expert on your topic, understanding the nuances in the current leading theories. You will analyze existing research and add your thinking and discoveries.

It’s crucial to conduct well-designed research that is rigorous, robust, and based on reliable sources. If a research paper lacks evidence or is biased, it won’t benefit the academic community or the general public. Therefore, examining the topic thoroughly and furthering its understanding through high-quality research is essential. That usually means conducting new research. Depending on the area under investigation, you may conduct surveys, interviews, diary studies, or observational research to uncover new insights or bolster current claims.

7. Determine supporting evidence

Not every piece of research you’ve discovered will be relevant to your research paper. It’s important to categorize the most meaningful evidence to include alongside your discoveries. You should also include evidence that doesn’t support your claims, to avoid exclusion bias and ensure a fair research paper.

8. Write a research paper outline

Before diving in and writing the whole paper, start with an outline. It will help you to see if more research is needed, and it will provide a framework by which to write a more compelling paper. Your supervisor may even request an outline to approve before beginning to write the first draft of the full paper. An outline will include your topic, thesis statement, key headings, short summaries of the research, and your arguments.

9. Write your first draft

Once you feel confident about your outline and sources, it’s time to write your first draft. While penning a long piece of content can be intimidating, if you’ve laid the groundwork, you will have a structure to help you move steadily through each section. To keep up motivation and inspiration, it’s often best to keep the pace quick. Stopping for long periods can interrupt your flow and make jumping back in harder than writing when things are fresh in your mind.

10. Cite your sources correctly

It's always a good practice to give credit where it's due, and the same goes for citing any works that have influenced your paper. Building your arguments on credible references adds value and authenticity to your research. In the formatting guidelines section, you’ll find an overview of different citation styles (MLA, CMOS, or APA), which will help you meet any publishing or academic requirements and strengthen your paper's credibility. It is essential to follow the guidelines provided by your school or the publication you are submitting to ensure the accuracy and relevance of your citations.

11. Ensure your work is original

It is crucial to ensure the originality of your paper, as plagiarism can lead to serious consequences. To avoid plagiarism, you should use proper paraphrasing and quoting techniques. Paraphrasing is rewriting a text in your own words while maintaining the original meaning. Quoting involves directly citing the source. Giving credit to the original author or source is essential whenever you borrow their ideas or words. You can also use plagiarism detection tools such as Scribbr or Grammarly to check the originality of your paper. These tools compare your draft writing to a vast database of online sources. If you find any accidental plagiarism, you should correct it immediately by rephrasing or citing the source.

12. Revise, edit, and proofread

One of the essential qualities of excellent writers is their ability to understand the importance of editing and proofreading. Even though it's tempting to call it a day once you've finished your writing, editing your work can significantly improve its quality. It's natural to overlook the weaker areas when you've just finished writing a paper. Therefore, it's best to take a break of a day or two, or even up to a week, to refresh your mind. This way, you can return to your work with a new perspective. After some breathing room, you can spot any inconsistencies, spelling and grammar errors, typos, or missing citations and correct them. 

The best research paper format

The format of your research paper should align with the requirements set forth by your college, school, or target publication. 

There is no one “best” format, per se. Depending on the stated requirements, you may need to include the following elements:

  • Title page: The title page of a research paper typically includes the title, author's name, and institutional affiliation and may include additional information such as a course name or instructor's name.
  • Table of contents: Include a table of contents to make it easy for readers to find specific sections of your paper.
  • Abstract: The abstract is a summary of the purpose of the paper.
  • Methods: In this section, describe the research methods used. This may include collecting data, conducting interviews, or doing field research.
  • Results: Summarize the conclusions you drew from your research in this section.
  • Discussion: In this section, discuss the implications of your research. Be sure to mention any significant limitations to your approach and suggest areas for further research.
  • Tables, charts, and illustrations: Use tables, charts, and illustrations to help convey your research findings and make them easier to understand.
  • Works cited or reference page: Include a works cited or reference page to give credit to the sources that you used to conduct your research.
  • Bibliography: Provide a list of all the sources you consulted while conducting your research.
  • Dedication and acknowledgments: Optionally, you may include a dedication and acknowledgments section to thank individuals who helped you with your research.

General style and formatting guidelines

Formatting your research paper means you can submit it to your college, journal, or other publications in compliance with their criteria.

Research papers tend to follow the American Psychological Association (APA), Modern Language Association (MLA), or Chicago Manual of Style (CMOS) guidelines.

Here’s how each style guide is typically used:

Chicago Manual of Style (CMOS):

CMOS is a versatile style guide used for various types of writing. It's known for its flexibility and use in the humanities. CMOS provides guidelines for citations, formatting, and overall writing style. It allows for both footnotes and in-text citations, giving writers options based on their preferences or publication requirements.

American Psychological Association (APA):

APA is common in the social sciences. It’s hailed for its clarity and emphasis on precision. It has specific rules for citing sources, creating references, and formatting papers. APA style uses in-text citations with an accompanying reference list. It's designed to convey information efficiently and is widely used in academic and scientific writing.

Modern Language Association (MLA):

MLA is widely used in the humanities, especially literature and language studies. It emphasizes the author-page format for in-text citations and provides guidelines for creating a "Works Cited" page. MLA is known for its focus on the author's name and the literary works cited. It’s frequently used in disciplines that prioritize literary analysis and critical thinking.

To confirm you're using the latest style guide, check the official website or publisher's site for updates, consult academic resources, and verify the guide's publication date. Online platforms and educational resources may also provide summaries and alerts about any revisions or additions to the style guide.

Citing sources

When working on your research paper, it's important to cite the sources you used properly. Your citation style will guide you through this process. Generally, citing sources in your research paper involves the following:

First, provide a brief citation in the body of your essay. This is also known as a parenthetical or in-text citation. 

Second, include a full citation in the Reference list at the end of your paper. Different types of citations include in-text citations, footnotes, and reference lists. 

In-text citations include the author's surname and the date of the citation. 

Footnotes appear at the bottom of each page of your research paper. They may also be summarized within a reference list at the end of the paper. 

A reference list includes all of the research used within the paper at the end of the document. It should include the author, date, paper title, and publisher listed in the order that aligns with your citation style.
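As a purely illustrative sketch (not an official formatter for any style guide), the snippet below assembles an approximate APA-style reference entry and in-text citation from the fields just described. The helper functions and the abbreviated author list are hypothetical; for real submissions, rely on a reference manager or the official style manual.

```python
# Rough approximation of an APA-style journal reference and in-text citation.
# Real APA rules have many more cases (italics, issue numbers, DOIs, "et al."
# handling), so treat this only as an illustration of the required fields.
def apa_reference(authors, year, title, journal, volume, pages):
    return f"{authors} ({year}). {title}. {journal}, {volume}, {pages}."

def apa_in_text(surname, year):
    return f"({surname}, {year})"

source = {
    "authors": "Primack, B. A., & Shensa, A.",  # author list shortened for illustration
    "year": 2017,
    "title": "Use of multiple social media platforms and symptoms of depression and anxiety",
    "journal": "Computers in Human Behavior",
    "volume": 69,
    "pages": "1-9",
}

print(apa_reference(**source))              # reference-list entry
print(apa_in_text("Primack et al.", 2017))  # in-text citation
```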

10 research paper writing tips:

Following some best practices is essential to writing a research paper that contributes to your field of study and creates a positive impact.

These tactics will help you structure your argument effectively and ensure your work benefits others:

  • Clear and precise language: Ensure your language is unambiguous. Use academic language appropriately, but keep it simple. Also, provide clear takeaways for your audience.
  • Effective idea separation: Organize the vast amount of information and sources in your paper with paragraphs and titles. Create easily digestible sections for your readers to navigate through.
  • Compelling intro: Craft an engaging introduction that captures your reader's interest. Hook your audience and motivate them to continue reading.
  • Thorough revision and editing: Take the time to review and edit your paper comprehensively. Use tools like Grammarly to detect and correct small, overlooked errors.
  • Thesis precision: Develop a clear and concise thesis statement that guides your paper. Ensure that your thesis aligns with your research's overall purpose and contribution.
  • Logical flow of ideas: Maintain a logical progression throughout the paper. Use transitions effectively to connect different sections and maintain coherence.
  • Critical evaluation of sources: Evaluate and critically assess the relevance and reliability of your sources. Ensure that your research is based on credible and up-to-date information.
  • Thematic consistency: Maintain a consistent theme throughout the paper. Ensure that all sections contribute cohesively to the overall argument.
  • Relevant supporting evidence: Provide concise and relevant evidence to support your arguments. Avoid unnecessary details that may distract from the main points.
  • Embrace counterarguments: Acknowledge and address opposing views to strengthen your position. Show that you have considered alternative arguments in your field.

7 research tips 

If you want your paper to not only be well-written but also contribute to the progress of human knowledge, consider these tips to take your paper to the next level:

Selecting the appropriate topic: The topic you select should align with your area of expertise, comply with the requirements of your project, and have sufficient resources for a comprehensive investigation.

Use academic databases: Academic databases such as PubMed, Google Scholar, and JSTOR offer a wealth of research papers that can help you learn what is already known about your chosen topic (a short example of searching one such database programmatically appears after these tips).

Critically evaluate sources: It is important not to accept research findings at face value. Instead, it is crucial to critically analyze the information to avoid jumping to conclusions or overlooking important details. A well-written research paper requires a critical analysis with thorough reasoning to support claims.

Diversify your sources: Expand your research horizons by exploring a variety of sources beyond the standard databases. Utilize books, conference proceedings, and interviews to gather diverse perspectives and enrich your understanding of the topic.

Take detailed notes: Detailed note-taking is crucial during research and can help you form the outline and body of your paper.

Stay up on trends: Keep abreast of the latest developments in your field by regularly checking for recent publications. Subscribe to newsletters, follow relevant journals, and attend conferences to stay informed about emerging trends and advancements. 

Engage in peer review: Seek feedback from peers or mentors to ensure the rigor and validity of your research. Peer review helps identify potential weaknesses in your methodology and strengthens the overall credibility of your findings.
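
As a concrete illustration of the "use academic databases" tip above, the short Python sketch below queries PubMed through NCBI's public E-utilities esearch endpoint. Treat it as an optional, hedged sketch rather than a required step: the query string is hypothetical, and databases such as Google Scholar and JSTOR do not offer a comparable open API, so for those you would search through the website.

```python
# Sketch: searching PubMed programmatically via NCBI's public E-utilities API.
# Requires the third-party "requests" package (pip install requests).
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def search_pubmed(query: str, max_results: int = 10) -> list[str]:
    """Return PubMed IDs (PMIDs) of articles matching the query."""
    params = {
        "db": "pubmed",         # search the PubMed database
        "term": query,          # the search terms
        "retmax": max_results,  # number of IDs to return
        "retmode": "json",      # request a JSON response
    }
    response = requests.get(ESEARCH_URL, params=params, timeout=30)
    response.raise_for_status()
    return response.json()["esearchresult"]["idlist"]

if __name__ == "__main__":
    # Hypothetical query, purely for illustration.
    for pmid in search_pubmed("research paper writing", max_results=5):
        print(f"https://pubmed.ncbi.nlm.nih.gov/{pmid}/")
```

Even if you never script a search, the same parameters (database, search terms, number of results) are what you fill in on a database's web interface.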

  • The real-world impact of research papers

Writing a research paper is more than an academic or business exercise. The experience provides an opportunity to explore a subject in depth, broaden one's understanding, and arrive at meaningful conclusions. With careful planning, dedication, and hard work, writing a research paper can be a fulfilling and enriching experience that contributes to the advancement of knowledge.

How do I publish my research paper? 

Many academics wish to publish their research papers. While publication is challenging, your paper is more likely to gain traction if it presents new information and is well written. To publish your research paper, find a target publication, thoroughly read its guidelines, format your paper accordingly, and submit it per the instructions. You may need to include a cover letter, too. After submission, your paper may be peer-reviewed by experts to assess its validity, quality, originality, and methodology. Following review, the publication will inform you whether your paper has been accepted or rejected.

What is a good opening sentence for a research paper? 

Beginning your research paper with a compelling introduction can ensure readers are interested in going further. A relevant quote, a compelling statistic, or a bold argument can open the paper and hook your reader. Remember, though, that the most important aspect of a research paper is the quality of the information, not necessarily your storytelling ability, so make sure anything you write aligns with your goals.

Research paper vs. a research proposal—what’s the difference?

While some may confuse research papers and proposals, they are different documents. 

A research proposal comes before a research paper. It is a detailed document that outlines an intended area of exploration. It includes the research topic, methodology, timeline, sources, and potential conclusions. Research proposals are often required when seeking approval to conduct research. 

A research paper, by contrast, reports the findings of completed research. It follows a structured format to present those findings and construct an argument or conclusion.


How to Write and Publish a Research Paper for a Peer-Reviewed Journal

  • Open access
  • Published: 30 April 2020
  • Journal of Cancer Education, Volume 36, pages 909–913 (2021)

Clara Busse (ORCID: orcid.org/0000-0002-0178-1000) and Ella August (ORCID: orcid.org/0000-0001-5151-1036)


Communicating research findings is an essential step in the research process. Often, peer-reviewed journals are the forum for such communication, yet many researchers are never taught how to write a publishable scientific paper. In this article, we explain the basic structure of a scientific paper and describe the information that should be included in each section. We also identify common pitfalls for each section and recommend strategies to avoid them. Further, we give advice about target journal selection and authorship. In the online resource 1 , we provide an example of a high-quality scientific paper, with annotations identifying the elements we describe in this article.


Introduction

Writing a scientific paper is an important component of the research process, yet researchers often receive little formal training in scientific writing. This is especially true in low-resource settings. In this article, we explain why choosing a target journal is important, give advice about authorship, provide a basic structure for writing each section of a scientific paper, and describe common pitfalls and recommendations for each section. In the online resource 1 , we also include an annotated journal article that identifies the key elements and writing approaches that we detail here. Before you begin your research, make sure you have ethical clearance from all relevant ethical review boards.

Select a Target Journal Early in the Writing Process

We recommend that you select a “target journal” early in the writing process; a “target journal” is the journal to which you plan to submit your paper. Each journal has a set of core readers and you should tailor your writing to this readership. For example, if you plan to submit a manuscript about vaping during pregnancy to a pregnancy-focused journal, you will need to explain what vaping is because readers of this journal may not have a background in this topic. However, if you were to submit that same article to a tobacco journal, you would not need to provide as much background information about vaping.

Information about a journal’s core readership can be found on its website, usually in a section called “About this journal” or something similar. For example, the Journal of Cancer Education presents such information on the “Aims and Scope” page of its website, which can be found here: https://www.springer.com/journal/13187/aims-and-scope .

Peer reviewer guidelines from your target journal are an additional resource that can help you tailor your writing to the journal and provide additional advice about crafting an effective article [ 1 ]. These are not always available, but it is worth a quick web search to find out.

Identify Author Roles Early in the Process

Early in the writing process, identify authors, determine the order of authors, and discuss the responsibilities of each author. Standard author responsibilities have been identified by The International Committee of Medical Journal Editors (ICMJE) [ 2 ]. To set clear expectations about each team member’s responsibilities and prevent errors in communication, we also suggest outlining more detailed roles, such as who will draft each section of the manuscript, write the abstract, submit the paper electronically, serve as corresponding author, and write the cover letter. It is best to formalize this agreement in writing after discussing it, circulating the document to the author team for approval. We suggest creating a title page on which all authors are listed in the agreed-upon order. It may be necessary to adjust authorship roles and order during the development of the paper. If a new author order is agreed upon, be sure to update the title page in the manuscript draft.

In the case where multiple papers will result from a single study, authors should discuss who will author each paper. Additionally, authors should agree on a deadline for each paper and the lead author should take responsibility for producing an initial draft by this deadline.

Structure of the Introduction Section

The introduction section should be approximately three to five paragraphs in length. Look at examples from your target journal to decide the appropriate length. This section should include the elements shown in Fig.  1 . Begin with a general context, narrowing to the specific focus of the paper. Include five main elements: why your research is important, what is already known about the topic, the “gap” or what is not yet known about the topic, why it is important to learn the new information that your research adds, and the specific research aim(s) that your paper addresses. Your research aim should address the gap you identified. Be sure to add enough background information to enable readers to understand your study. Table 1 provides common introduction section pitfalls and recommendations for addressing them.

Figure 1: The main elements of the introduction section of an original research article. Often, the elements overlap.

Methods Section

The purpose of the methods section is twofold: to explain how the study was done in enough detail to enable its replication and to provide enough contextual detail to enable readers to understand and interpret the results. In general, the essential elements of a methods section are the following: a description of the setting and participants, the study design and timing, the recruitment and sampling, the data collection process, the dataset, the dependent and independent variables, the covariates, the analytic approach for each research objective, and the ethical approval. The hallmark of an exemplary methods section is the justification of why each method was used. Table 2 provides common methods section pitfalls and recommendations for addressing them.

Results Section

The focus of the results section should be associations, or lack thereof, rather than statistical tests. Two considerations should guide your writing here. First, the results should present answers to each part of the research aim. Second, return to the methods section to ensure that the analysis and variables for each result have been explained.

Begin the results section by describing the number of participants in the final sample and details such as the number who were approached to participate, the proportion who were eligible and who enrolled, and the number of participants who dropped out. The next part of the results should describe the participant characteristics. After that, you may organize your results by the aim or by putting the most exciting results first. Do not forget to report your non-significant associations. These are still findings.

Tables and figures capture the reader’s attention and efficiently communicate your main findings [ 3 ]. Each table and figure should have a clear message and should complement, rather than repeat, the text. Tables and figures should communicate all salient details necessary for a reader to understand the findings without consulting the text. Include information on comparisons and tests, as well as information about the sample and timing of the study in the title, legend, or in a footnote. Note that figures are often more visually interesting than tables, so if it is feasible to make a figure, make a figure. To avoid confusing the reader, either avoid abbreviations in tables and figures, or define them in a footnote. Note that there should not be citations in the results section and you should not interpret results here. Table 3 provides common results section pitfalls and recommendations for addressing them.

Discussion Section

In contrast to the introduction section, the discussion should take the form of a right-side-up triangle, beginning with the interpretation of your results and moving to general implications (Fig. 2). This section typically begins with a restatement of the main findings, which can usually be accomplished with a few carefully crafted sentences.

Figure 2: Major elements of the discussion section of an original research article. Often, the elements overlap.

Next, interpret the meaning or explain the significance of your results, lifting the reader’s gaze from the study’s specific findings to more general applications. Then, compare these study findings with other research. Are these findings in agreement or disagreement with those from other studies? Does this study impart additional nuance to well-accepted theories? Situate your findings within the broader context of scientific literature, then explain the pathways or mechanisms that might give rise to, or explain, the results.

Journals vary in their approach to strengths and limitations sections: some are embedded paragraphs within the discussion section, while some mandate separate section headings. Keep in mind that every study has strengths and limitations. Candidly reporting yours helps readers to correctly interpret your research findings.

The next element of the discussion is a summary of the potential impacts and applications of the research. Should these results be used to optimally design an intervention? Does the work have implications for clinical protocols or public policy? These considerations will help the reader to further grasp the possible impacts of the presented work.

Finally, the discussion should conclude with specific suggestions for future work. Here, you have an opportunity to illuminate specific gaps in the literature that compel further study. Avoid the phrase “future research is necessary” because the recommendation is too general to be helpful to readers. Instead, provide substantive and specific recommendations for future studies. Table 4 provides common discussion section pitfalls and recommendations for addressing them.

Follow the Journal’s Author Guidelines

After you select a target journal, identify the journal’s author guidelines to guide the formatting of your manuscript and references. Author guidelines will often (but not always) include instructions for titles, cover letters, and other components of a manuscript submission. Read the guidelines carefully. If you do not follow the guidelines, your article will be sent back to you.

Finally, do not submit your paper to more than one journal at a time. Even if this is not explicitly stated in the author guidelines of your target journal, it is considered inappropriate and unprofessional.

Your title should invite readers to continue reading beyond the first page [ 4 , 5 ]. It should be informative and interesting. Consider describing the independent and dependent variables, the population and setting, the study design, the timing, and even the main result in your title. Because the focus of the paper can change as you write and revise, we recommend you wait until you have finished writing your paper before composing the title.

Be sure that the title is useful for potential readers searching for your topic. The keywords you select should complement those in your title to maximize the likelihood that a researcher will find your paper through a database search. Avoid using abbreviations in your title unless they are very well known, such as SNP, because it is more likely that someone will use a complete word rather than an abbreviation as a search term to help readers find your paper.

After you have written a complete draft, use the checklist (Fig. 3 ) below to guide your revisions and editing. Additional resources are available on writing the abstract and citing references [ 5 ]. When you feel that your work is ready, ask a trusted colleague or two to read the work and provide informal feedback. The box below provides a checklist that summarizes the key points offered in this article.

Figure 3: Checklist for manuscript quality.

References

1. Michalek AM (2014) Down the rabbit hole…advice to reviewers. J Cancer Educ 29:4–5

2. International Committee of Medical Journal Editors. Defining the role of authors and contributors: who is an author? http://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authosrs-and-contributors.html . Accessed 15 January 2020

3. Vetto JT (2014) Short and sweet: a short course on concise medical writing. J Cancer Educ 29(1):194–195

4. Mensh B, Kording K (2017) Ten simple rules for structuring papers. PLoS Comput Biol. https://doi.org/10.1371/journal.pcbi.1005619

5. Lang TA (2017) Writing a better research article. J Public Health Emerg. https://doi.org/10.21037/jphe.2017.11.06


Acknowledgments

Ella August is grateful to the Sustainable Sciences Institute for mentoring her in training researchers on writing and publishing their research.

Code Availability

Not applicable.

Author information

Authors and affiliations

Department of Maternal and Child Health, University of North Carolina Gillings School of Global Public Health, 135 Dauer Dr, 27599, Chapel Hill, NC, USA

Clara Busse & Ella August

Department of Epidemiology, University of Michigan School of Public Health, 1415 Washington Heights, Ann Arbor, MI, 48109-2029, USA

Ella August


Corresponding author

Correspondence to Ella August.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.


Electronic supplementary material (PDF 362 kb)


About this article

Busse, C., August, E. How to Write and Publish a Research Paper for a Peer-Reviewed Journal. J Canc Educ 36, 909–913 (2021). https://doi.org/10.1007/s13187-020-01751-z

Published: 30 April 2020

Issue Date: October 2021

DOI: https://doi.org/10.1007/s13187-020-01751-z



Evidence-Based Research Series-Paper 1: What Evidence-Based Research is and why is it important?

Affiliations

  • 1 Johns Hopkins Evidence-based Practice Center, Division of General Internal Medicine, Department of Medicine, Johns Hopkins University, Baltimore, MD, USA.
  • 2 Digital Content Services, Operations, Elsevier Ltd., 125 London Wall, London, EC2Y 5AS, UK.
  • 3 School of Nursing, McMaster University, Health Sciences Centre, Room 2J20, 1280 Main Street West, Hamilton, Ontario, Canada, L8S 4K1; Section for Evidence-Based Practice, Western Norway University of Applied Sciences, Inndalsveien 28, Bergen, P.O.Box 7030 N-5020 Bergen, Norway.
  • 4 Department of Sport Science and Clinical Biomechanics, University of Southern Denmark, Campusvej 55, 5230, Odense M, Denmark; Department of Physiotherapy and Occupational Therapy, University Hospital of Copenhagen, Herlev & Gentofte, Kildegaardsvej 28, 2900, Hellerup, Denmark.
  • 5 Musculoskeletal Statistics Unit, the Parker Institute, Bispebjerg and Frederiksberg Hospital, Copenhagen, Nordre Fasanvej 57, 2000, Copenhagen F, Denmark; Department of Clinical Research, Research Unit of Rheumatology, University of Southern Denmark, Odense University Hospital, Denmark.
  • 6 Section for Evidence-Based Practice, Western Norway University of Applied Sciences, Inndalsveien 28, Bergen, P.O.Box 7030 N-5020 Bergen, Norway. Electronic address: [email protected].
  • PMID: 32979491
  • DOI: 10.1016/j.jclinepi.2020.07.020

Objectives: There is considerable actual and potential waste in research. Evidence-based research ensures worthwhile and valuable research. The aim of this series, which this article introduces, is to describe the evidence-based research approach.

Study design and setting: In this first article of a three-article series, we introduce the evidence-based research approach. Evidence-based research is the use of prior research in a systematic and transparent way to inform a new study so that it is answering questions that matter in a valid, efficient, and accessible manner.

Results: We describe evidence-based research and provide an overview of the approach of systematically and transparently using previous research before starting a new study, both to justify and design the new study (article #2 in the series) and, on study completion, to place its results in context with what is already known (article #3 in the series).

Conclusion: This series introduces evidence-based research as an approach to minimize unnecessary and irrelevant clinical health research that is unscientific, wasteful, and unethical.

Keywords: Clinical health research; Clinical trials; Evidence synthesis; Evidence-based research; Medical ethics; Research ethics; Systematic review.

Copyright © 2020 Elsevier Inc. All rights reserved.

Publication types and MeSH terms

  • Research Support, Non-U.S. Gov't
  • Biomedical Research* / methods
  • Biomedical Research* / organization & administration
  • Clinical Trials as Topic / ethics
  • Clinical Trials as Topic / methods
  • Clinical Trials as Topic / organization & administration
  • Ethics, Research
  • Evidence-Based Medicine / methods*
  • Needs Assessment
  • Reproducibility of Results
  • Research Design* / standards
  • Research Design* / trends
  • Systematic Reviews as Topic
  • Treatment Outcome

Research Paper Examples

Academic Writing Service

Research paper examples are of great value for students who want to complete their assignments on time and efficiently. If you are a university student, your first stop in the quest for research paper examples will be the campus library, where you can view sample research papers written by lecturers and other professionals in diverse fields, as well as those of fellow students who preceded you on campus. Many college departments maintain libraries of previous student work, including large research papers, which current students can examine.

Embark on a journey of academic excellence with iResearchNet, your premier destination for research paper examples that illuminate the path to scholarly success. In the realm of academia, where the pursuit of knowledge is both a challenge and a privilege, the significance of having access to high-quality research paper examples cannot be overstated. These exemplars are not merely papers; they are beacons of insight, guiding students and scholars through the complex maze of academic writing and research methodologies.

At iResearchNet, we understand that the foundation of academic achievement lies in the quality of resources at one’s disposal. This is why we are dedicated to offering a comprehensive collection of research paper examples across a multitude of disciplines. Each example stands as a testament to rigorous research, clear writing, and the deep understanding necessary to advance in one’s academic and professional journey.

Access to superior research paper examples equips learners with the tools to develop their own ideas, arguments, and hypotheses, fostering a cycle of learning and discovery that transcends traditional boundaries. It is with this vision that iResearchNet commits to empowering students and researchers, providing them with the resources to not only meet but exceed the highest standards of academic excellence. Join us on this journey, and let iResearchNet be your guide to unlocking the full potential of your academic endeavors.

What Is a Research Paper?

A research paper represents the pinnacle of academic investigation, a scholarly manuscript that encapsulates a detailed study, analysis, or argument based on extensive independent research. It is an embodiment of the researcher’s ability to synthesize a wealth of information, draw insightful conclusions, and contribute novel perspectives to the existing body of knowledge within a specific field. At its core, a research paper strives to push the boundaries of what is known, challenging existing theories and proposing new insights that could potentially reshape the understanding of a particular subject area.

The objective of writing a research paper is manifold, serving both educational and intellectual pursuits. Primarily, it aims to educate the author, providing a rigorous framework through which they engage deeply with a topic, hone their research and analytical skills, and learn the art of academic writing. Beyond personal growth, the research paper serves the broader academic community by contributing to the collective pool of knowledge, offering fresh perspectives, and stimulating further research. It is a medium through which scholars communicate ideas, findings, and theories, thereby fostering an ongoing dialogue that propels the advancement of science, humanities, and other fields of study.

Research papers can be categorized into various types, each with distinct objectives and methodologies. The most common types include:

  • Analytical Research Paper: This type focuses on analyzing different viewpoints represented in the scholarly literature or data. The author critically evaluates and interprets the information, aiming to provide a comprehensive understanding of the topic.
  • Argumentative or Persuasive Research Paper: Here, the author adopts a stance on a contentious issue and argues in favor of their position. The objective is to persuade the reader through evidence and logic that the author’s viewpoint is valid or preferable.
  • Experimental Research Paper: Often used in the sciences, this type documents the process, results, and implications of an experiment conducted by the author. It provides a detailed account of the methodology, data collected, analysis performed, and conclusions drawn.
  • Survey Research Paper: This involves collecting data from a set of respondents about their opinions, behaviors, or characteristics. The paper analyzes this data to draw conclusions about the population from which the sample was drawn.
  • Comparative Research Paper: This type involves comparing and contrasting different theories, policies, or phenomena. The aim is to highlight similarities and differences, thereby gaining a deeper understanding of the subjects under review.
  • Cause and Effect Research Paper: It explores the reasons behind specific actions, events, or conditions and the consequences that follow. The goal is to establish a causal relationship between variables.
  • Review Research Paper: This paper synthesizes existing research on a particular topic, offering a comprehensive analysis of the literature to identify trends, gaps, and consensus in the field.

Understanding the nuances and objectives of these various types of research papers is crucial for scholars and students alike, as it guides their approach to conducting and writing up their research. Each type demands a unique set of skills and perspectives, pushing the author to think critically and creatively about their subject matter. As the academic landscape continues to evolve, the research paper remains a fundamental tool for disseminating knowledge, encouraging innovation, and fostering a culture of inquiry and exploration.

Browse Sample Research Papers

iResearchNet prides itself on offering a wide array of research paper examples across various disciplines, meticulously curated to support students, educators, and researchers in their academic endeavors. Each example embodies the hallmarks of scholarly excellence—rigorous research, analytical depth, and clear, precise writing. Below, we explore the diverse range of research paper examples available through iResearchNet, designed to inspire and guide users in their quest for academic achievement.

Anthropology Research Paper Examples

Our anthropology research paper examples delve into the study of humanity, exploring cultural, social, biological, and linguistic variations among human populations. These papers offer insights into human behavior, traditions, and evolution, providing a comprehensive overview of anthropological research methods and theories.

  • Archaeology Research Paper
  • Forensic Anthropology Research Paper
  • Linguistics Research Paper
  • Medical Anthropology Research Paper
  • Social Problems Research Paper

Art Research Paper Examples

The art research paper examples feature analyses of artistic expressions across different cultures and historical periods. These papers cover a variety of topics, including art history, criticism, and theory, as well as the examination of specific artworks or movements.

  • Performing Arts Research Paper
  • Music Research Paper
  • Architecture Research Paper
  • Theater Research Paper
  • Visual Arts Research Paper

Cancer Research Paper Examples

Our cancer research paper examples focus on the latest findings in the field of oncology, discussing the biological mechanisms of cancer, advancements in diagnostic techniques, and innovative treatment strategies. These papers aim to contribute to the ongoing battle against cancer by sharing cutting-edge research.

  • Breast Cancer Research Paper
  • Leukemia Research Paper
  • Lung Cancer Research Paper
  • Ovarian Cancer Research Paper
  • Prostate Cancer Research Paper

Communication Research Paper Examples

These examples explore the complexities of human communication, covering topics such as media studies, interpersonal communication, and public relations. The papers examine how communication processes affect individuals, societies, and cultures.

  • Advertising Research Paper
  • Journalism Research Paper
  • Media Research Paper
  • Public Relations Research Paper
  • Public Speaking Research Paper

Crime Research Paper Examples

The crime research paper examples provided by iResearchNet investigate various aspects of criminal behavior and the factors contributing to crime. These papers cover a range of topics, from theoretical analyses of criminality to empirical studies on crime prevention strategies.

  • Computer Crime Research Paper
  • Domestic Violence Research Paper
  • Hate Crimes Research Paper
  • Organized Crime Research Paper
  • White-Collar Crime Research Paper

Criminal Justice Research Paper Examples

Our criminal justice research paper examples delve into the functioning of the criminal justice system, exploring issues related to law enforcement, the judiciary, and corrections. These papers critically examine policies, practices, and reforms within the criminal justice system.

  • Capital Punishment Research Paper
  • Community Policing Research Paper
  • Corporal Punishment Research Paper
  • Criminal Investigation Research Paper
  • Criminal Justice System Research Paper
  • Plea Bargaining Research Paper
  • Restorative Justice Research Paper

Criminal Law Research Paper Examples

These examples focus on the legal aspects of criminal behavior, discussing laws, regulations, and case law that govern criminal proceedings. The papers provide an in-depth analysis of criminal law principles, legal defenses, and the implications of legal decisions.

  • Actus Reus Research Paper
  • Gun Control Research Paper
  • Insanity Defense Research Paper
  • International Criminal Law Research Paper
  • Self-Defense Research Paper

Criminology Research Paper Examples

iResearchNet’s criminology research paper examples study the causes, prevention, and societal impacts of crime. These papers employ various theoretical frameworks to analyze crime trends and propose effective crime reduction strategies.

  • Cultural Criminology Research Paper
  • Education and Crime Research Paper
  • Marxist Criminology Research Paper
  • School Crime Research Paper
  • Urban Crime Research Paper

Culture Research Paper Examples

The culture research paper examples examine the beliefs, practices, and artifacts that define different societies. These papers explore how culture shapes identities, influences behaviors, and impacts social interactions.

  • Advertising and Culture Research Paper
  • Material Culture Research Paper
  • Popular Culture Research Paper
  • Cross-Cultural Studies Research Paper
  • Culture Change Research Paper

Economics Research Paper Examples

Our economics research paper examples offer insights into the functioning of economies at both the micro and macro levels. Topics include economic theory, policy analysis, and the examination of economic indicators and trends.

  • Budget Research Paper
  • Cost-Benefit Analysis Research Paper
  • Fiscal Policy Research Paper
  • Labor Market Research Paper

Education Research Paper Examples

These examples address a wide range of issues in education, from teaching methods and curriculum design to educational policy and reform. The papers aim to enhance understanding and improve outcomes in educational settings.

  • Early Childhood Education Research Paper
  • Information Processing Research Paper
  • Multicultural Education Research Paper
  • Special Education Research Paper
  • Standardized Tests Research Paper

Health Research Paper Examples

The health research paper examples focus on public health issues, healthcare systems, and medical interventions. These papers contribute to the discourse on health promotion, disease prevention, and healthcare management.

  • AIDS Research Paper
  • Alcoholism Research Paper
  • Disease Research Paper
  • Health Economics Research Paper
  • Health Insurance Research Paper
  • Nursing Research Paper

History Research Paper Examples

Our history research paper examples cover significant events, figures, and periods, offering critical analyses of historical narratives and their impact on present-day society.

  • Adolf Hitler Research Paper
  • American Revolution Research Paper
  • Ancient Greece Research Paper
  • Apartheid Research Paper
  • Christopher Columbus Research Paper
  • Climate Change Research Paper
  • Cold War Research Paper
  • Columbian Exchange Research Paper
  • Deforestation Research Paper
  • Diseases Research Paper
  • Earthquakes Research Paper
  • Egypt Research Paper

Leadership Research Paper Examples

These examples explore the theories and practices of effective leadership, examining the qualities, behaviors, and strategies that distinguish successful leaders in various contexts.

  • Implicit Leadership Theories Research Paper
  • Judicial Leadership Research Paper
  • Leadership Styles Research Paper
  • Police Leadership Research Paper
  • Political Leadership Research Paper
  • Remote Leadership Research Paper

Mental Health Research Paper Examples

The mental health research paper examples provided by iResearchNet discuss psychological disorders, therapeutic interventions, and mental health advocacy. These papers aim to raise awareness and improve mental health care practices.

  • ADHD Research Paper
  • Anxiety Research Paper
  • Autism Research Paper
  • Depression Research Paper
  • Eating Disorders Research Paper
  • PTSD Research Paper
  • Schizophrenia Research Paper
  • Stress Research Paper

Political Science Research Paper Examples

Our political science research paper examples analyze political systems, behaviors, and ideologies. Topics include governance, policy analysis, and the study of political movements and institutions.

  • American Government Research Paper
  • Civil War Research Paper
  • Communism Research Paper
  • Democracy Research Paper
  • Game Theory Research Paper
  • Human Rights Research Paper
  • International Relations Research Paper
  • Terrorism Research Paper

Psychology Research Paper Examples

These examples delve into the study of the mind and behavior, covering a broad range of topics in clinical, cognitive, developmental, and social psychology.

  • Artificial Intelligence Research Paper
  • Assessment Psychology Research Paper
  • Biological Psychology Research Paper
  • Clinical Psychology Research Paper
  • Cognitive Psychology Research Paper
  • Developmental Psychology Research Paper
  • Discrimination Research Paper
  • Educational Psychology Research Paper
  • Environmental Psychology Research Paper
  • Experimental Psychology Research Paper
  • Intelligence Research Paper
  • Learning Disabilities Research Paper
  • Personality Psychology Research Paper
  • Psychiatry Research Paper
  • Psychotherapy Research Paper
  • Social Cognition Research Paper
  • Social Psychology Research Paper

Sociology Research Paper Examples

The sociology research paper examples examine societal structures, relationships, and processes. These papers provide insights into social phenomena, inequality, and change.

  • Family Research Paper
  • Demography Research Paper
  • Group Dynamics Research Paper
  • Quality of Life Research Paper
  • Social Change Research Paper
  • Social Movements Research Paper
  • Social Networks Research Paper

Technology Research Paper Examples

Our technology research paper examples address the impact of technological advancements on society, exploring issues related to digital communication, cybersecurity, and innovation.

  • Computer Forensics Research Paper
  • Genetic Engineering Research Paper
  • History of Technology Research Paper
  • Internet Research Paper
  • Nanotechnology Research Paper


Other Research Paper Examples

  • Abortion Research Paper
  • Adoption Research Paper
  • Animal Testing Research Paper
  • Bullying Research Paper
  • Diversity Research Paper
  • Divorce Research Paper
  • Drugs Research Paper
  • Environmental Issues Research Paper
  • Ethics Research Paper
  • Evolution Research Paper
  • Feminism Research Paper
  • Food Research Paper
  • Gender Research Paper
  • Globalization Research Paper
  • Juvenile Justice Research Paper
  • Law Research Paper
  • Management Research Paper
  • Philosophy Research Paper
  • Public Health Research Paper
  • Religion Research Paper
  • Science Research Paper
  • Social Sciences Research Paper
  • Statistics Research Paper
  • Other Sample Research Papers

Each category of research paper examples provided by iResearchNet serves as a valuable resource for students and researchers seeking to deepen their understanding of a specific field. By offering a comprehensive collection of well-researched and thoughtfully written papers, iResearchNet aims to support academic growth and encourage scholarly inquiry across diverse disciplines.

Sample Research Papers: To Read or Not to Read?

When you get an assignment to write a research paper, the first question you ask yourself is, "Should I look for research paper examples?" Maybe you can deal with this task on your own without any help. Is it really that difficult?

Thousands of students turn to our service every day for help. It does not mean that they cannot do their assignments on their own. They can, but the reason is different: writing a research paper demands so much time and energy that asking for assistance seems to be a perfect solution. As a matter of fact, it is a perfect solution, especially when you also need to work to pay for your studies.

Firstly, if you search for research paper examples before you start writing, you can save a significant amount of time. You look at the example and understand the gist of your assignment within several minutes. Secondly, when you examine a sample paper, you get to know all the requirements: you analyze the structure, the language, and the formatting details. Finally, reading examples helps students overcome writer's block, as other people's ideas can motivate you to discover your own.

The significance of research paper examples in the academic journey of students cannot be overstated. These examples serve not only as a blueprint for structuring and formatting academic papers but also as a beacon guiding students through the complex landscape of academic writing standards. iResearchNet recognizes the pivotal role that high-quality research paper examples play in fostering academic success and intellectual growth among students.

Blueprint for Academic Success

Research paper examples provided by iResearchNet are meticulously crafted to demonstrate the essential elements of effective academic writing. These examples offer clear insights into how to organize a paper, from the introductory paragraph, through the development of arguments and analysis, to the concluding remarks. They showcase the appropriate use of headings, subheadings, and the integration of tables, figures, and appendices, which collectively contribute to a well-organized and coherent piece of scholarly work. By studying these examples, students can gain a comprehensive understanding of the structure and formatting required in academic papers, which is crucial for meeting the rigorous standards of academic institutions.

Sparking Ideas and Providing Evidence

Beyond serving as a structural guide, research paper examples act as a source of inspiration for students embarking on their research projects. These examples illuminate a wide array of topics, methodologies, and analytical frameworks, thereby sparking ideas for students’ own research inquiries. They demonstrate how to effectively engage with existing literature, frame research questions, and develop a compelling thesis statement. Moreover, by presenting evidence and arguments in a logical and persuasive manner, these examples illustrate the art of substantiating claims with solid research, encouraging students to adopt a similar level of rigor and depth in their work.

Enhancing Research Skills

Engagement with high-quality research paper examples is instrumental in improving research skills among students. These examples expose students to various research methodologies, from qualitative case studies to quantitative analyses, enabling them to appreciate the breadth of research approaches applicable to their fields of study. By analyzing these examples, students learn how to critically evaluate sources, differentiate between primary and secondary data, and apply ethical considerations in research. Furthermore, these papers serve as a model for effectively citing sources, thereby teaching students the importance of academic integrity and the avoidance of plagiarism.


In essence, research paper examples are a fundamental resource that can significantly enhance the academic writing and research capabilities of students. iResearchNet’s commitment to providing access to a diverse collection of exemplary papers reflects its dedication to supporting academic excellence. Through these examples, students are equipped with the tools necessary to navigate the challenges of academic writing, foster innovative thinking, and contribute meaningfully to the scholarly community. By leveraging these resources, students can elevate their academic pursuits, ensuring their research is not only rigorous but also impactful.

Custom Research Paper Writing Services

In the academic journey, the ability to craft a compelling and meticulously researched paper is invaluable. Recognizing the challenges and pressures that students face, iResearchNet has developed a suite of research paper writing services designed to alleviate the burden of academic writing and research. Our services are tailored to meet the diverse needs of students across all academic disciplines, ensuring that every research paper not only meets but exceeds the rigorous standards of scholarly excellence. Below, we detail the multifaceted aspects of our research paper writing services, illustrating how iResearchNet stands as a beacon of support in the academic landscape.

At iResearchNet, we understand the pivotal role that research papers play in the academic and professional development of students. With this understanding at our core, we offer comprehensive writing services that cater to the intricate process of research paper creation. Our services are designed to guide students through every stage of the writing process, from initial research to final submission, ensuring clarity, coherence, and scholarly rigor.

The Need for Research Paper Writing Services

Navigating the complexities of academic writing and research can be a daunting task for many students. The challenges of identifying credible sources, synthesizing information, adhering to academic standards, and articulating arguments cohesively are significant. Furthermore, the pressures of tight deadlines and the high stakes of academic success can exacerbate the difficulties faced by students. iResearchNet’s research paper writing services are crafted to address these challenges head-on, providing expert assistance that empowers students to achieve their academic goals with confidence.

Why Choose iResearchNet

Selecting the right partner for research paper writing is a pivotal decision for students and researchers aiming for academic excellence. iResearchNet stands out as the premier choice for several compelling reasons, each designed to meet the diverse needs of our clientele and ensure their success.

  • Expert Writers : At iResearchNet, we pride ourselves on our team of expert writers who are not only masters in their respective fields but also possess a profound understanding of academic writing standards. With advanced degrees and extensive experience, our writers bring depth, insight, and precision to each paper, ensuring that your work is informed by the latest research and methodologies.
  • Top Quality : Quality is the cornerstone of our services. We adhere to rigorous quality control processes to ensure that every paper we deliver meets the highest standards of academic excellence. Our commitment to quality means thorough research, impeccable writing, and meticulous proofreading, resulting in work that not only meets but exceeds expectations.
  • Customized Solutions : Understanding that each research project has its unique challenges and requirements, iResearchNet offers customized solutions tailored to your specific needs. Whether you’re grappling with a complex research topic, a tight deadline, or specific formatting guidelines, our team is equipped to provide personalized support that aligns with your objectives.
  • Affordable Prices : We believe that access to high-quality research paper writing services should not be prohibitive. iResearchNet offers competitive pricing structures designed to provide value without compromising on quality. Our transparent pricing model ensures that you know exactly what you are paying for, with no hidden costs or surprises.
  • Timely Delivery : Meeting deadlines is critical in academic writing, and at iResearchNet, we take this seriously. Our efficient processes and dedicated team ensure that your paper is delivered on time, every time, allowing you to meet your academic deadlines with confidence.
  • 24/7 Support : Our commitment to your success is reflected in our round-the-clock support. Whether you have a question about your order, need to communicate with your writer, or require assistance with any aspect of our service, our friendly and knowledgeable support team is available 24/7 to assist you.
  • Money-Back Guarantee : Your satisfaction is our top priority. iResearchNet offers a money-back guarantee, ensuring that if for any reason you are not satisfied with the work delivered, you are entitled to a refund. This policy underscores our confidence in the quality of our services and our dedication to your success.

Choosing iResearchNet for your research paper writing needs means partnering with a trusted provider committed to excellence, innovation, and customer satisfaction. Our unparalleled blend of expert writers, top-quality work, customized solutions, affordability, timely delivery, 24/7 support, and a money-back guarantee makes us the ideal choice for students and researchers seeking to elevate their academic performance.

How It Works: iResearchNet’s Streamlined Process

Navigating the process of obtaining a top-notch research paper has never been more straightforward, thanks to iResearchNet’s streamlined approach. Our user-friendly system ensures that from the moment you decide to place your order to the final receipt of your custom-written paper, every step is seamless, transparent, and tailored to your needs. Here’s how our comprehensive process works:

  • Place Your Order : Begin your journey to academic success by visiting our website and filling out the order form. Here, you’ll provide details about your research paper, including the topic, academic level, number of pages, formatting style, and any specific instructions or requirements. This initial step is crucial for us to understand your needs fully and match you with the most suitable writer.
  • Make Payment : Once your order details are confirmed, you’ll proceed to the payment section. Our platform offers a variety of secure payment options, ensuring that your transaction is safe and hassle-free. Our transparent pricing policy means you’ll know exactly what you’re paying for upfront, with no hidden fees.
  • Choose Your Writer : After payment, you’ll have the opportunity to choose a writer from our team of experts. Our writers are categorized based on their fields of expertise, academic qualifications, and customer feedback ratings. This step empowers you to select the writer who best matches your research paper’s requirements, ensuring a personalized and targeted approach to your project.
  • Receive Your Work : Our writer will commence work on your research paper, adhering to the specified guidelines and timelines. Throughout this process, you’ll have the ability to communicate directly with your writer, allowing for updates, revisions, and clarifications to ensure the final product meets your expectations. Once completed, your research paper will undergo a thorough quality check before being delivered to you via your chosen method.
  • Free Revisions : Your satisfaction is our priority. Upon receiving your research paper, you’ll have the opportunity to review the work and request any necessary revisions. iResearchNet offers free revisions within a specified period, ensuring that your final paper perfectly aligns with your academic requirements and expectations.

Our process is designed to provide you with a stress-free experience and a research paper that reflects your academic goals. From placing your order to enjoying the success of a well-written paper, iResearchNet is here to support you every step of the way.

Our Extras: Enhancing Your iResearchNet Experience

At iResearchNet, we are committed to offering more than just standard research paper writing services. We understand the importance of providing a comprehensive and personalized experience for each of our clients. That’s why we offer a range of additional services designed to enhance your experience and ensure your academic success. Here are the exclusive extras you can benefit from:

  • VIP Service : Elevate your iResearchNet experience with our VIP service, offering you priority treatment from the moment you place your order. This service ensures your projects are given first priority, with immediate attention from our team, and direct access to our top-tier writers and editors. VIP clients also benefit from our highest level of customer support, available to address any inquiries or needs with utmost urgency and personalized care.
  • Plagiarism Report : Integrity and originality are paramount in academic writing. To provide you with peace of mind, we offer a detailed plagiarism report with every research paper. This report is generated using advanced plagiarism detection software, ensuring that your work is unique and adheres to the highest standards of academic honesty.
  • Text Messages : Stay informed about your order’s progress with real-time updates sent directly to your phone. This service ensures you’re always in the loop, providing immediate notifications about key milestones, writer assignments, and any changes to your order status. With this added layer of communication, you can relax knowing that you’ll never miss an important update about your research paper.
  • Table of Contents : A well-organized research paper is key to guiding readers through your work. Our service includes the creation of a detailed table of contents, meticulously structured to reflect the main sections and subsections of your paper. This not only enhances the navigability of your document but also presents your research in a professional and academically appropriate format.
  • Abstract Page : The abstract page is your research paper’s first impression, summarizing the essential points of your study and its conclusions. Crafting a compelling abstract is an art, and our experts are skilled in highlighting the significance, methodology, results, and implications of your research succinctly and effectively. This service ensures that your paper makes a strong impact from the very beginning.
  • Editor’s Check : Before your research paper reaches you, it undergoes a final review by our team of experienced editors. This editor’s check is a comprehensive process that includes proofreading for grammar, punctuation, and spelling errors, as well as ensuring that the paper meets all your specifications and academic standards. This meticulous attention to detail guarantees that your paper is polished, professional, and ready for submission.

By choosing iResearchNet and leveraging our extras, you can elevate the quality of your research paper and enjoy a customized, worry-free academic support experience.

A research paper is an academic piece of writing, so you need to follow all the requirements and standards. Otherwise, it will be impossible to earn a high grade. To make this easier for you, we have analyzed the structure and peculiarities of a sample research paper on the topic ‘Child Abuse’.

The paper includes 7,300+ words, a detailed outline, citations in APA style, and a bibliography with 28 sources.

To write any paper, you first need a strong outline. This is the key to a well-organized paper: when you plan the structure in advance, it is easier to present your ideas logically, without jumping from one thought to another.

In the outline, you need to name all the parts of your paper: the introduction, main body, conclusion, and bibliography. Some papers require an abstract and a proposal as well.

A good outline will serve as a guide through your paper, making it easier for the reader to follow your ideas.

I. Introduction

II. Estimates of Child Abuse: Methodological Limitations
III. Child Abuse and Neglect: The Legalities
IV. Corporal Punishment Versus Child Abuse
V. Child Abuse Victims: The Patterns
VI. Child Abuse Perpetrators: The Patterns
VII. Explanations for Child Abuse
VIII. Consequences of Child Abuse and Neglect
IX. Determining Abuse: How to Tell Whether a Child Is Abused or Neglected
X. Determining Abuse: Interviewing Children
XI. How Can Society Help Abused Children and Abusive Families?

Introduction

An introduction should include a thesis statement and the main points that you will discuss in the paper.

A thesis statement is one sentence in which you state your point of view. You will then develop this point of view throughout the whole piece of work:

‘The impact of child abuse affects more than one’s childhood, as the psychological and physical injuries often extend well into adulthood.’

Child abuse is a very real and prominent social problem today. The impact of child abuse affects more than one’s childhood, as the psychological and physical injuries often extend well into adulthood. Most children are defenseless against abuse, are dependent on their caretakers, and are unable to protect themselves from these acts.

Childhood serves as the basis for growth, development, and socialization. Throughout adolescence, children are taught how to become productive, positive, functioning members of society. Much of the socializing of children, particularly in their very earliest years, comes at the hands of family members. Unfortunately, the messages conveyed to and the actions against children by their families are not always the positive building blocks for which one would hope.

In 2008, the Children’s Defense Fund reported that each day in America, 2,421 children are confirmed as abused or neglected, 4 children are killed by abuse or neglect, and 78 babies die before their first birthday. These daily estimates translate into tremendous national figures. In 2006, caseworkers substantiated an estimated 905,000 reports of child abuse or neglect. Of these, 64% suffered neglect, 16% were physically abused, 9% were sexually abused, 7% were emotionally or psychologically maltreated, and 2% were medically neglected. In addition, 15% of the victims experienced “other” types of maltreatment such as abandonment, threats of harm to the child, and congenital drug addiction (National Child Abuse and Neglect Data System, 2006). Obviously, this problem is a substantial one.
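As a quick sanity check on how the daily figures quoted above scale to the annual totals, here is a minimal back-of-the-envelope sketch in Python. It is illustrative only and uses nothing beyond the numbers cited in this paragraph; note that the maltreatment-type shares overlap (a child can suffer more than one type), so they sum to more than 100%.

```python
# Back-of-the-envelope check of the child-maltreatment figures quoted above.
# All inputs come from the paragraph itself (Children's Defense Fund, 2008;
# NCANDS, 2006); the category shares overlap, so they sum to more than 100%.

daily_confirmed = 2_421          # children confirmed abused/neglected per day
substantiated_2006 = 905_000     # substantiated reports, 2006

projected_annual = daily_confirmed * 365
print(f"Projected annual cases from the daily estimate: {projected_annual:,}")
print(f"Reported substantiated cases in 2006:           {substantiated_2006:,}")

# Share of victims by maltreatment type (NCANDS, 2006).
shares = {
    "neglect": 0.64,
    "physical abuse": 0.16,
    "sexual abuse": 0.09,
    "emotional/psychological": 0.07,
    "medical neglect": 0.02,
    "other": 0.15,
}
for label, share in shares.items():
    print(f"{label:>24}: ~{share * substantiated_2006:,.0f} children")
```

The projected annual figure (roughly 884,000) lines up closely with the 905,000 substantiated reports cited for 2006, which is the point the paragraph is making.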

In the main body, you dwell upon the topic of your paper. You present your ideas and support them with evidence. The evidence includes all the data and material you have found, analyzed, and systematized. You can support your point of view with statistical data, surveys, and the results of experiments. Your task is to show that your idea is right and to make the reader interested in the topic.

In this example, the writer analyzes the issue of child abuse: statistical data, controversies surrounding the topic, examples of the problem, and its consequences.

Several issues arise when considering the amount of child abuse that occurs annually in the United States. Child abuse is very hard to estimate because much (or most) of it is not reported. Children who are abused are unlikely to report their victimization because they may not know any better, they still love their abusers and do not want to see them taken away (or do not themselves want to be taken away from their abusers), they have been threatened into not reporting, or they do not know to whom they should report their victimizations. Still further, children may report their abuse only to find the person to whom they report does not believe them or take any action on their behalf. Continuing to muddy the waters, child abuse can be disguised as legitimate injury, particularly because young children are often somewhat uncoordinated and are still learning to accomplish physical tasks, may not know their physical limitations, and are often legitimately injured during regular play. In the end, children rarely report child abuse; most often it is an adult who makes a report based on suspicion (e.g., teacher, counselor, doctor, etc.).

Even when child abuse is reported, social service agents and investigators may not follow up or substantiate reports for a variety of reasons. Parents can pretend, lie, or cover up injuries or stories of how injuries occurred when social service agents come to investigate. Further, there is not always agreement about what should be counted as abuse by service providers and researchers. In addition, social service agencies/agents have huge caseloads and may only be able to deal with the most serious forms of child abuse, leaving the more “minor” forms of abuse unsupervised and unmanaged (and uncounted in the statistical totals).

While most laws about child abuse and neglect fall at the state level, federal legislation provides a foundation for states by identifying a minimum set of acts and behaviors that define child abuse and neglect. The Federal Child Abuse Prevention and Treatment Act (CAPTA), which stems from the Keeping Children and Families Safe Act of 2003, defines child abuse and neglect as, at minimum, “(1) any recent act or failure to act on the part of a parent or caretaker which results in death, serious physical or emotional harm, sexual abuse, or exploitation; or (2) an act or failure to act which presents an imminent risk of serious harm.”

Using these minimum standards, each state is responsible for providing its own definition of maltreatment within civil and criminal statutes. When defining types of child abuse, many states incorporate similar elements and definitions into their legal statutes. For example, neglect is often defined as failure to provide for a child’s basic needs. Neglect can encompass physical elements (e.g., failure to provide necessary food or shelter, or lack of appropriate supervision), medical elements (e.g., failure to provide necessary medical or mental health treatment), educational elements (e.g., failure to educate a child or attend to special educational needs), and emotional elements (e.g., inattention to a child’s emotional needs, failure to provide psychological care, or permitting the child to use alcohol or other drugs). Failure to meet needs does not always mean a child is neglected, as situations such as poverty, cultural values, and community standards can influence the application of legal statutes. In addition, several states distinguish between failure to provide based on financial inability and failure to provide for no apparent financial reason.

Statutes on physical abuse typically include elements of physical injury (ranging from minor bruises to severe fractures or death) as a result of punching, beating, kicking, biting, shaking, throwing, stabbing, choking, hitting (with a hand, stick, strap, or other object), burning, or otherwise harming a child. Such injury is considered abuse regardless of the intention of the caretaker. In addition, many state statutes include allowing or encouraging another person to physically harm a child (such as noted above) as another form of physical abuse in and of itself. Sexual abuse usually includes activities by a parent or caretaker such as fondling a child’s genitals, penetration, incest, rape, sodomy, indecent exposure, and exploitation through prostitution or the production of pornographic materials.

Finally, emotional or psychological abuse typically is defined as a pattern of behavior that impairs a child’s emotional development or sense of self-worth. This may include constant criticism, threats, or rejection, as well as withholding love, support, or guidance. Emotional abuse is often the most difficult to prove and, therefore, child protective services may not be able to intervene without evidence of harm to the child. Some states suggest that harm may be evidenced by an observable or substantial change in behavior, emotional response, or cognition, or by anxiety, depression, withdrawal, or aggressive behavior. At a practical level, emotional abuse is almost always present when other types of abuse are identified.

Some states include an element of substance abuse in their statutes on child abuse. Circumstances that can be considered substance abuse include (a) the manufacture of a controlled substance in the presence of a child or on the premises occupied by a child (Colorado, Indiana, Iowa, Montana, South Dakota, Tennessee, and Virginia); (b) allowing a child to be present where the chemicals or equipment for the manufacture of controlled substances are used (Arizona, New Mexico); (c) selling, distributing, or giving drugs or alcohol to a child (Florida, Hawaii, Illinois, Minnesota, and Texas); (d) use of a controlled substance by a caregiver that impairs the caregiver’s ability to adequately care for the child (Kentucky, New York, Rhode Island, and Texas); and (e) exposure of the child to drug paraphernalia (North Dakota), the criminal sale or distribution of drugs (Montana, Virginia), or drug-related activity (District of Columbia).

One of the most difficult issues with which the U.S. legal system must contend is that of allowing parents the right to use corporal punishment when disciplining a child, while not letting them cross over the line into the realm of child abuse. Some parents may abuse their children under the guise of discipline, and many instances of child abuse arise from angry parents who go too far when disciplining their children with physical punishment. Generally, state statutes use terms such as “reasonable discipline of a minor,” “causes only temporary, short-term pain,” and may cause “the potential for bruising” but not “permanent damage, disability, disfigurement or injury” to the child as ways of indicating the types of discipline behaviors that are legal. However, corporal punishment that is “excessive,” “malicious,” “endangers the bodily safety of,” or is “an intentional infliction of injury” is not allowed under most state statutes (e.g., state of Florida child abuse statute).

Most research finds that the use of physical punishment (most often spanking) is not an effective method of discipline. The literature on this issue tends to find that spanking stops misbehavior, but no more effectively than other firm measures. Further, it seems to hinder rather than improve general compliance/obedience (particularly when the child is not in the presence of the punisher). Researchers have also explained why physical punishment is not any more effective at gaining child compliance than nonviolent forms of discipline. Some of the problems that arise when parents use spanking or other forms of physical punishment include the fact that spanking does not teach what children should do, nor does it provide them with alternative behavior options should the circumstance arise again. Spanking also undermines reasoning, explanation, or other forms of parental instruction because children cannot learn, reason, or problem solve well while experiencing threat, pain, fear, or anger. Further, the use of physical punishment is inconsistent with nonviolent principles, or parental modeling. In addition, the use of spanking chips away at the bonds of affection between parents and children, and tends to induce resentment and fear. Finally, it hinders the development of empathy and compassion in children, and they do not learn to take responsibility for their own behavior (Pitzer, 1997).

One of the biggest problems with the use of corporal punishment is that it can escalate into much more severe forms of violence. Usually, parents spank because they are angry (and somewhat out of control) and they can’t think of other ways to discipline. When parents are acting as a result of emotional triggers, the notion of discipline is lost while punishment and pain become the foci.

In 2006, of the children who were found to be victims of child abuse, nearly 75% of them were first-time victims (or had not come to the attention of authorities prior). A slight majority of child abuse victims were girls—51.5%, compared to 48% of abuse victims being boys. The younger the child, the more at risk he or she is for child abuse and neglect victimization. Specifically, the rate for infants (birth to 1 year old) was approximately 24 per 1,000 children of the same age group. The victimization rate for children 1–3 years old was 14 per 1,000 children of the same age group. The abuse rate for children aged 4–7 years old declined further to 13 per 1,000 children of the same age group. African American, American Indian, and Alaska Native children, as well as children of multiple races, had the highest rates of victimization. White and Latino children had lower rates, and Asian children had the lowest rates of child abuse and neglect victimization. Regarding living arrangements, nearly 27% of victims were living with a single mother, 20% were living with married parents, while 22% were living with both parents but the marital status was unknown. (This reporting element had nearly 40% missing data, however.) Regarding disability, nearly 8% of child abuse victims had some degree of mental retardation, emotional disturbance, visual or hearing impairment, learning disability, physical disability, behavioral problems, or other medical problems. Unfortunately, data indicate that for many victims, the efforts of the child protection services system were not successful in preventing subsequent victimization. Children who had been prior victims of maltreatment were 96% more likely to experience another occurrence than those who were not prior victims. Further, child victims who were reported to have a disability were 52% more likely to experience recurrence than children without a disability. Finally, the oldest victims (16–21 years of age) were the least likely to experience a recurrence, and were 51% less likely to be victimized again than were infants (younger than age 1) (National Child Abuse and Neglect Data System, 2006).

Child fatalities are the most tragic consequence of maltreatment. Yet, each year, children die from abuse and neglect. In 2006, an estimated 1,530 children in the United States died due to abuse or neglect. The overall rate of child fatalities was 2 deaths per 100,000 children. More than 40% of child fatalities were attributed to neglect, but physical abuse also was a major contributor. Approximately 78% of the children who died due to child abuse and neglect were younger than 4 years old, and infant boys (younger than 1) had the highest rate of fatalities at 18.5 deaths per 100,000 boys of the same age in the national population. Infant girls had a rate of 14.7 deaths per 100,000 girls of the same age (National Child Abuse and Neglect Data System, 2006).

One question to be addressed regarding child fatalities is why infants have such a high rate of death when compared to toddlers and adolescents. Children under 1 year old pose an immense amount of responsibility for their caretakers: they are completely dependent and need constant attention. Children this age are needy, impulsive, and not amenable to verbal control or effective communication. This can easily overwhelm vulnerable parents. Another difficulty associated with infants is that they are physically weak and small. Injuries to infants can be fatal, while similar injuries to older children might not be. The most common cause of death in children less than 1 year is cerebral trauma (often the result of shaken-baby syndrome). Exasperated parents can deliver shakes or blows without realizing how little it takes to cause irreparable or fatal damage to an infant. Research informs us that two of the most common triggers for fatal child abuse are crying that will not cease and toileting accidents. Both of these circumstances are common in infants and toddlers whose only means of communication often is crying, and who are limited in mobility and cannot use the toilet. Finally, very young children cannot assist in injury diagnoses. Children who have been injured due to abuse or neglect often cannot communicate to medical professionals about where it hurts, how it hurts, and so forth. Also, nonfatal injuries can turn fatal in the absence of care by neglectful parents or parents who do not want medical professionals to possibly identify an injury as being the result of abuse.

Estimates reveal that nearly 80% of perpetrators of child abuse were parents of the victim. Other relatives accounted for nearly 7%, and unmarried partners of parents made up 4% of perpetrators. Of those perpetrators that were parents, over 90% were biological parents, 4% were stepparents, and 0.7% were adoptive parents. Of this group, approximately 58% of perpetrators were women and 42% were men. Women perpetrators are typically younger than men. The average age for women abusers was 31 years old, while for men the average was 34 years old. Forty percent of women who abused were younger than 30 years of age, compared with 33% of men being under 30. The racial distribution of perpetrators is similar to that of victims. Fifty-four percent were white, 21% were African American, and 20% were Hispanic/Latino (National Child Abuse and Neglect Data System, 2006).

There are many factors that are associated with child abuse. Some of the more common/well-accepted explanations are individual pathology, parent–child interaction, past abuse in the family (or social learning), situational factors, and cultural support for physical punishment along with a lack of cultural support for helping parents here in the United States.

The first explanation centers on the individual pathology of a parent or caretaker who is abusive. This theory focuses on the idea that people who abuse their children have something wrong with their individual personality or biological makeup. Such psychological pathologies may include having anger control problems; being depressed or having post-partum depression; having a low tolerance for frustration (e.g., children can be extremely frustrating: they don’t always listen; they constantly push the line of how far they can go; and once the line has been established, they are constantly treading on it to make sure it hasn’t moved. They are dependent and self-centered, so caretakers have very little privacy or time to themselves); being rigid (e.g., having no tolerance for differences—for example, what if your son wanted to play with dolls? A rigid father would not let him, laugh at him for wanting to, punish him when he does, etc.); having deficits in empathy (parents who cannot put themselves in the shoes of their children cannot fully understand what their children need emotionally); or being disorganized, inefficient, and ineffectual. (Parents who are unable to manage their own lives are unlikely to be successful at managing the lives of their children, and since many children want and need limits, these parents are unable to set them or adhere to them.)

Biological pathologies that may increase the likelihood of someone becoming a child abuser include having substance abuse or dependence problems, or having persistent or reoccurring physical health problems (especially health problems that can be extremely painful and can cause a person to become more self-absorbed, both qualities that can give rise to a lack of patience, lower frustration tolerance, and increased stress).

The second explanation for child abuse centers on the interaction between the parent and the child, noting that certain types of parents are more likely to abuse, and certain types of children are more likely to be abused, and when these less-skilled parents are coupled with these more difficult children, child abuse is the most likely to occur. Discussion here focuses on what makes a parent less skilled, and what makes a child more difficult. Characteristics of unskilled parents are likely to include such traits as only pointing out what children do wrong and never giving any encouragement for good behavior, and failing to be sensitive to the emotional needs of children. Less skilled parents tend to have unrealistic expectations of children. They may engage in role reversal— where the parents make the child take care of them—and view the parent’s happiness and well-being as the responsibility of the child. Some parents view the parental role as extremely stressful and experience little enjoyment from being a parent. Finally, less-skilled parents tend to have more negative perceptions regarding their child(ren). For example, perhaps the child has a different shade of skin than they expected and this may disappoint or anger them, they may feel the child is being manipulative (long before children have this capability), or they may view the child as the scapegoat for all the parents’ or family’s problems. Theoretically, parents with these characteristics would be more likely to abuse their children, but if they are coupled with having a difficult child, they would be especially likely to be abusive. So, what makes a child more difficult? Certainly, through no fault of their own, children may have characteristics that are associated with child care that is more demanding and difficult than in the “normal” or “average” situation. Such characteristics can include having physical and mental disabilities (autism, attention deficit hyperactivity disorder [ADHD], hyperactivity, etc.); the child may be colicky, frequently sick, be particularly needy, or cry more often. In addition, some babies are simply unhappier than other babies for reasons that cannot be known. Further, infants are difficult even in the best of circumstances. They are unable to communicate effectively, and they are completely dependent on their caretakers for everything, including eating, diaper changing, moving around, entertainment, and emotional bonding. Again, these types of children, being more difficult, are more likely to be victims of child abuse.

Nonetheless, each of these types of parents and children alone cannot explain the abuse of children, but it is the interaction between them that becomes the key. Unskilled parents may produce children that are happy and not as needy, and even though they are unskilled, they do not abuse because the child takes less effort. At the same time, children who are more difficult may have parents who are skilled and are able to handle and manage the extra effort these children take with aplomb. However, risks for child abuse increase when unskilled parents must contend with difficult children.

Social learning or past abuse in the family is a third common explanation for child abuse. Here, the theory concentrates not only on what children learn when they see or experience violence in their homes, but additionally on what they do not learn as a result of these experiences. Social learning theory in the context of family violence stresses that if children are abused or see abuse (toward siblings or a parent), those interactions and violent family members become the representations and role models for their future familial interactions. In this way, what children learn is just as important as what they do not learn. Children who witness or experience violence may learn that this is the way parents deal with children, or that violence is an acceptable method of child rearing and discipline. They may think when they become parents that “violence worked on me when I was a child, and I turned out fine.” They may learn unhealthy relationship interaction patterns; children may witness the negative interactions of parents and they may learn the maladaptive or violent methods of expressing anger, reacting to stress, or coping with conflict.

What is equally as important, though, is that they are unlikely to learn more acceptable and nonviolent ways of rearing children, interacting with family members, and working out conflict. Here it may happen that an adult who was abused as a child would like to be nonviolent toward his or her own children, but when the chips are down and the child is misbehaving, this abused-child-turned-adult does not have a repertoire of nonviolent strategies to try. This parent is more likely to fall back on what he or she knows as methods of discipline.

Something important to note here is that not all abused children grow up to become abusive adults. Children who break the cycle were often able to establish and maintain one healthy emotional relationship with someone during their childhoods (or period of young adulthood). For instance, they may have received emotional support from a nonabusing parent, or they received social support and had a positive relationship with another adult during their childhood (e.g., teacher, coach, minister, neighbor, etc.). Abused children who participate in therapy during some period of their lives can often break the cycle of violence. In addition, adults who were abused but are able to form an emotionally supportive and satisfying relationship with a mate can make the transition to being nonviolent in their family interactions.

Moving on to a fourth familiar explanation for child abuse, there are some common situational factors that influence families and parents and increase the risks for child abuse. Typically, these are factors that increase family stress or social isolation. Specifically, such factors may include receiving public assistance or having low socioeconomic status (a combination of low income and low education). Other factors include having family members who are unemployed, underemployed (working in a job that requires lower qualifications than an individual possesses), or employed only part time. These financial difficulties cause great stress for families in meeting the needs of the individual members. Other stress-inducing familial characteristics are single-parent households and larger family size. Finally, social isolation can be devastating for families and family members. Having friends to talk to, who can be relied upon, and with whom kids can be dropped off occasionally is tremendously important for personal growth and satisfaction in life. In addition, social isolation and stress can cause individuals to be quick to lose their tempers, as well as cause people to be less rational in their decision making and to make mountains out of mole hills. These situations can lead families to be at greater risk for child abuse.

Finally, cultural views and supports (or lack thereof) can lead to greater amounts of child abuse in a society such as the United States. One such cultural view is that of societal support for physical punishment. This is problematic because there are similarities between the way criminals are dealt with and the way errant children are handled. The use of capital punishment is advocated for seriously violent criminals, and people are quick to use such idioms as “spare the rod and spoil the child” when it comes to the discipline or punishment of children. In fact, it was not until quite recently that parenting books began to encourage parents to use other strategies than spanking or other forms of corporal punishment in the discipline of their children. Only recently, the American Academy of Pediatrics has come out and recommended that parents do not spank or use other forms of violence on their children because of the deleterious effects such methods have on youngsters and their bonds with their parents. Nevertheless, regardless of recommendations, the culture of corporal punishment persists.

Another cultural view in the United States that can give rise to greater incidents of child abuse is the belief that after getting married, couples of course should want and have children. Culturally, Americans consider that children are a blessing, raising kids is the most wonderful thing a person can do, and everyone should have children. Along with this notion is the idea that motherhood is always wonderful; it is the most fulfilling thing a woman can do; and the bond between a mother and her child is strong, glorious, and automatic—all women love being mothers. Thus, culturally (and theoretically), society nearly insists that married couples have children and that they will love having children. But, after children are born, there is not much support for couples who have trouble adjusting to parenthood, or who do not absolutely love their new roles as parents. People look askance at parents who need help, and cannot believe parents who say anything negative about parenthood. As such, theoretically, society has set up a situation where couples are strongly encouraged to have kids, are told they will love kids, but then society turns a blind or disdainful eye when these same parents need emotional, financial, or other forms of help or support. It is these types of cultural viewpoints that increase the risks for child abuse in society.

The consequences of child abuse are tremendous and long lasting. Research has shown that the traumatic experience of childhood abuse is life changing. These costs may surface during adolescence, or they may not become evident until abused children have grown up and become abusing parents or abused spouses. Early identification and treatment is important to minimize these potential long-term effects. Whenever children say they have been abused, it is imperative that they be taken seriously and their abuse be reported. Suspicions of child abuse must be reported as well. If there is a possibility that a child is or has been abused, an investigation must be conducted.

Children who have been abused may exhibit traits such as the inability to love or have faith in others. This often translates into adults who are unable to establish lasting and stable personal relationships. These individuals have trouble with physical closeness and touching as well as emotional intimacy and trust. Further, these qualities tend to cause a fear of entering into new relationships, as well as the sabotaging of any current ones.

Psychologically, children who have been abused tend to have poor self-images or are passive, withdrawn, or clingy. They may be angry individuals who are filled with rage, anxiety, and a variety of fears. They are often aggressive, disruptive, and depressed. Many abused children have flashbacks and nightmares about the abuse they have experienced, and this may cause sleep problems as well as drug and alcohol problems. Posttraumatic stress disorder (PTSD) and antisocial personality disorder are both typical among maltreated children. Research has also shown that most abused children fail to reach “successful psychosocial functioning,” and are thus not resilient and do not resume a “normal life” after the abuse has ended.

Socially (and likely because of these psychological injuries), abused children have trouble in school, will have difficulty getting and remaining employed, and may commit a variety of illegal or socially inappropriate behaviors. Many studies have shown that victims of child abuse are likely to participate in high-risk behaviors such as alcohol or drug abuse, the use of tobacco, and high-risk sexual behaviors (e.g., unprotected sex, large numbers of sexual partners). Later in life, abused children are more likely to have been arrested and homeless. They are also less able to defend themselves in conflict situations and guard themselves against repeated victimizations.

Medically, abused children likely will experience health problems due to the high frequency of physical injuries they receive. In addition, abused children experience a great deal of emotional turmoil and stress, which can also have a significant impact on their physical condition. These health problems are likely to continue occurring into adulthood. Some of these longer-lasting health problems include headaches; eating problems; problems with toileting; and chronic pain in the back, stomach, chest, and genital areas. Some researchers have noted that abused children may experience neurological impairment and problems with intellectual functioning, while others have found a correlation between abuse and heart, lung, and liver disease, as well as cancer (Thomas, 2004).

Victims of sexual abuse show an alarming number of disturbances as adults. Some dislike and avoid sex, or experience sexual problems or disorders, while other victims appear to enjoy sexual activities that are self-defeating or maladaptive—normally called “dysfunctional sexual behavior”—and have many sexual partners.

Abused children also experience a wide variety of developmental delays. Many do not reach physical, cognitive, or emotional developmental milestones at the typical time, and some never accomplish what they are supposed to during childhood socialization. In the next section, these developmental delays are discussed as a means of identifying children who may be abused.

There are two primary ways of identifying children who are abused: spotting and evaluating physical injuries, and detecting and appraising developmental delays. Distinguishing physical injuries due to abuse can be difficult, particularly among younger children who are likely to get hurt or receive injuries while they are playing and learning to become ambulatory. Nonetheless, there are several types of wounds that children are unlikely to give themselves during their normal course of play and exploration. These less likely injuries may signal instances of child abuse.

While it is true that children are likely to get bruises, particularly when they are learning to walk or crawl, bruises on infants are not normal. The backs of the legs, the upper arms, and the chest, neck, head, or genitals are also locations where bruises are unlikely to occur during normal childhood activity. Further, bruises with clean patterns, such as hand prints, buckle prints, or hanger marks (to name a few), are good examples of the types of bruises children do not give themselves.

Another area of physical injury where the source of the injury can be difficult to detect is fractures. Again, children fall out of trees, or crash their bikes, and can break limbs. These can be normal parts of growing up. However, fractures in infants less than 12 months old are particularly suspect, as infants are unlikely to be able to accomplish the types of movement necessary to actually break a leg or an arm. Further, multiple fractures, particularly more than one on a bone, should be examined more closely. Spiral or torsion fractures (when the bone is broken by twisting) are suspect because when children break their bones due to play injuries, the fractures are usually some other type (e.g., linear, oblique, compacted). In addition, when parents don’t know about the fracture(s) or how it occurred, abuse should be considered, because when children get these types of injuries, they need comfort and attention.

Head and internal injuries are also those that may signal abuse. Serious blows to the head cause internal head injuries, and this is very different from the injuries that result from bumping into things. Abused children are also likely to experience internal injuries like those to the abdomen, liver, kidney, and bladder. They may suffer a ruptured spleen, or intestinal perforation. These types of damages rarely happen by accident.

Burns are another type of physical injury that can happen by accident or by abuse. Nevertheless, there are ways to tell these types of burn injuries apart. The types of burns that should be examined and investigated are those where the burns are in particular locations. Burns to the bottom of the feet, genitals, abdomen, or other inaccessible spots should be closely considered. Burns of the whole hand or those to the buttocks are also unlikely to happen as a result of an accident.

Turning to the detection and appraisal of developmental delays, one can assess possible abuse by considering what children of various ages should be able to accomplish, and then noting whether a child is delayed and on how many milestones he or she is behind schedule. Importantly, a few delays in reaching milestones can be expected, since children develop individually and not always according to the norm. Nonetheless, when children are abused, their development is likely to be delayed in numerous areas and across many milestones.

As children develop and grow, they should be able to crawl, walk, run, talk, control going to the bathroom, write, set priorities, plan ahead, trust others, make friends, develop a good self-image, differentiate between feeling and behavior, and get their needs met in appropriate ways. As such, when children do not accomplish these feats, their circumstances should be examined.

Infants who are abused or neglected typically develop what is termed failure to thrive syndrome. This syndrome is characterized by slow, inadequate growth, or not “filling out” physically. They have a pale, colorless complexion and dull eyes. They are not likely to spend much time looking around, and nothing catches their eyes. They may show other signs of lack of nutrition such as cuts, bruises that do not heal in a timely way, and discolored fingernails. They are also not trusting and may not cry much, as they are not expecting to have their needs met. Older infants may not have developed any language skills, or these developments are quite slow. This includes both verbal and nonverbal means of communication.

Toddlers who are abused often become hypervigilant about their environments and others’ moods. They are more outwardly focused than a typical toddler (who is quite self-centered) and may be unable to separate themselves as individuals, or consider themselves as distinct beings. In this way, abused toddlers cannot focus on tasks at hand because they are too concerned about others’ reactions. They don’t play with toys, have no interest in exploration, and seem unable to enjoy life. They are likely to accept losses with little reaction, and may have age-inappropriate knowledge of sex and sexual relations. Finally, toddlers, whether they are abused or not, begin to mirror their parents’ behaviors. Thus, toddlers who are abused may mimic the abuse when they are playing with dolls or “playing house.”

Developmental delays can also be detected among abused young adolescents. Some signs include the failure to learn cause and effect, since their parents are so inconsistent. They have no energy for learning and have not developed beyond one- or two-word commands. They probably cannot follow complicated directions (such as two to three tasks per instruction), and they are unlikely to be able to think for themselves. Typically, they have learned that failure is totally unacceptable, but they are more concerned with the teacher’s mood than with learning and listening to instruction. Finally, they are apt to have been inadequately toilet trained and thus may be unable to control their bladders.

Older adolescents, because they are likely to have been abused for a longer period of time, continue to get further and further behind in their developmental achievements. Abused children this age become family nurturers. They take care of their parents and cater to their parents’ needs, rather than the other way around. In addition, they probably take care of any younger siblings and do the household chores. Because of these default responsibilities, they usually do not participate in school activities; they frequently miss days at school; and they have few, if any, friends. Because they have become so hypervigilant and have increasingly delayed development, they lose interest in and become disillusioned with education. They develop low self-esteem and little confidence, but seem old for their years. Children this age who are abused are still likely to be unable to control their bladders and may have frequent toileting accidents.

Other developmental delays can occur and be observed in abused and neglected children of any age. For example, malnutrition and withdrawal can be noticed in infants through teenagers. Maltreated children frequently have persistent or untreated illnesses, and these can become permanent disabilities if medical conditions go untreated for a long enough time. Another example can be the consequences of neurological damage. Beyond being a medical issue, this type of damage can cause problems with social behavior and impulse control, which, again, can be discerned in various ages of children.

Once child abuse is suspected, law enforcement officers, child protection workers, or various other practitioners may need to interview the child about the abuse or neglect he or she may have suffered. Interviewing children can be extremely difficult because children at various stages of development can remember only certain parts or aspects of the events in their lives. Also, interviewers must be careful that they do not put ideas or answers into the heads of the children they are interviewing. There are several general recommendations when interviewing children about the abuse they may have experienced. First, interviewers must acknowledge that even when children are abused, they likely still love their parents. They do not want to be taken away from their parents, nor do they want to see their parents get into trouble. Interviewers must not blame the parents or be judgmental about them or the child’s family. Beyond that, interviews should take place in a safe, neutral location. Interviewers can use dolls and role-play to help children express the types of abuse of which they may be victims.

Finally, interviewers must ask age-appropriate questions. For example, 3-year-olds can probably only answer questions about what happened and who was involved. Four- to five-year-olds can also discuss where the incidents occurred. Along with what, who, and where, 6- to 8-year-olds can talk about the element of time, or when the abuse occurred. Nine- to 10-year-olds are able to add commentary about the number of times the abuse occurred. Finally, 11-year-olds and older children can additionally inform interviewers about the circumstances of abusive instances.
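The age-appropriate questioning guidance above can be summarized as a simple lookup. The sketch below is illustrative only: the table and function names are hypothetical, while the age cut-offs and question elements are taken directly from the preceding paragraph.

```python
# Illustrative summary of the age-appropriate interviewing guidance above.
# Ages and question elements follow the paragraph; the names and structure
# are hypothetical, for summary purposes only.

QUESTION_ELEMENTS_BY_AGE = [
    (3,  ["what happened", "who was involved"]),
    (4,  ["what happened", "who was involved", "where it occurred"]),
    (6,  ["what", "who", "where", "when it occurred"]),
    (9,  ["what", "who", "where", "when", "how many times"]),
    (11, ["what", "who", "where", "when", "how many times",
          "circumstances of the abusive instances"]),
]

def question_elements(age: int) -> list[str]:
    """Return the elements a child of this age can typically speak to."""
    elements: list[str] = []
    for min_age, items in QUESTION_ELEMENTS_BY_AGE:
        if age >= min_age:
            elements = items
    return elements

print(question_elements(7))   # what, who, where, when it occurred
```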

A conclusion is not a summary of what the writer has already mentioned. On the contrary, it is the final point made. Drawing on every detail of the investigation, the researcher makes the concluding point. In this part of the paper, you bring your research to a close and persuade the reader of your point of view.

Never add any new information in the conclusion. You can present solutions to the problem and dwell upon the results, but only if this information has already been mentioned in the main body.

Child advocates recommend a variety of strategies to aid families and children experiencing abuse. These recommendations tend to focus on societal efforts as well as more individual efforts. One common strategy advocated is the use of public service announcements that encourage individuals to report any suspected child abuse. Currently, many mandatory reporters (those required by law to report abuse such as teachers, doctors, and social service agency employees) and members of communities feel that child abuse should not be reported unless there is substantial evidence that abuse is indeed occurring. Child advocates stress that this notion should be changed, and that people should report child abuse even if it is only suspected. Public service announcements should stress that if people report suspected child abuse, the worst that can happen is that they might be wrong, but in the grander scheme of things that is really not so bad.

Child advocates also stress that greater interagency cooperation is needed. This cooperation should be evident between women’s shelters, child protection agencies, programs for at-risk children, medical agencies, and law enforcement officers. These agencies typically do not share information, and if they did, more instances of child abuse would come to the attention of various authorities and could be investigated and managed. Along these lines, child protection agencies and programs should receive more funding. When budgets are cut, social services are often the first things to go or to get less financial support. Child advocates insist that with more resources, child protection agencies could hire more workers, handle more cases, conduct more investigations, and follow up with more children and families.

Continuing, more educational efforts must be initiated about issues such as punishment and discipline styles and strategies; having greater respect for children; as well as informing the community about what child abuse is, and how to recognize it. In addition, Americans must alter the cultural orientation about child bearing and child rearing. Couples who wish to remain child-free must be allowed to do so without disdain. And, it must be acknowledged that raising children is very difficult, is not always gloriously wonderful, and that parents who seek help should be lauded and not criticized. These kinds of efforts can help more children to be raised in nonviolent, emotionally satisfying families, and thus become better adults.

Bibliography

When you write a paper, make sure you are aware of all the formatting requirements. Incorrect formatting can lower your mark, so do not underestimate the importance of this part.

Organizing your bibliography is quite a tedious and time-consuming task. Still, you need to do it flawlessly. For this reason, study all the standards you need to meet, or ask professionals to help you with it. All the commas, colons, brackets, etc. matter. They truly do.
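As an illustration of how much the punctuation and ordering matter, here is a minimal sketch that assembles a book entry in the APA style used in the bibliography below. The function and field names are hypothetical; the example data come from the Bancroft and Silverman entry listed further down.

```python
# Illustrative sketch: assembling an APA-style book reference from its parts.
# The function and field names are hypothetical; the point is that every
# comma, period, and set of parentheses is part of the required format.

def apa_book_reference(authors: str, year: int, title: str,
                       city: str, publisher: str) -> str:
    """Return a book entry in the APA style used in the bibliography below."""
    return f"{authors} ({year}). {title}. {city}: {publisher}."

print(apa_book_reference("Bancroft, L., & Silverman, J. G.", 2002,
                         "The batterer as parent", "Thousand Oaks, CA", "Sage"))
# Bancroft, L., & Silverman, J. G. (2002). The batterer as parent. Thousand Oaks, CA: Sage.
```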

Bibliography:

  • American Academy of Pediatrics: https://www.aap.org/
  • Bancroft, L., & Silverman, J. G. (2002). The batterer as parent. Thousand Oaks, CA: Sage.
  • Child Abuse Prevention and Treatment Act, 42 U.S.C.A. § 5106g (1998).
  • Childhelp: Child Abuse Statistics: https://www.childhelp.org/child-abuse-statistics/
  • Children’s Defense Fund: https://www.childrensdefense.org/
  • Child Stats.gov: https://www.childstats.gov/
  • Child Welfare League of America: https://www.cwla.org/
  • Crosson-Tower, C. (2008). Understanding child abuse and neglect (7th ed.). Boston: Allyn & Bacon.
  • DeBecker, G. (1999). Protecting the gift: Keeping children and teenagers safe (and parents sane). New York: Bantam Dell.
  • Family Research Laboratory at the University of New Hampshire: https://cola.unh.edu/family-research-laboratory
  • Guterman, N. B. (2001). Stopping child maltreatment before it starts: Emerging horizons in early home visitation services. Thousand Oaks, CA: Sage.
  • Herman, J. L. (2000). Father-daughter incest. Cambridge, MA: Harvard University Press.
  • Medline Plus, Child Abuse: https://medlineplus.gov/childabuse.html
  • Myers, J. E. B. (Ed.). (1994). The backlash: Child protection under fire. Newbury Park, CA: Sage.
  • National Center for Missing and Exploited Children: https://www.missingkids.org/home
  • National Child Abuse and Neglect Data System. (2006). Child maltreatment 2006: Reports from the states to the National Child Abuse and Neglect Data System. Washington, DC: U.S. Department of Health and Human Services, Administration for Children and Families.
  • New York University Silver School of Social Work: https://socialwork.nyu.edu/
  • Pitzer, R. L. (1997). Corporal punishment in the discipline of children in the home: Research update for practitioners. Paper presented at the National Council on Family Relations Annual Conference, Washington, DC.
  • RAND, Child Abuse and Neglect: https://www.rand.org/topics/child-abuse-and-neglect.html
  • Richards, C. E. (2001). The loss of innocents: Child killers and their victims. Wilmington, DE: Scholarly Resources.
  • Straus, M. A. (2001). Beating the devil out of them: Corporal punishment in American families and its effects on children. Edison, NJ: Transaction.
  • Thomas, P. M. (2004). Protection, dissociation, and internal roles: Modeling and treating the effects of child abuse. Review of General Psychology, 7(15).
  • U.S. Department of Health and Human Services, Administration for Children and Families: https://www.acf.hhs.gov/

Your Pathway to Academic Excellence with iResearchNet

Embarking on your academic journey with iResearchNet not only sets the foundation for your success but also ensures you navigate the complexities of research paper writing with ease and confidence. Our comprehensive suite of services, from expertly crafted research papers to an array of exclusive extras, is designed with your academic needs and aspirations in mind. By choosing iResearchNet, you’re not just securing a service; you’re investing in your future, in a partnership that values excellence, integrity, and your personal academic goals.

We invite you to take the first step towards transforming your academic challenges into opportunities for growth and learning. Starting is simple, and the benefits are immense. With iResearchNet, you gain access to a world of expertise, personalized support, and resources tailored to elevate your research and writing to the highest standard. Our commitment to quality, alongside our promise of customization, affordability, and timely delivery, ensures that every paper we deliver meets and exceeds your expectations.

Whether you’re seeking to inspire your academic journey with our diverse research paper examples, require the specialized assistance of our expert writers, or wish to enhance your experience with our additional services, iResearchNet is here to support you every step of the way. Our process is streamlined for your convenience, ensuring that from the moment you place your order to the final receipt of your paper, your journey is smooth, transparent, and fully aligned with your academic objectives.

Embrace the opportunity to excel, to stand out, and to make your academic work truly remarkable. Choose iResearchNet today, and let us be your partner in achieving academic excellence. Begin your journey by placing your order, and experience firsthand the impact of professional support and exceptional writing on your academic endeavors. Your success is our priority, and with iResearchNet, it’s within reach.

ORDER HIGH QUALITY CUSTOM PAPER


113 Great Research Paper Topics


One of the hardest parts of writing a research paper can be just finding a good topic to write about. Fortunately, we've done the hard work for you and have compiled a list of 113 interesting research paper topics. They've been organized into ten categories and cover a wide range of subjects so you can easily find the best topic for you.

In addition to the list of good research topics, we've included advice on what makes a good research paper topic and how you can use your topic to start writing a great paper.

What Makes a Good Research Paper Topic?

Not all research paper topics are created equal, and you want to make sure you choose a great topic before you start writing. Below are the three most important factors to consider to make sure you choose the best research paper topics.

#1: It's Something You're Interested In

A paper is always easier to write if you're interested in the topic, and you'll be more motivated to do in-depth research and write a paper that really covers the entire subject. Even if a certain research paper topic is getting a lot of buzz right now or other people seem interested in writing about it, don't feel tempted to make it your topic unless you genuinely have some sort of interest in it as well.

#2: There's Enough Information to Write a Paper

Even if you come up with the absolute best research paper topic and you're so excited to write about it, you won't be able to produce a good paper if there isn't enough research about the topic. This can happen for very specific or specialized topics, as well as topics that are too new to have enough research done on them at the moment. Easy research paper topics will always be topics with enough information to write a full-length paper.

Trying to write a research paper on a topic that doesn't have much research on it is incredibly hard, so before you decide on a topic, do a bit of preliminary searching and make sure you'll have all the information you need to write your paper.

#3: It Fits Your Teacher's Guidelines

Don't get so carried away looking at lists of research paper topics that you forget any requirements or restrictions your teacher may have put on research topic ideas. If you're writing a research paper on a health-related topic, deciding to write about the impact of rap on the music scene probably won't be allowed, but there may be some sort of leeway. For example, if you're really interested in current events but your teacher wants you to write a research paper on a history topic, you may be able to choose a topic that fits both categories, like exploring the relationship between the US and North Korea. No matter what, always get your research paper topic approved by your teacher first before you begin writing.

113 Good Research Paper Topics

Below are 113 good research topics to help you get started on your paper. We've organized them into ten categories to make it easier to find the type of research paper topics you're looking for.

Arts/Culture

  • Discuss the main differences in art from the Italian Renaissance and the Northern Renaissance.
  • Analyze the impact a famous artist had on the world.
  • How is sexism portrayed in different types of media (music, film, video games, etc.)? Has the amount/type of sexism changed over the years?
  • How has the music of slaves brought over from Africa shaped modern American music?
  • How has rap music evolved in the past decade?
  • How has the portrayal of minorities in the media changed?

Current Events

  • What have been the impacts of China's one child policy?
  • How have the goals of feminists changed over the decades?
  • How has the Trump presidency changed international relations?
  • Analyze the history of the relationship between the United States and North Korea.
  • What factors contributed to the current decline in the rate of unemployment?
  • What have been the impacts of states which have increased their minimum wage?
  • How do US immigration laws compare to immigration laws of other countries?
  • How have the US's immigration laws changed in the past few years/decades?
  • How has the Black Lives Matter movement affected discussions and views about racism in the US?
  • What impact has the Affordable Care Act had on healthcare in the US?
  • What factors contributed to the UK deciding to leave the EU (Brexit)?
  • What factors contributed to China becoming an economic power?
  • Discuss the history of Bitcoin or other cryptocurrencies.

Education

  • Do students in schools that eliminate grades do better in college and their careers?
  • Do students from wealthier backgrounds score higher on standardized tests?
  • Do students who receive free meals at school get higher grades compared to when they weren't receiving a free meal?
  • Do students who attend charter schools score higher on standardized tests than students in public schools?
  • Do students learn better in same-sex classrooms?
  • How does giving each student access to an iPad or laptop affect their studies?
  • What are the benefits and drawbacks of the Montessori Method?
  • Do children who attend preschool do better in school later on?
  • What was the impact of the No Child Left Behind act?
  • How does the US education system compare to education systems in other countries?
  • What impact do mandatory physical education classes have on students' health?
  • Which methods are most effective at reducing bullying in schools?
  • Do homeschoolers who attend college do as well as students who attended traditional schools?
  • Does offering tenure increase or decrease quality of teaching?
  • How does college debt affect future life choices of students?
  • Should graduate students be able to form unions?

Ethics

  • What are different ways to lower gun-related deaths in the US?
  • How and why have divorce rates changed over time?
  • Is affirmative action still necessary in education and/or the workplace?
  • Should physician-assisted suicide be legal?
  • How has stem cell research impacted the medical field?
  • How can human trafficking be reduced in the United States/world?
  • Should people be able to donate organs in exchange for money?

Government

  • Which types of juvenile punishment have proven most effective at preventing future crimes?
  • Has the increase in US airport security made passengers safer?
  • Analyze the immigration policies of certain countries and how they are similar and different from one another.
  • Several states have legalized recreational marijuana. What positive and negative impacts have they experienced as a result?
  • Do tariffs increase the number of domestic jobs?
  • Which prison reforms have proven most effective?
  • Should governments be able to censor certain information on the internet?
  • Which methods/programs have been most effective at reducing teen pregnancy?

Health

  • What are the benefits and drawbacks of the Keto diet?
  • How effective are different exercise regimes for losing weight and maintaining weight loss?
  • How do the healthcare plans of various countries differ from each other?
  • What are the most effective ways to treat depression?
  • What are the pros and cons of genetically modified foods?
  • Which methods are most effective for improving memory?
  • What can be done to lower healthcare costs in the US?
  • What factors contributed to the current opioid crisis?
  • Analyze the history and impact of the HIV/AIDS epidemic.
  • Are low-carbohydrate or low-fat diets more effective for weight loss?
  • How much exercise should the average adult be getting each week?
  • Which methods are most effective to get parents to vaccinate their children?
  • What are the pros and cons of clean needle programs?
  • How does stress affect the body?

History

  • Discuss the history of the conflict between Israel and the Palestinians.
  • What were the causes and effects of the Salem Witch Trials?
  • Who was responsible for the Iran-Contra situation?
  • How have New Orleans and the government's response to natural disasters changed since Hurricane Katrina?
  • What events led to the fall of the Roman Empire?
  • What were the impacts of British rule in India?
  • Was the atomic bombing of Hiroshima and Nagasaki necessary?
  • What were the successes and failures of the women's suffrage movement in the United States?
  • What were the causes of the Civil War?
  • How did Abraham Lincoln's assassination impact the country and reconstruction after the Civil War?
  • Which factors contributed to the colonies winning the American Revolution?
  • What caused Hitler's rise to power?
  • Discuss how a specific invention impacted history.
  • What led to Cleopatra's fall as ruler of Egypt?
  • How has Japan changed and evolved over the centuries?
  • What were the causes of the Rwandan genocide?

Religion

  • Why did Martin Luther decide to split with the Catholic Church?
  • Analyze the history and impact of a well-known cult (Jonestown, Manson family, etc.)
  • How did the sexual abuse scandal impact how people view the Catholic Church?
  • How has the Catholic church's power changed over the past decades/centuries?
  • What are the causes behind the rise in atheism/agnosticism in the United States?
  • What influences in Siddhartha's life resulted in him becoming the Buddha?
  • How has media portrayal of Islam/Muslims changed since September 11th?

Science/Environment

  • How has the earth's climate changed in the past few decades?
  • How has the use and elimination of DDT affected bird populations in the US?
  • Analyze how the number and severity of natural disasters have increased in the past few decades.
  • Analyze deforestation rates in a certain area or globally over a period of time.
  • How have past oil spills changed regulations and cleanup methods?
  • How has the Flint water crisis changed water regulation safety?
  • What are the pros and cons of fracking?
  • What impact has the Paris Climate Agreement had so far?
  • What have NASA's biggest successes and failures been?
  • How can we improve access to clean water around the world?
  • Does ecotourism actually have a positive impact on the environment?
  • Should the US rely on nuclear energy more?
  • What can be done to save amphibian species currently at risk of extinction?
  • What impact has climate change had on coral reefs?
  • How are black holes created?

Technology

  • Are teens who spend more time on social media more likely to suffer anxiety and/or depression?
  • How will the loss of net neutrality affect internet users?
  • Analyze the history and progress of self-driving vehicles.
  • How has the use of drones changed surveillance and warfare methods?
  • Has social media made people more or less connected?
  • What progress has currently been made with artificial intelligence?
  • Do smartphones increase or decrease workplace productivity?
  • What are the most effective ways to use technology in the classroom?
  • How is Google search affecting our intelligence?
  • When is the best age for a child to begin owning a smartphone?
  • Has frequent texting reduced teen literacy rates?


How to Write a Great Research Paper

Even great research paper topics won't give you a great research paper if you don't hone your topic before and during the writing process. Follow these three tips to turn good research paper topics into great papers.

#1: Figure Out Your Thesis Early

Before you start writing a single word of your paper, you first need to know what your thesis will be. Your thesis is a statement that explains what you intend to prove/show in your paper. Every sentence in your research paper will relate back to your thesis, so you don't want to start writing without it!

For example, if you're writing a research paper on whether students learn better in same-sex classrooms, your thesis might be "Research has shown that elementary-age students in same-sex classrooms score higher on standardized tests and report feeling more comfortable in the classroom."

If you're writing a paper on the causes of the Civil War, your thesis might be "While the dispute between the North and South over slavery is the most well-known cause of the Civil War, other key causes include differences in the economies of the North and South, states' rights, and territorial expansion."

#2: Back Every Statement Up With Research

Remember, this is a research paper you're writing, so you'll need to use lots of research to make your points. Every statement you give must be backed up with research, properly cited the way your teacher requested. You're allowed to include opinions of your own, but they must also be supported by the research you give.

#3: Do Your Research Before You Begin Writing

You don't want to start writing your research paper and then learn that there isn't enough research to back up the points you're making, or, even worse, that the research contradicts the points you're trying to make!

Get most of your research on your good research topics done before you begin writing. Then use the research you've collected to create a rough outline of what your paper will cover and the key points you're going to make. This will help keep your paper clear and organized, and it'll ensure you have enough research to produce a strong paper.


Comparing Paper and Computer Testing: 7 Key Research Studies


Do the computer-based exams that are increasingly prevalent in K-12 education measure skills and knowledge as accurately as traditional paper-based tests?

With news that millions of students who took PARCC exams by computer tended to score worse than those who took the same exams with paper and pencil, it’s a technical question that is again getting heavy scrutiny.

Earlier this month, officials from the multistate Partnership for the Assessment of Readiness for College and Careers acknowledged to Education Week that there were discrepancies in scores across different formats of its exams.

Illinois, Rhode Island, and the Baltimore County, Md., schools are among the states and districts that have found such a pattern, with the advantage for paper-based test-takers appearing to be most pronounced in English/language arts and upper-grades math.

In Rhode Island, for example, officials found that 42.5 percent of the students who took the PARCC English/language arts exam on paper scored proficient, compared with 34 percent of those who took the test by computer. A spokesman for the state education department said the variability in scores there appears to be due in large measure to varying degrees of “student and system readiness for technology.”

Researchers and psychometricians have been wrestling with the dilemma of comparing paper- and computer-based test results for more than 20 years, said Derek Briggs, a professor of research and evaluation methodology at the University of Colorado at Boulder. He serves on the technical-advisory committees for both PARCC and the Smarter Balanced Assessment Consortium, the two main groups that have created tests aligned with the Common Core State Standards.

Briggs said computer- and paper-based versions of an exam shouldn’t necessarily be expected to measure the same abilities, or have comparable results. Part of the motivation for pouring hundreds of millions of federal dollars into the new consortia exams, after all, was to use technology to create better tests that elicit, for instance, more evidence of students’ critical-thinking skills and ability to model and solve problems.

But the reality is that in some states and districts, the technology infrastructure doesn’t exist to support administration of the computer-based exams. All children don’t have the same access to technology at home and in school, nor do their teachers use technology in the classroom in the same ways, even when it is present.

And some students are much more familiar than others with basic elements of a typical computer-based exam’s digital interface—how to scroll through a window, use word-processing features such as copying and pasting, and how to drag and drop items on a screen, for example. A mounting body of evidence suggests that some students tend to do worse on computer-based versions of an exam, for reasons that have more to do with their familiarity with technology than with their academic knowledge and skills.

To give a deeper look at the issues behind this “mode effect,” Education Week examined seven key research studies on the topic:

1. “Online Assessment and the Comparability of Score Meaning”

Educational Testing Service, 2003

“It should be a matter of indifference to the examinee whether the test is administered on computer or paper, or whether it is taken on a large-screen display or a small one,” wrote Randy Elliot Bennett more than a decade ago. Bennett was one of the leaders in the field of psychometrics and mode comparability, and this overview explores a range of mode-comparability issues. “Although the promise of online assessment is substantial, states are encountering significant issues, including ones of measurement and fairness,” the paper reads. “Particularly distressing is the potential for such variation [in testing conditions] to unfairly affect population groups, such as females, minority-group members, or students attending schools in poor neighborhoods.”

2. “Maintaining Score Equivalence as Tests Transition Online: Issues, Approaches, and Trends”

Pearson, 2008

The authors of this paper, originally presented at the National Council on Measurement in Education, highlight the “mixed findings” from studies about the impact of test-administration mode on student reading and mathematics scores, saying they “promote ambiguity” and make life difficult for policymakers. The answer, they say, is quasi-experimental designs carried out by testing entities such as state departments of education. The preferred technique, the paper suggests, is a matched-samples comparability analysis, through which researchers are able to create comparable groups of test-takers in each mode of administration, then compare how they performed.
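
For readers who want a concrete picture of the idea, here is a minimal, hypothetical sketch in Python (not the procedure used in the Pearson paper): examinees in the two modes are paired on a matching key, such as a prior-score band and demographic group, and mean scores are then compared across the matched groups. The records and key are invented for illustration.

```python
# Toy matched-samples comparability check: pair computer-based examinees with
# paper-based examinees who share the same matching key, then compare mean scores.
from statistics import mean

computer = [{"key": ("band3", "F"), "score": 742}, {"key": ("band2", "M"), "score": 705}]
paper    = [{"key": ("band3", "F"), "score": 751}, {"key": ("band2", "M"), "score": 716},
            {"key": ("band1", "F"), "score": 688}]   # hypothetical records

# Index paper-based examinees by matching key.
paper_by_key = {}
for record in paper:
    paper_by_key.setdefault(record["key"], []).append(record["score"])

# Keep only examinees who have a match in the other mode.
matched_computer, matched_paper = [], []
for record in computer:
    candidates = paper_by_key.get(record["key"])
    if candidates:
        matched_computer.append(record["score"])
        matched_paper.append(candidates.pop())

mode_effect = mean(matched_paper) - mean(matched_computer)
print(f"estimated paper-minus-computer score difference: {mode_effect:+.1f}")
```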

3. “Does It Matter If I Take My Mathematics Test on Computer? A Second Empirical Study of Mode Effects in NAEP”

Journal of Technology, Learning, and Assessment, 2008

“Results showed that the computer-based mathematics test was significantly harder statistically than the paper-based test,” according to Randy Elliot Bennett, who is also the lead author of this paper, which looked at results from a 2001 National Center for Education Statistics investigation of new technology for administering the National Assessment of Educational Progress in math. “In addition, computer facility predicted online mathematics test performance after controlling for performance on a paper-based mathematics test, suggesting that degree of familiarity with computers may matter when taking a computer-based mathematics test in NAEP.”

4. “The Nation’s Report Card: Writing 2011”

National Center for Education Statistics, 2014

As the NCES moved to administer its first computer-based NAEP writing assessment, it also tracked the impact in this study of how 24,100 8th graders and 28,100 12th graders performed. Doug Levin, then the director of the State Educational Technology Directors Association, summed up the findings in a 2014 blog post: “Students who had greater access to technology in and out of school, and had teachers that required its use for school assignments, used technology in more powerful ways” and “scored significantly higher on the NAEP writing achievement test,” Levin wrote. “Such clear and direct relationships are few and far between in education—and these findings raise many implications for states and districts as they shift to online assessment.”

5. “Performance of 4th-Grade Students in the 2012 NAEP Computer-Based Writing Pilot”

This working paper found that high-performing 4th graders who took NAEP’s computer-based pilot writing exam in 2012 scored “substantively higher on the computer” than similar students who had taken the exam on paper in 2010. Low- and middle-performing students did not similarly benefit from taking the exam on computers, raising concerns that computer-based exams might widen achievement gaps. Likely key to the score differences, said Sheida White, one of the report’s authors, in an interview, is the role of “facilitative” computer skills such as keyboarding ability and word-processing skills. “When a student [who has those skills] is generating an essay, their cognitive resources are focused on their word choices, their sentence structure, and how to make their sentences more interesting and varied—not trying to find letters on a keyboard, or the technical aspects of the computer,” White said.

6. “Mathematics Minnesota Comprehensive Assessment-Series III (MCA-III) Mode Comparability Study Report”

Minnesota Department of Education and Pearson, 2012

This state-level study of mode effects on exams administered in spring and summer of 2011 used the matched-samples comparability-analysis technique described in the Pearson study. “Although the results indicated the presence of relatively small overall mode effects that favored the paper administration, these effects were observed for a minority of items common to the paper and online forms,” the study found.

7. “Comparability of Student Scores Obtained From Paper and Computer Administrations”

Oregon Department of Education, 2007

This state-level mode-comparability study looked across math, reading, and science tests administered by both computer and paper. “Results suggest that average scores and standard errors are quite similar across [computer] and paper tests. Although the differences were still quite small (less than half a scale score point), 3rd graders tended to show slightly larger differences,” the paper reads. “This study provides evidence that scores are comparable across [Oregon’s computer] and paper delivery modes.”

Library Intern Connor Smith provided research assistance. A version of this article appeared in the February 24, 2016 edition of Education Week as Seven Studies Comparing Paper and Computer Test Scores


"We've shifted the responsibility of extracting relevant context for software engineering tasks from developers to the AI agents": Microsoft's AI-based framework turns developers to overnight 'mere supervisors'

What you need to know.

  • A research paper recently published by Microsoft details how its AI framework is turning software engineering into a fully automated task, rendering developers "mere supervisors."
  • NVIDIA's CEO had previously warned that coding is not a viable career option for the future generation as AI will eventually take over the profession.
  • Upskilling seems like a viable option, especially if you want to explore opportunities in coding.
  • Recruiters are actively seeking professionals with AI skills.

While safety and privacy are among users' biggest concerns amid the prevalence and fast adoption of AI, the loss of jobs to AI is quickly rising in the ranks, too. Microsoft co-founder Bill Gates recently expressed fear of losing his career to AI but indicated the technology presents a 3-day workweek opportunity, as it can handle mundane and recurring tasks.

NVIDIA’s CEO, Jensen Huang, shared the same sentiments and claimed that coding might be dead in the water as a career option for the next generation, given the rapid adoption of generative AI. As it turns out, Huang and Microsoft might be on the same train of thought regarding coding as a viable career option for the future generation.

Microsoft recently published a research paper that painted a clearer picture, highlighting the future of coding and developers as artificial intelligence becomes more widespread. The paper provides an in-depth analysis of AutoDev — an AI-powered framework designed to ‘assist’ developers with software development, ultimately redefining coding and automation. 

The research paper further details instances where the framework, when given access to code repositories, was tested and performed well on technical software engineering work. It’s worth noting that the technology also ships with AI-powered capabilities to validate its outcomes. AutoDev supports file editing, retrieval, build processes, execution, testing, and git operations.

As highlighted by the researchers in the paper:

“The developer’s role within the AutoDev framework transforms from manual actions and validation of AI suggestions to a supervisor overseeing multi-agent collaboration on tasks, with the option to provide feedback. Developers can monitor AutoDev’s progress toward goals by observing the ongoing conversation used for communication among agents and the repository.”

The report further outlines:

“We’ve shifted the responsibility of extracting relevant context for software engineering tasks and validating AI-generated code from users (mainly developers) to the AI agents themselves.”

With this in mind, it’s only a matter of time before the AI-based framework becomes self-sufficient and can run operations without human supervision or intervention. This shift ultimately means coding might not be a viable career option in the foreseeable future.

Upskilling seems like the only viable option to remain relevant

While commenting on the viability of coding as a career option for the next generation, NVIDIA's boss indicated that the youth are better off seeking opportunities in biology, education, manufacturing, or farming. He added that the only way around this challenge for people already invested in coding is to upskill (specifically in AI). This way, it'll be possible to maintain relevance and contribute to programming projects.

Coding isn't the only profession impacted by the fast adoption of AI. Architecture and graphic design jobs are at risk, too. AI-powered tools like Image Creator from Designer (Bing Image Creator), ChatGPT, Midjourney, and more are already great at generating detailed and impressive structural designs at a moment's notice.

However, they aren't perfect either. Did you know AI struggles to create a simple, plain white image? This limitation is on top of the heightened censorship of the tools, which has seemingly lobotomized their capabilities. There's been an alarming increase in reports flagging deepfakes and explicit images surfacing online. A study also revealed recruiters are seeking professionals with AI skills, so it might not be a bad idea to upskill in the area.

 "We've shifted the responsibility of extracting relevant context for software engineering tasks from developers to the AI agents": Microsoft's AI-based framework turns developers to overnight 'mere supervisors'

Innovation (Camb), v.2(4); 2021 Nov 28

Artificial intelligence: A powerful paradigm for scientific research

Authors include Changping Huang, Xingchen Liu, Fengliang Dong, Cheng-Wei Qiu, Chenguang Fu, Zhigang Yin, Ronald Roepman, Sabine Dietmann, Marko Virta, Fredrick Kengara, Taolan Zhao, Jialiang Yang, Zhaofeng Liu, Xiaohong Liu, James P. Lewis, James M. Tiedje, Zhipeng Cai, Jiabao Zhang, and colleagues, with affiliations spanning institutes of the Chinese Academy of Sciences and partner institutions in China, Singapore, the Netherlands, the USA, Finland, Kenya, and the UK.

Artificial intelligence (AI) coupled with promising machine learning (ML) techniques well known from computer science is broadly affecting many aspects of various fields including science and technology, industry, and even our day-to-day life. The ML techniques have been developed to analyze high-throughput data with a view to obtaining useful insights, categorizing, predicting, and making evidence-based decisions in novel ways, which will promote the growth of novel applications and fuel the sustainable booming of AI. This paper undertakes a comprehensive survey on the development and application of AI in different aspects of fundamental sciences, including information science, mathematics, medical science, materials science, geoscience, life science, physics, and chemistry. The challenges that each discipline of science meets, and the potentials of AI techniques to handle these challenges, are discussed in detail. Moreover, we shed light on new research trends entailing the integration of AI into each scientific discipline. The aim of this paper is to provide a broad research guideline on fundamental sciences with potential infusion of AI, to help motivate researchers to deeply understand the state-of-the-art applications of AI-based fundamental sciences, and thereby to help promote the continuous development of these fundamental sciences.

Graphical abstract


Public summary

  • “Can machines think?” The goal of artificial intelligence (AI) is to enable machines to mimic human thoughts and behaviors, including learning, reasoning, predicting, and so on.
  • “Can AI do fundamental research?” AI coupled with machine learning techniques is impacting a wide range of fundamental sciences, including mathematics, medical science, physics, etc.
  • “How does AI accelerate fundamental research?” New research and applications are emerging rapidly with the support of AI infrastructure, including data storage, computing power, AI algorithms, and frameworks.

Introduction

“Can machines think?” Alan Turing posed this question in his famous paper “Computing Machinery and Intelligence.” 1 He believes that to answer this question, we need to define what thinking is. However, it is difficult to define thinking clearly, because thinking is a subjective behavior. Turing then introduced an indirect method to verify whether a machine can think, the Turing test, which examines a machine's ability to show intelligence indistinguishable from that of human beings. A machine that succeeds in the test is qualified to be labeled as artificial intelligence (AI).

AI refers to the simulation of human intelligence by a system or a machine. The goal of AI is to develop a machine that can think like humans and mimic human behaviors, including perceiving, reasoning, learning, planning, predicting, and so on. Intelligence is one of the main characteristics that distinguishes human beings from animals. With the interminable occurrence of industrial revolutions, an increasing number of machine types continuously replace human labor across all walks of life, and the imminent replacement of human resources by machine intelligence is the next big challenge to be overcome. Numerous scientists are focusing on the field of AI, and this makes research in the field rich and diverse. AI research fields include search algorithms, knowledge graphs, natural language processing, expert systems, evolutionary algorithms, machine learning (ML), deep learning (DL), and so on.

The general framework of AI is illustrated in Figure 1 . The development process of AI includes perceptual intelligence, cognitive intelligence, and decision-making intelligence. Perceptual intelligence means that a machine has the basic abilities of vision, hearing, touch, etc., which are familiar to humans. Cognitive intelligence is a higher-level ability of induction, reasoning and acquisition of knowledge. It is inspired by cognitive science, brain science, and brain-like intelligence to endow machines with thinking logic and cognitive ability similar to human beings. Once a machine has the abilities of perception and cognition, it is often expected to make optimal decisions as human beings, to improve the lives of people, industrial manufacturing, etc. Decision intelligence requires the use of applied data science, social science, decision theory, and managerial science to expand data science, so as to make optimal decisions. To achieve the goal of perceptual intelligence, cognitive intelligence, and decision-making intelligence, the infrastructure layer of AI, supported by data, storage and computing power, ML algorithms, and AI frameworks is required. Then by training models, it is able to learn the internal laws of data for supporting and realizing AI applications. The application layer of AI is becoming more and more extensive, and deeply integrated with fundamental sciences, industrial manufacturing, human life, social governance, and cyberspace, which has a profound impact on our work and lifestyle.


The general framework of AI

History of AI

The beginning of modern AI research can be traced back to John McCarthy, who coined the term “artificial intelligence (AI)” at a conference at Dartmouth College in 1956. This symbolized the birth of the AI scientific field. Progress in the following years was astonishing. Many scientists and researchers focused on automated reasoning and applied AI to proving mathematical theorems and solving algebraic problems. One of the famous examples is Logic Theorist, a computer program written by Allen Newell, Herbert A. Simon, and Cliff Shaw, which proved 38 of the first 52 theorems in “Principia Mathematica” and provided more elegant proofs for some. 2 These successes made many AI pioneers wildly optimistic, and underpinned the belief that fully intelligent machines would be built in the near future. However, they soon realized that there was still a long way to go before the end goals of human-equivalent intelligence in machines could come true. Many nontrivial problems could not be handled by the logic-based programs. Another challenge was the lack of computational resources to compute more and more complicated problems. As a result, organizations and funders stopped supporting these under-delivering AI projects.

AI came back to popularity in the 1980s, as several research institutions and universities invented a type of AI system that summarizes a series of basic rules from expert knowledge to help non-experts make specific decisions. These systems are known as “expert systems.” Examples are the XCON designed by Carnegie Mellon University and the MYCIN designed by Stanford University. The expert system derived logic rules from expert knowledge to solve problems in the real world for the first time. The core of AI research during this period was the knowledge that made machines “smarter.” However, the expert system gradually revealed several disadvantages, such as privacy technologies, lack of flexibility, poor versatility, expensive maintenance cost, and so on. At the same time, the Fifth Generation Computer Project, heavily funded by the Japanese government, failed to meet most of its original goals. Once again, the funding for AI research ceased, and AI entered the second low point of its life.

In 2006, Geoffrey Hinton and coworkers 3 , 4 made a breakthrough in AI by proposing an approach of building deeper neural networks, as well as a way to avoid gradient vanishing during training. This reignited AI research, and DL algorithms have become one of the most active fields of AI research. DL is a subset of ML based on multiple layers of neural networks with representation learning, 5 while ML is a part of AI that a computer or a program can use to learn and acquire intelligence without human intervention. Thus, “learn” is the keyword of this era of AI research. Big data technologies, and the improvement of computing power have made deriving features and information from massive data samples more efficient. An increasing number of new neural network structures and training methods have been proposed to improve the representative learning ability of DL, and to further expand it into general applications. Current DL algorithms match and exceed human capabilities on specific datasets in the areas of computer vision (CV) and natural language processing (NLP). AI technologies have achieved remarkable successes in all walks of life, and continued to show their value as backbones in scientific research and real-world applications.

Within AI, ML is having a substantial, broad effect across many aspects of technology and science: from computer science to geoscience to materials science, from life science to medical science to chemistry to mathematics and to physics, from management science to economics to psychology, and other data-intensive empirical sciences, as ML methods have been developed to analyze high-throughput data to obtain useful insights, categorize, predict, and make evidence-based decisions in novel ways. Training a system by presenting it with examples of desired input-output behavior can be far easier than programming it manually by anticipating the desired response for all potential inputs. The following sections survey eight fundamental sciences, including information science (informatics), mathematics, medical science, materials science, geoscience, life science, physics, and chemistry, which develop or exploit AI techniques to promote the development of sciences and accelerate their applications to benefit human beings, society, and the world.

AI in information science

AI aims to provide the abilities of perception, cognition, and decision-making for machines. At present, new research and applications in information science are emerging at an unprecedented rate, which is inseparable from the support of the AI infrastructure. As shown in Figure 2, the AI infrastructure layer includes data, storage and computing power, ML algorithms, and the AI framework. The perception layer enables machines to have the basic abilities of vision, hearing, etc. For instance, CV enables machines to “see” and identify objects, while speech recognition and synthesis helps machines to “hear” and recognize speech elements. The cognitive layer provides higher-level abilities of induction, reasoning, and knowledge acquisition with the help of NLP, 6 knowledge graphs, 7 and continual learning. 8 In the decision-making layer, AI is capable of making optimal decisions through techniques such as automatic planning, expert systems, and decision-support systems. Numerous applications of AI have had a profound impact on fundamental sciences, industrial manufacturing, human life, social governance, and cyberspace. The following subsections provide an overview of the AI framework, automatic machine learning (AutoML) technology, and several state-of-the-art AI/ML applications in the information field.


The knowledge graph of the AI framework

The AI framework provides basic tools for AI algorithm implementation

In the past 10 years, applications based on AI algorithms have played a significant role in various fields and subjects, and the prosperity of DL frameworks and platforms has been founded on this basis. AI frameworks and platforms lower the barrier to using AI technology by integrating the overall process of algorithm development, which enables researchers from different areas to apply them in other fields, allowing them to focus on designing the structure of neural networks and thus providing better solutions to problems in their fields. At the beginning of the 21st century, only a few tools, such as MATLAB, OpenNN, and Torch, were capable of describing and developing neural networks. However, these tools were not originally designed for AI models and thus faced problems such as complicated user APIs and a lack of GPU support. During this period, using these frameworks demanded professional computer science knowledge and tedious work on model construction. As a solution, early DL frameworks, such as Caffe, Chainer, and Theano, emerged, allowing users to conveniently construct complex deep neural networks (DNNs), such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and LSTMs, and this significantly reduced the cost of applying AI models.

Tech giants then joined the march in researching AI frameworks. 9 Google developed the famous open-source framework, TensorFlow, while Facebook's AI research team released another popular platform, PyTorch, which is based on Torch; Microsoft Research published CNTK, and Amazon announced MXNet. Among them, TensorFlow, also the most representative framework, adopted Theano's declarative programming style, offering a larger space for graph-based optimization, while PyTorch inherited the imperative programming style of Torch, which is intuitive, user friendly, more flexible, and easier to trace. As modern AI frameworks and platforms are widely applied, practitioners can now assemble models swiftly and conveniently by adopting various building-block sets and languages specifically suited to given fields. Polished over time, these platforms have gradually developed clearly defined user APIs, support for multi-GPU and distributed training, as well as a variety of model zoos and tool kits for specific tasks. 10

Looking forward, there are a few trends that may become the mainstream of next-generation framework development. (1) Capability of super-scale model training. With the emergence of models derived from the Transformer, such as BERT and GPT-3, the ability to train large models has become a desirable feature of DL frameworks, which are required to train effectively at the scale of hundreds or even thousands of devices. (2) Unified API standards. The APIs of many frameworks are generally similar but differ slightly at certain points, which creates difficulties and unnecessary learning effort when a user attempts to shift from one framework to another. The APIs of some frameworks, such as JAX, have already become compatible with the NumPy standard, which is familiar to most practitioners; a unified API standard for AI frameworks may therefore gradually come into being in the future. (3) Universal operator optimization. At present, the kernels of DL operators are implemented either manually or on the basis of third-party libraries. Most third-party libraries are developed to suit certain hardware platforms, causing large, unnecessary overhead when models are trained or deployed on different hardware platforms. Moreover, the development speed of new DL algorithms is usually much faster than the update rate of these libraries, which often leaves new algorithms beyond the range of library support. 11
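
To make the contrast concrete, the short sketch below shows the imperative, define-by-run style that PyTorch popularized: the model is an ordinary Python object and the training loop is plain Python code. The network, data, and hyperparameters are toy values chosen only for illustration.

```python
# A minimal sketch of an imperative (define-by-run) training loop in PyTorch.
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden = nn.Linear(16, 32)   # 16 input features -> 32 hidden units
        self.out = nn.Linear(32, 1)       # scalar output

    def forward(self, x):
        # The forward pass is ordinary Python code, easy to step through and debug.
        return self.out(torch.relu(self.hidden(x)))

model = TinyNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(8, 16), torch.randn(8, 1)      # a toy batch

for step in range(100):                           # imperative training loop
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()                               # autograd traces the executed Python code
    optimizer.step()
```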

To improve the implementation speed of AI algorithms, much research focuses on how to use hardware for acceleration. The DianNao family is one of the earliest research innovations on AI hardware accelerators. 12 It includes DianNao, DaDianNao, ShiDianNao, and PuDianNao, which can be used to accelerate the inference speed of neural networks and other ML algorithms. Of these, a 64-chip DaDianNao system achieves, at best, a 450.65× speedup over a GPU and reduces energy consumption by 150.31×. Prof. Chen and his team at the Institute of Computing Technology also designed an instruction set architecture for a broad range of neural network accelerators, called Cambricon, which developed into a series of DL accelerators. After Cambricon, many AI-related companies, such as Apple, Google, HUAWEI, etc., developed their own DL accelerators, and AI accelerators became an important research field of AI.

AI for AI—AutoML

AutoML aims to study how to use evolutionary computing, reinforcement learning (RL), and other AI algorithms, to automatically generate specified AI algorithms. Research on the automatic generation of neural networks has existed before the emergence of DL, e.g., neural evolution. 13 The main purpose of neural evolution is to allow neural networks to evolve according to the principle of survival of the fittest in the biological world. Through selection, crossover, mutation, and other evolutionary operators, the individual quality in a population is continuously improved and, finally, the individual with the greatest fitness represents the best neural network. The biological inspiration in this field lies in the evolutionary process of human brain neurons. The human brain has such developed learning and memory functions that it cannot do without the complex neural network system in the brain. The whole neural network system of the human brain benefits from a long evolutionary process rather than gradient descent and back propagation. In the era of DL, the application of AI algorithms to automatically generate DNN has attracted more attention and, gradually, developed into an important direction of AutoML research: neural architecture search. The implementation methods of neural architecture search are usually divided into the RL-based method and the evolutionary algorithm-based method. In the RL-based method, an RNN is used as a controller to generate a neural network structure layer by layer, and then the network is trained, and the accuracy of the verification set is used as the reward signal of the RNN to calculate the strategy gradient. During the iteration, the controller will give the neural network, with higher accuracy, a higher probability value, so as to ensure that the strategy function can output the optimal network structure. 14 The method of neural architecture search through evolution is similar to the neural evolution method, which is based on a population and iterates continuously according to the principle of survival of the fittest, so as to obtain a high-quality neural network. 15 Through the application of neural architecture search technology, the design of neural networks is more efficient and automated, and the accuracy of the network gradually outperforms that of the networks designed by AI experts. For example, Google's SOTA network EfficientNet was realized through the baseline network based on neural architecture search. 16
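
The following toy sketch illustrates the evolutionary flavor of architecture search under a strong simplifying assumption: a surrogate fitness function stands in for actually training each candidate network and measuring validation accuracy. The search space, mutation rule, and scoring are invented purely for illustration.

```python
# Toy evolutionary architecture search: selection + mutation over a tiny search space.
import random

SEARCH_SPACE = {"depth": [2, 4, 8], "width": [32, 64, 128], "activation": ["relu", "tanh"]}

def random_architecture():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def mutate(arch):
    # Change one randomly chosen design dimension of the parent architecture.
    child = dict(arch)
    key = random.choice(list(SEARCH_SPACE))
    child[key] = random.choice(SEARCH_SPACE[key])
    return child

def fitness(arch):
    # Placeholder: stands in for "train the network and measure validation accuracy".
    return arch["depth"] * 0.1 + arch["width"] * 0.001 + random.random() * 0.05

population = [random_architecture() for _ in range(10)]
for generation in range(20):
    population.sort(key=fitness, reverse=True)     # survival of the fittest
    survivors = population[:5]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(5)]

print("best architecture found:", max(population, key=fitness))
```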

AI enabling networking design adaptive to complex network conditions

The application of DL in the networking field has received strong interest. Network design often relies on initial network conditions and/or theoretical assumptions to characterize real network environments. However, traditional network modeling and design, regulated by mathematical models, are unlikely to deal with complex scenarios involving imperfect and highly dynamic network environments. Integrating DL into network research allows for a better representation of complex network environments. Furthermore, DL could be combined with the Markov decision process and evolve into the deep reinforcement learning (DRL) model, which finds an optimal policy based on the reward function and the states of the system. Taken together, these techniques could be used to make better decisions to guide proper network design, thereby improving the network quality of service and quality of experience. With regard to the different layers of the network protocol stack, DL/DRL can be adopted for network feature extraction, decision-making, etc. In the physical layer, DL can be used for interference alignment. It can also be used to classify the modulation modes, design efficient network coding 17 and error correction codes, etc. In the data link layer, DL can be used for resource (such as channel) allocation, medium access control, traffic prediction, 18 link quality evaluation, and so on. In the network (routing) layer, routing establishment and routing optimization 19 can help to obtain an optimal routing path. In higher layers (such as the application layer), enhanced data compression and task allocation are used. Beyond the protocol stack, one critical area for DL is network security. DL can be used to classify packets as benign or malicious, and it can be integrated with other ML schemes, such as unsupervised clustering, to achieve better anomaly detection.
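
As a minimal illustration of the reward-driven decision-making behind DRL, the sketch below performs single-state (bandit-style) value updates for a hypothetical channel-selection task. Real DRL systems for networking would replace the value table with a deep network, add state transitions, and interact with a simulator or testbed rather than the invented reward function used here.

```python
# Toy reward-driven channel selection with epsilon-greedy exploration.
import random

N_CHANNELS = 4
q = [0.0] * N_CHANNELS          # estimated value (expected reward) per channel
alpha, epsilon = 0.1, 0.2       # learning rate, exploration rate

def reward(channel):
    # Hypothetical environment: channel 2 succeeds far more often than the others.
    return 1.0 if random.random() < (0.9 if channel == 2 else 0.3) else 0.0

for step in range(5000):
    a = random.randrange(N_CHANNELS) if random.random() < epsilon else q.index(max(q))
    r = reward(a)
    q[a] += alpha * (r - q[a])  # move the estimate toward the observed reward

print("learned channel preferences:", [round(v, 2) for v in q])
```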

AI enabling more powerful and intelligent nanophotonics

Nanophotonic components have recently revolutionized the field of optics via metamaterials/metasurfaces by enabling the arbitrary manipulation of light-matter interactions with subwavelength meta-atoms or meta-molecules. 20 , 21 , 22 The conventional design of such components involves generally forward modeling, i.e., solving Maxwell's equations based on empirical and intuitive nanostructures to find corresponding optical properties, as well as the inverse design of nanophotonic devices given an on-demand optical response. The trans-dimensional feature of macro-optical components consisting of complex nano-antennas makes the design process very time consuming, computationally expensive, and even numerically prohibitive, such as device size and complexity increase. DL is an efficient and automatic platform, enabling novel efficient approaches to designing nanophotonic devices with high-performance and versatile functions. Here, we present briefly the recent progress of DL-based nanophotonics and its wide-ranging applications. DL was exploited for forward modeling at first using a DNN. 23 The transmission or reflection coefficients can be well predicted after training on huge datasets. To improve the prediction accuracy of DNN in case of small datasets, transfer learning was introduced to migrate knowledge between different physical scenarios, which greatly reduced the relative error. Furthermore, a CNN and an RNN were developed for the prediction of optical properties from arbitrary structures using images. 24 The CNN-RNN combination successfully predicted the absorption spectra from the given input structural images. In inverse design of nanophotonic devices, there are three different paradigms of DL methods, i.e., supervised, unsupervised, and RL. 25 Supervised learning has been utilized to design structural parameters for the pre-defined geometries, such as tandem DNN and bidirectional DNNs. Unsupervised learning methods learn by themselves without a specific target, and thus are more accessible to discovering new and arbitrary patterns 26 in completely new data than supervised learning. A generative adversarial network (GAN)-based approach, combining conditional GANs and Wasserstein GANs, was proposed to design freeform all-dielectric multifunctional metasurfaces. RL, especially double-deep Q-learning, powers up the inverse design of high-performance nanophotonic devices. 27 DL has endowed nanophotonic devices with better performance and more emerging applications. 28 , 29 For instance, an intelligent microwave cloak driven by DL exhibits millisecond and self-adaptive response to an ever-changing incident wave and background. 28 Another example is that a DL-augmented infrared nanoplasmonic metasurface is developed for monitoring dynamics between four major classes of bio-molecules, which could impact the fields of biology, bioanalytics, and pharmacology from fundamental research, to disease diagnostics, to drug development. 29 The potential of DL in the wide arena of nanophotonics has been unfolding. Even end-users without optics and photonics background could exploit the DL as a black box toolkit to design powerful optical devices. Nevertheless, how to interpret/mediate the intermediate DL process and determine the most dominant factors in the search for optimal solutions, are worthy of being investigated in depth. 
We optimistically envisage that the advancements in DL algorithms and computation/optimization infrastructures would enable us to realize more efficient and reliable training approaches, more complex nanostructures with unprecedented shapes and sizes, and more intelligent and reconfigurable optic/optoelectronic systems.
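
A minimal sketch of the forward-modeling idea, assuming entirely synthetic data: a small network maps a handful of geometric parameters of a meta-atom to a sampled transmission spectrum. In practice the training targets would come from full-wave electromagnetic simulations rather than the random stand-ins used here, and the architecture would be tuned to the problem.

```python
# Toy DNN forward model: geometric parameters -> sampled transmission spectrum.
import torch
import torch.nn as nn

N_PARAMS, N_WAVELENGTHS = 4, 50                  # e.g., width, height, period, thickness
forward_model = nn.Sequential(
    nn.Linear(N_PARAMS, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, N_WAVELENGTHS), nn.Sigmoid(),  # transmission bounded in [0, 1]
)

geometry = torch.rand(256, N_PARAMS)              # toy training geometries
spectra = torch.rand(256, N_WAVELENGTHS)          # stand-in for simulated spectra
optimizer = torch.optim.Adam(forward_model.parameters(), lr=1e-3)

for epoch in range(200):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(forward_model(geometry), spectra)
    loss.backward()
    optimizer.step()
```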

AI in other fields of information science

We believe that AI has great potential in the following directions:

  • AI-based risk control and management in utilities can prevent costly or hazardous equipment failures by using sensors that detect and send information regarding the machine's health to the manufacturer, predicting possible issues that could occur so as to ensure timely maintenance or automated shutdown.
  • AI could be used to produce simulations of real-world objects, called digital twins. When applied to the field of engineering, digital twins allow engineers and technicians to analyze the performance of equipment virtually, thus avoiding safety and budget issues associated with traditional testing methods.
  • Combined with AI, intelligent robots are playing an important role in industry and human life. Different from traditional robots working according to the procedures specified by humans, intelligent robots have the ability of perception, recognition, and even automatic planning and decision-making, based on changes in environmental conditions.
  • AI of things (AIoT), or AI-empowered IoT applications, 30 have become a promising development trend. AI can empower the connected IoT devices, embedded in various physical infrastructures, to perceive, recognize, learn, and act. For instance, smart cities constantly collect data regarding quality-of-life factors, such as the status of power supply, public transportation, air pollution, and water use, to manage and optimize systems in cities. Because these data, especially personal data, are collected from informed or uninformed participants, data security and privacy 31 require protection.

AI in mathematics

Mathematics always plays a crucial and indispensable role in AI. Decades ago, quite a few classical AI-related approaches, such as k-nearest neighbor, 32 support vector machine, 33 and AdaBoost, 34 were proposed and developed after their rigorous mathematical formulations had been established. In recent years, with the rapid development of DL, 35 AI has been gaining more and more attention in the mathematical community. Equipped with the Markov process, minimax optimization, and Bayesian statistics, RL, 36 GANs, 37 and Bayesian learning 38 became the most favorable tools in many AI applications. Nevertheless, there still exist plenty of open problems in mathematics for ML, including the interpretability of neural networks, the optimization problems of parameter estimation, and the generalization ability of learning models. In the rest of this section, we discuss these three questions in turn.

The interpretability of neural networks

From a mathematical perspective, ML usually constructs nonlinear models, with neural networks as a typical case, to approximate certain functions. The well-known Universal Approximation Theorem suggests that, under very mild conditions, any continuous function can be uniformly approximated on compact domains by neural networks, 39 which serves a vital function in the interpretability of neural networks. However, in real applications, ML models seem to admit accurate approximations of many extremely complicated functions, sometimes even black boxes, which are far beyond the scope of continuous functions. To understand the effectiveness of ML models, many researchers have investigated the function spaces that can be well approximated by them, and the corresponding quantitative measures. This issue is closely related to the classical approximation theory, but the approximation scheme is distinct. For example, Bach 40 finds that the random feature model is naturally associated with the corresponding reproducing kernel Hilbert space. In the same way, the Barron space is identified as the natural function space associated with two-layer neural networks, and the approximation error is measured using the Barron norm. 41 The corresponding quantities of residual networks (ResNets) are defined for the flow-induced spaces. For multi-layer networks, the natural function spaces for the purposes of approximation theory are the tree-like function spaces introduced in Wojtowytsch. 42 There are several works revealing the relationship between neural networks and numerical algorithms for solving partial differential equations. For example, He and Xu 43 discovered that CNNs for image classification have a strong connection with multi-grid (MG) methods. In fact, the pooling operation and feature extraction in CNNs correspond directly to restriction operation and iterative smoothers in MG, respectively. Hence, various convolution and pooling operations used in CNNs can be better understood.
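
As a small numerical illustration of the approximation property discussed above (not tied to any cited result), the sketch below fits a one-hidden-layer ReLU network to a smooth function on a compact interval; the target function and width are arbitrary choices.

```python
# Universal-approximation toy example: a two-layer ReLU network
# approximating a continuous function on a compact interval.
import numpy as np
import torch
import torch.nn as nn

x = torch.linspace(-np.pi, np.pi, 512).unsqueeze(1)
y = torch.sin(x) + 0.5 * torch.cos(2 * x)          # target continuous function

net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for step in range(2000):
    loss = ((net(x) - y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"max approximation error: {(net(x) - y).abs().max().item():.4f}")
```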

The optimization problems of parameter estimation

In general, the optimization problem of estimating parameters of certain DNNs is in practice highly nonconvex and often nonsmooth. Can the global minimizers be expected? What is the landscape of local minimizers? How does one handle the nonsmoothness? All these questions are nontrivial from an optimization perspective. Indeed, numerous works and experiments demonstrate that the optimization for parameter estimation in DL is itself a much nicer problem than once thought; see, e.g., Goodfellow et al. 44 As a consequence, the study on the solution landscape ( Figure 3 ), also known as loss surface of neural networks, is no longer supposed to be inaccessible and can even in turn provide guidance for global optimization. Interested readers can refer to the survey paper (Sun et al. 45 ) for recent progress in this aspect.


Recent studies indicate that nonsmooth activation functions, e.g., rectified linear units, are better than smooth ones in finding sparse solutions. However, the chain rule does not apply when the activation functions are nonsmooth, which makes the widely used stochastic gradient (SG)-based approaches infeasible in theory. Taking approximate gradients at nonsmooth iterates serves as a practical remedy that keeps SG-type methods in extensive use, but numerical evidence has also exposed their limitations. In addition, the penalty-based approaches proposed by Cui et al. 46 and Liu et al. 47 provide a new direction for solving nonsmooth optimization problems efficiently.
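
The following toy sketch, with synthetic data and a hand-written update rule, illustrates the subgradient issue: ReLU is not differentiable at zero, so an SG-type method must pick one element of the subdifferential there.

```python
# Subgradient descent on a one-parameter ReLU model (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = np.maximum(2.0 * x, 0.0)          # data generated with true w = 2

w, lr = 0.1, 0.05
for epoch in range(200):
    pred = np.maximum(w * x, 0.0)
    # subgradient of 0.5*(pred - y)^2 with respect to w; the indicator (w*x > 0)
    # is one valid choice of subgradient of ReLU at the nonsmooth point
    grad = np.mean((pred - y) * (w * x > 0) * x)
    w -= lr * grad
print(f"estimated w = {w:.3f}")
```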

The generalization ability of learning models

A small training error does not always lead to a small test error. This gap is attributed to the generalization ability of learning models. A key finding in statistical learning theory states that the generalization error is bounded by a quantity that grows with the model capacity but shrinks as the number of training examples increases. 48 A common conjecture relating generalization to the solution landscape is that flat and wide minima generalize better than sharp ones. Thus, regularization techniques, including the dropout approach, 49 have emerged to force the algorithms to bypass sharp minima. However, the mechanism behind this has not been fully explored. Recently, some researchers have focused on the ResNet-type architecture, with dropout inserted after the last convolutional layer of each residual building block. They thus managed to explain the stochastic dropout training process and the ensuing dropout regularization effect from the perspective of optimal control. 50
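
A minimal sketch of the architectural idea described above, with dropout placed after the last convolution of a residual building block; the block definition is an assumed illustration, not the cited paper's exact model.

```python
# Residual block with dropout after its last convolution (illustrative only).
import torch
import torch.nn as nn

class BasicBlockWithDropout(nn.Module):
    def __init__(self, channels, p=0.3):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.dropout = nn.Dropout2d(p)   # dropout after the last conv of the block
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.dropout(self.bn2(self.conv2(out)))
        return self.relu(out + x)        # residual (skip) connection

block = BasicBlockWithDropout(64)
print(block(torch.randn(1, 64, 32, 32)).shape)   # torch.Size([1, 64, 32, 32])
```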

AI in medical science

AI technology is becoming increasingly significant in daily operations, including in medical fields. With the growing healthcare needs of patients, hospitals are evolving from informationization and networking to the Internet Hospital and eventually to the Smart Hospital. At the same time, AI tools and hardware performance are improving rapidly with each passing day. Eventually, common AI algorithms, such as CV, NLP, and data mining, will begin to be embedded in the medical equipment market ( Figure 4 ).


AI doctor based on electronic medical records

When it comes to medical history data, it is impossible not to mention Doctor Watson, developed on the Watson platform of IBM, and Modernizing Medicine, which focuses on oncology and has been adopted by CVS & Walgreens in the US as well as various medical organizations in China. Doctor Watson takes advantage of the NLP performance of the IBM Watson platform, which has already collected vast amounts of medical history data, as well as prior knowledge in the literature for reference. After a patient's case is input, Doctor Watson searches its medical history reserve and forms a preliminary treatment proposal, which is then ranked against its prior knowledge reserves. Drawing on the multiple models stored, Doctor Watson gives a final proposal along with a confidence score. However, such AI doctors still face problems 51 : because they rely on prior experience from US hospitals, their proposals may not be suitable for regions with different medical insurance policies. In addition, updating the Watson platform's knowledge relies heavily on updating its knowledge reserve, which still requires manual work.

AI for public health: Outbreak detection and health QR code for COVID-19

AI can be used for public health purposes in many ways. One classical usage is to detect disease outbreaks using search engine query data or social media data, as Google did for the prediction of influenza epidemics 52 and the Chinese Academy of Sciences did for modeling the COVID-19 outbreak through multi-source information fusion. 53 After the COVID-19 outbreak, China developed a digital health Quick Response (QR) code system, first to detect potential contact with confirmed COVID-19 cases and, second, to indicate a person's health status using mobile big data. 54 Different colors indicate different health statuses: green means healthy and cleared for daily life, orange means at risk and requiring quarantine, and red means a confirmed COVID-19 case. The system is easy for the general public to use and has been adopted by many other countries. The health QR code has made great contributions to the worldwide prevention and control of the COVID-19 pandemic.

Biomarker discovery with AI

High-dimensional data, including multi-omics data, patient characteristics, medical laboratory test data, etc., are often used for generating various predictive or prognostic models through DL or statistical modeling methods. For instance, a COVID-19 severity evaluation model was built through ML using proteomic and metabolomic profiling data of sera 55 ; using integrated genetic, clinical, and demographic data, Taliaz et al. built an ML model to predict patient response to antidepressant medications 56 ; and prognostic models for multiple cancer types (such as liver cancer, lung cancer, breast cancer, gastric cancer, colorectal cancer, pancreatic cancer, prostate cancer, ovarian cancer, lymphoma, leukemia, sarcoma, melanoma, bladder cancer, renal cancer, thyroid cancer, head and neck cancer, etc.) were constructed through DL or statistical methods, such as the least absolute shrinkage and selection operator (LASSO) combined with the Cox proportional hazards regression model, using genomic data. 57
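
As an illustration of LASSO-style biomarker selection of the kind cited above, the sketch below fits an L1-penalized logistic regression to synthetic high-dimensional features and reads off the sparse set of selected features; real studies would replace the synthetic matrix with omics data and often couple such selection with a Cox proportional hazards model for survival outcomes.

```python
# Sparse (L1-penalized) biomarker selection on synthetic data (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n_patients, n_features = 200, 1000            # hypothetical proteomic features
X = rng.normal(size=(n_patients, n_features))
true_markers = [3, 17, 256]                   # only a few features carry signal
y = (X[:, true_markers].sum(axis=1) + 0.5 * rng.normal(size=n_patients) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X_tr, y_tr)

selected = np.flatnonzero(model.coef_[0])     # candidate biomarkers
print("selected features:", selected)
print("test AUC:", roc_auc_score(y_te, model.decision_function(X_te)))
```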

Image-based medical AI

Medical imaging is one of the most mature areas of medical AI, as there are numerous models for classification, detection, and segmentation tasks in CV. In the clinical area, CV algorithms can also be used for computer-aided diagnosis and treatment with ECG, CT, eye fundus imaging, etc. Because human doctors may become tired and prone to mistakes after viewing hundreds and hundreds of images for diagnosis, AI readers can outperform a human medical image viewer on such repetitive work, since they do not fatigue. The first medical AI product approved by the FDA is IDx-DR, which uses an AI model to make predictions of diabetic retinopathy. The smartphone app SkinVision can accurately detect melanomas. 58 It uses “fractal analysis” to identify moles and their surrounding skin, based on size, diameter, and many other parameters, and to detect abnormal growth trends. AI-ECG of LEPU Medical can automatically detect heart disease from ECG images. Lianying Medical takes advantage of its hardware equipment to produce real-time high-definition image-guided all-round radiotherapy technology, which successfully achieves precise treatment.

Wearable devices for surveillance and early warning

For wearable devices, AliveCor has developed an algorithm to automatically predict the presence of atrial fibrillation, which is an early warning sign of stroke and heart failure. The 23andMe company can also test saliva samples at a small cost, and a customer can be provided with information based on their genes, including who their ancestors were or potential diseases they may be prone to later in life. It provides accurate health management solutions based on individual and family genetic data. Over the next 20–30 years, we believe there are several directions for further research: (1) causal inference for real-time in-hospital risk prediction. Clinical doctors usually require reasonable explanations for certain medical decisions, but current AI models are usually black boxes. Causal inference will help doctors explain certain AI decisions and even discover novel ground truths. (2) Devices, including wearable instruments, for multi-dimensional health monitoring. The multi-modality model is now a trend in AI research. With various devices to collect multi-modality data and a central processor to fuse all these data, the model can monitor the user's overall real-time health condition and give precautions more precisely. (3) Automatic discovery of clinical markers for diseases that are difficult to diagnose. Diseases such as ALS are still difficult for clinical doctors to diagnose because they lack any effective general marker. It may be possible for AI to discover common phenomena among these patients and find an effective marker for early diagnosis.

AI-aided drug discovery

Today we have entered the precision medicine era, and new targeted drugs are the cornerstones of precision therapy. However, over the past decades, it has taken an average of over one billion dollars and 10 years to bring a new drug to market. How to accelerate the drug discovery process and avoid late-stage failure are key concerns for all the big and fiercely competitive pharmaceutical companies. The highlighted emerging role of AI, including ML, DL, expert systems, and artificial neural networks (ANNs), has brought new insights and high efficiency into new drug discovery processes. AI has been adopted in many aspects of drug discovery, including de novo molecule design, structure-based modeling for proteins and ligands, quantitative structure-activity relationship research, and druggable property judgments. DL-based AI approaches demonstrate superior merits in addressing some challenging problems in drug discovery. Of course, prediction of chemical synthesis routes and chemical process optimization are also valuable in accelerating new drug discovery, as well as in lowering production costs.

There has been notable progress in AI-aided new drug discovery in recent years, both in new chemical entity discovery and in the related business area. Based on DNNs, DeepMind built the AlphaFold platform to predict 3D protein structures, outperforming other algorithms. As an illustration of this achievement, AlphaFold successfully and accurately predicted 25 of 43 protein structures from scratch, without using previously solved protein structures as templates. Accordingly, AlphaFold won the CASP13 protein-folding competition in December 2018. 59 Based on GANs and other ML methods, Insilico constructed a modular drug design platform, the GENTRL system. In September 2019, they reported the discovery of the first de novo active DDR1 kinase inhibitor developed by the GENTRL system. It took the team only 46 days from target selection to obtain an active drug candidate using in vivo data. 60 Exscientia and Sumitomo Dainippon Pharma developed a new drug candidate, DSP-1181, for the treatment of obsessive-compulsive disorder on the Centaur Chemist AI platform. In January 2020, DSP-1181 started its phase I clinical trials, which means that the comprehensive exploration from program initiation to phase I study took less than 12 months. In contrast, comparable drug discovery usually needs 4–5 years with traditional methods.

How AI transforms medical practice: A case study of cervical cancer

As the most common malignant tumor in women, cervical cancer is a disease that has a clear cause and can be prevented, and even treated, if detected early. Conventionally, the screening strategy for cervical cancer mainly adopts the “three-step” model of “cervical cytology-colposcopy-histopathology.” 61 However, limited by the available testing methods, the efficiency of cervical cancer screening is not high. In addition, owing to a lack of knowledge among doctors in some primary hospitals, patients cannot be provided with the best diagnosis and treatment decisions. In recent years, with the advent of the era of computer science and big data, AI has gradually begun to extend into and blend with various fields. In particular, AI has been widely used in a variety of cancers as a new tool for data mining. For cervical cancer, a clinical database with millions of medical records and pathological data has been built, and an AI medical tool set has been developed. 62 Such an AI analysis algorithm gives doctors access to rapid, iterative AI model training. In addition, a prognostic prediction model established by ML and a web-based prognostic result calculator have been developed, which can accurately predict the risk of postoperative recurrence and death in cervical cancer patients, and thereby better guide decision-making in postoperative adjuvant treatment. 63

AI in materials science

As the cornerstone of modern industry, materials have played a crucial role in the design of revolutionary forms of matter, with targeted properties for broad applications in energy, information, biomedicine, construction, transportation, national security, spaceflight, and so forth. Traditional strategies rely on empirical trial-and-error experimental approaches as well as theoretical simulation methods, e.g., density functional theory, thermodynamics, or molecular dynamics, to discover novel materials. 64 These methods often face the challenges of long research cycles, high costs, and low success rates, and thus cannot meet the growing demands of current materials science. Accelerating the speed of discovery and deployment of advanced materials will therefore be essential in the coming era.

With the rapid development of data processing and powerful algorithms, AI-based methods, such as ML and DL, are emerging with good potentials in the search for and design of new materials prior to actually manufacturing them. 65 , 66 By integrating material property data, such as the constituent element, lattice symmetry, atomic radius, valence, binding energy, electronegativity, magnetism, polarization, energy band, structure-property relation, and functionalities, the machine can be trained to “think” about how to improve material design and even predict the properties of new materials in a cost-effective manner ( Figure 5 ).
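
A minimal sketch of this descriptor-based property prediction idea: a regression model is trained on tabulated material descriptors to predict a target property. The descriptors, synthetic data, and choice of a random forest are illustrative assumptions only.

```python
# Descriptor-to-property regression on synthetic materials data (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_materials = 500
descriptors = rng.normal(size=(n_materials, 6))   # hypothetical featurized compositions
band_gap = 1.5 + descriptors[:, 0] - 0.5 * descriptors[:, 2] + 0.1 * rng.normal(size=n_materials)

model = RandomForestRegressor(n_estimators=300, random_state=0)
scores = cross_val_score(model, descriptors, band_gap, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```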

Figure 5. AI is expected to power the development of materials science.

AI in discovery and design of new materials

Recently, AI techniques have made significant advances in rational design and accelerated discovery of various materials, such as piezoelectric materials with large electrostrains, 67 organic-inorganic perovskites for photovoltaics, 68 molecular emitters for efficient light-emitting diodes, 69 inorganic solid materials for thermoelectrics, 70 and organic electronic materials for renewable-energy applications. 66 , 71 The power of data-driven computing and algorithmic optimization can promote comprehensive applications of simulation and ML (i.e., high-throughput virtual screening, inverse molecular design, Bayesian optimization, and supervised learning, etc.), in material discovery and property prediction in various fields. 72 For instance, using a DL Bayesian framework, the attribute-driven inverse materials design has been demonstrated for efficient and accurate prediction of functional molecular materials, with desired semiconducting properties or redox stability for applications in organic thin-film transistors, organic solar cells, or lithium-ion batteries. 73 It is meaningful to adopt automation tools for quick experimental testing of potential materials and utilize high-performance computing to calculate their bulk, interface, and defect-related properties. 74 The effective convergence of automation, computing, and ML can greatly speed up the discovery of materials. In the future, with the aid of AI techniques, it will be possible to accomplish the design of superconductors, metallic glasses, solder alloys, high-entropy alloys, high-temperature superalloys, thermoelectric materials, two-dimensional materials, magnetocaloric materials, polymeric bio-inspired materials, sensitive composite materials, and topological (electronic and phonon) materials, and so on. In the past decade, topological materials have ignited the research enthusiasm of condensed matter physicists, materials scientists, and chemists, as they exhibit exotic physical properties with potential applications in electronics, thermoelectrics, optics, catalysis, and energy-related fields. From the most recent predictions, more than a quarter of all inorganic materials in nature are topologically nontrivial. The establishment of topological electronic materials databases 75 , 76 , 77 and topological phononic materials databases 78 using high-throughput methods will help to accelerate the screening and experimental discovery of new topological materials for functional applications. It is recognized that large-scale high-quality datasets are required to practice AI. Great efforts have also been expended in building high-quality materials science databases. As one of the top-ranking databases of its kind, the “atomly.net” materials data infrastructure, 79 has calculated the properties of more than 180,000 inorganic compounds, including their equilibrium structures, electron energy bands, dielectric properties, simulated diffraction patterns, elasticity tensors, etc. As such, the atomly.net database has set a solid foundation for extending AI into the area of materials science research. The X-ray diffraction (XRD)-matcher model of atomly.net uses ML to match and classify the experimental XRD to the simulated patterns. Very recently, by using the dataset from atomly.net, an accurate AI model was built to rapidly predict the formation energy of almost any given compound to yield a fairly good predictive ability. 80

AI-powered Materials Genome Initiative

The Materials Genome Initiative (MGI) is a grand plan for the rational realization of new materials and related functions, and it aims to discover, manufacture, and deploy advanced materials efficiently, cost-effectively, and intelligently. The initiative creates policy, resources, and infrastructure for accelerating materials development at a high level. This is a new paradigm for the discovery and design of next-generation materials: it runs from the viewpoint of fundamental building blocks toward general materials developments, and accelerates materials development through efforts in theory, computation, and experiment in a highly integrated, high-throughput manner. MGI sets an ambitious goal and a high standard for future materials development and materials science. The spirit of MGI is to design novel materials by using data pools and powerful computation once the requirements or aspirations for functional usages appear. Theory, computation, and algorithms are the primary and substantial factors in the establishment and implementation of MGI. Advances in theories, computations, and experiments in materials science and engineering provide the footstone not only to accelerate the speed at which new materials are realized but also to shorten the time needed to push new products into the market. These AI techniques bring great promise to the developing MGI. The applications of new technologies, such as ML and DL, directly accelerate materials research and the establishment of MGI. Model construction and application to science and engineering, as well as the data infrastructure, are of central importance. When the AI-powered MGI approaches are coupled with the ongoing autonomy of manufacturing methods, the potential impact on society and the economy in the future is profound. We are now beginning to see that the AI-aided MGI, among other things, integrates experiments, computation, and theory, facilitates access to materials data, equips the next generation of the materials workforce, and enables a paradigm shift in materials development. Furthermore, the AI-powered MGI could also design operational procedures and control equipment to execute experiments, and thus realize autonomous experimentation in future materials research.

Advanced functional materials for generation upgrade of AI

The realization and application of AI techniques depend on computational capability and computer hardware, and their physical functionality ultimately rests on the performance of computers or supercomputers. In our current technology, the electric currents or carriers that drive electronic chips and devices consist of electrons with ordinary characteristics, such as heavy mass and low mobility. All chips and devices therefore emit considerable heat, consuming too much energy and lowering the efficiency of information transmission. Benefiting from the rapid development of modern physics, a series of advanced materials with exotic functional effects have been discovered or designed, including superconductors, quantum anomalous Hall insulators, and topological fermions. In particular, the superconducting state or topologically nontrivial electrons will promote next-generation AI techniques once the (near) room temperature applications of these states are realized and implemented in integrated circuits. 81 In this case, the central processing units, signal circuits, and power channels will be driven by electronic carriers that are massless, energy-diffusionless, of ultra-high mobility, or chirality-protected. Ordinary electrons will be removed from the physical circuits of future-generation chips and devices, leaving superconducting and topological chiral electrons running in future AI chips and supercomputers. The efficiency of information transmission and logic computing will be improved on a vast scale and at very low cost.

AI for materials and materials for AI

The coming decade will continue to witness the development of advanced ML algorithms, newly emerging data-driven AI methodologies, and integrated technologies for facilitating structure design and property prediction, as well as for accelerating the discovery, design, development, and deployment of advanced materials into existing and emerging industrial sectors. At this moment, we are facing challenges in achieving accelerated materials research through the integration of experiment, computation, and theory. The grand MGI, proposed for high-level materials research, helps to promote this process, especially when it is assisted by AI techniques. Still, there is a long way to go before these advanced functional materials can be used in future-generation chips and devices. More materials and functional effects need to be discovered or improved by the developing AI techniques. Meanwhile, it is worth noting that materials are the core components of the devices and chips used to construct computers or machines for advanced AI systems. The rapid development of new materials, especially the emergence of flexible, sensitive, and smart materials, is of great importance for a broad range of attractive technologies, such as flexible circuits, stretchable tactile sensors, multifunctional actuators, transistor-based artificial synapses, integrated networks of semiconductor/quantum devices, intelligent robotics, human-machine interactions, simulated muscles, biomimetic prostheses, etc. These promising materials, devices, and integrated technologies will greatly promote the advancement of AI systems toward wide applications in human life. Once the physical circuits are upgraded by advanced functional or smart materials, AI techniques will largely promote the developments and applications of all disciplines.

AI in geoscience

AI technologies involved in a large range of geoscience fields

Momentous challenges threatening current society require solutions to problems that belong to geoscience, such as evaluating the effects of climate change, assessing air quality, forecasting the effects of disasters on infrastructure, calculating the future consumption and availability of food, water, and soil resources, and identifying factors that are indicators of potential volcanic eruptions, tsunamis, floods, and earthquakes. 82 , 83 This has become possible with the emergence of advanced technology products (e.g., deep-sea drilling vessels and remote sensing satellites), enhancements in computational infrastructure that allow the processing of large-scale, wide-range simulations of multiple models in geoscience, and internet-based data analysis that facilitates the collection, processing, and storage of data in distributed and crowd-sourced environments. 84 The growing availability of massive geoscience data provides unlimited possibilities for AI—which has become pervasive in all aspects of our daily life (e.g., entertainment, transportation, and commerce)—to significantly contribute to geoscience problems of great societal relevance. As geoscience enters the era of massive data, AI, which has been extensively successful in different fields, offers immense opportunities for settling a series of problems in Earth systems. 85 , 86 Accompanied by diversified data, AI-enabled technologies, such as smart sensors, image visualization, and intelligent inversion, are being actively examined in a large range of geoscience fields, such as marine geoscience, rock physics, geology, ecology, seismicity, environment, hydrology, remote sensing, ArcGIS, and planetary science. 87

Multiple challenges in the development of geoscience

There are some traits of geoscience development that restrict the applicability of fundamental algorithms for knowledge discovery: (1) inherent challenges of geoscience processes, (2) limitations of geoscience data collection, and (3) uncertainty in samples and ground truth. 88 , 89 , 90 Amorphous boundaries generally exist in geoscience objects between space and time, which are not as well defined as objects in other fields. Geoscience phenomena are also significantly multivariate, obey nonlinear relationships, and exhibit spatiotemporal structure and non-stationary characteristics. Beyond the inherent challenges of geoscience observations, the massive data across multiple dimensions of time and space, with different levels of incompleteness, noise, and uncertainty, further complicate geoscience analyses. For supervised learning approaches, there are additional difficulties owing to the lack of gold-standard ground truth and the “small size” of samples (e.g., a small amount of historical data with sufficient observations) in geoscience applications.

Usage of AI technologies as efficient approaches to promote the geoscience processes

Geoscientists continually make every effort to develop better techniques for simulating the present status of the Earth system (e.g., how much greenhouse gas is released into the atmosphere) and the connections between and within its subsystems (e.g., how elevated temperatures influence the ocean ecosystem). Viewed from the perspective of geoscience, newly emerging AI-aided approaches are well suited to these issues in geoscience applications: (1) characterizing objects and events 91 ; (2) estimating geoscience variables from observations 92 ; (3) forecasting geoscience variables according to long-term observations 85 ; (4) exploring geoscience data relationships 93 ; and (5) causal discovery and causal attribution. 94 While traditional methods for characterizing geoscience objects and events are primarily rooted in hand-coded features, algorithms can detect such features automatically and improve performance with pattern-mining techniques. However, due to spatiotemporal targets with vague boundaries and the related uncertainties, it can be necessary to advance pattern-mining methods that can explain the temporal and spatial characteristics of geoscience data when characterizing different events and objects. To address the non-stationarity of geoscience data, AI-aided algorithms have been extended to integrate the holistic results of professional predictors and produce robust estimations of climate variables (e.g., humidity and temperature). Furthermore, forecasting long-term trends of the current situation in the Earth system using AI-enabled technologies can simulate future scenarios and support early resource planning and adaptation policies. Mining geoscience data relationships can help us seize vital signs of the Earth system and promote our understanding of geoscience developments. Of great interest is the advancement of AI decision-making methodology under uncertain prediction probabilities, where risks have poorly resolved tails corresponding to the most extreme, transient, and rare events formulated by model sets; such methodology supports various cases for improving accuracy and effectiveness.

AI technologies for optimizing the resource management in geoscience

Currently, AI can perform better than humans in some well-defined tasks. For example, AI techniques have been used in urban water resource planning, mainly due to their remarkable capacity for modeling, flexibility, reasoning, and forecasting of water demand and capacity. The design and application of an Adaptive Intelligent Dynamic Water Resource Planning system, a subset of AI for sustainable water resource management in urban regions, has largely promoted the optimization of water resource allocation, and will ultimately minimize operation costs and improve the sustainability of environmental management 95 ( Figure 6 ). Also, meteorology requires collecting tremendous amounts of data on many different variables, such as humidity, altitude, and temperature; however, dealing with such a huge dataset is a big challenge. 96 An AI-based technique is being utilized to analyze shallow-water reef images, recognize coral color—to track the effects of climate change—and to collect humidity, temperature, and CO 2 data—to grasp the health of our ecological environment. 97 Beyond AI's capabilities for meteorology, it can also play a critical role in decreasing greenhouse gas emissions originating from the electric-power sector. Comprising the production, transportation, allocation, and consumption of electricity, the electric-power sector offers many opportunities for AI applications, including speeding up the development of new clean energy, enhancing system optimization and management, improving electricity-demand forecasts and distribution, and advancing system monitoring. 98 New materials may even be found, with the aid of AI, for batteries to store energy or for materials to absorb CO 2 from the atmosphere. 99 Although traditional fossil fuel operations have been widely used for thousands of years, AI techniques are being used to help explore the development of more sustainable potential energy sources (e.g., fusion technology). 100

Figure 6. Applications of AI in hydraulic resource management.

In addition to the adjustment of energy structures due to climate change (a core part of geoscience systems), a second, less-obvious step could also be taken to reduce greenhouse gas emission: using AI to target inefficiencies. A related statistical report by the Lawrence Livermore National Laboratory pointed out that around 68% of energy produced in the US could be better used for purposeful activities, such as electricity generation or transportation, but is instead contributing to environmental burdens. 101 AI is primed to reduce these inefficiencies of current nuclear power plants and fossil fuel operations, as well as improve the efficiency of renewable grid resources. 102 For example, AI can be instrumental in the operation and optimization of solar and wind farms to make these utility-scale renewable-energy systems far more efficient in the production of electricity. 103 AI can also assist in reducing energy losses in electricity transportation and allocation. 104 A distribution system operator in Europe used AI to analyze load, voltage, and network distribution data, to help “operators assess available capacity on the system and plan for future needs.” 105 AI allowed the distribution system operator to employ existing and new resources to make the distribution of energy assets more readily available and flexible. The International Energy Agency has proposed that energy efficiency is core to the reform of energy systems and will play a key role in reducing the growth of global energy demand to one-third of the current level by 2040.

AI as a building block to promote development in geoscience

The Earth's system is of significant scientific interest, and affects all aspects of life. 106 The challenges, problems, and promising directions provided by AI are definitely not exhaustive, but rather serve to illustrate that there is great potential for future AI research in this important field. Prosperity, development, and popularization of AI approaches in the geosciences are commonly driven by a posed scientific question, and the best way to succeed is for AI researchers to work closely with geoscientists at all stages of research. That is because geoscientists can better understand which scientific questions are important and novel, which sample collection processes can reasonably exhibit the inherent strengths, which datasets and parameters can be used to answer that question, and which pre-processing operations should be conducted, such as removing seasonal cycles or smoothing. Similarly, AI researchers are better suited to decide which data analysis approaches are appropriate and available for the data, what the advantages and disadvantages of these approaches are, and what the approaches actually capture. Interpretability is also an important goal in geoscience because, if we can understand the basic reasoning behind the models, patterns, or relationships extracted from the data, they can be used as building blocks in scientific knowledge discovery. Hence, frequent communication between the researchers avoids long detours and ensures that analysis results are indeed beneficial to both geoscientists and AI researchers.

AI in the life sciences

The developments of AI and the life sciences are intertwined. The ultimate goal of AI is to achieve human-like intelligence, as the human brain is capable of multi-tasking, learning with minimal supervision, and generalizing learned skills, all accomplished with high efficiency and low energy cost. 107

Mutual inspiration between AI and neuroscience

In the past decades, neuroscience concepts have been introduced into ML algorithms and have played critical roles in triggering several important advances in AI. For example, the origins of DL methods lie directly in neuroscience, 5 which further stimulated the emergence of the field of RL. 108 The current state-of-the-art CNNs incorporate several hallmarks of neural computation, including nonlinear transduction, divisive normalization, and maximum-based pooling of inputs, 109 which were directly inspired by the unique processing of visual input in the mammalian visual cortex. 110 By introducing the brain's attentional mechanisms, a novel network has been shown to produce greater accuracy and computational efficiency than conventional CNNs at difficult multi-object recognition tasks. 111 Other neuroscience findings, including the mechanisms underlying working memory, episodic memory, and neural plasticity, have inspired the development of AI algorithms that address several challenges in deep networks. 108 These algorithms can be directly implemented in the design and refinement of brain-machine interfaces and neuroprostheses.

On the other hand, insights from AI research have the potential to offer new perspectives on the basics of intelligence in the brains of humans and other species. Unlike traditional neuroscientists, AI researchers can formalize the concepts of neural mechanisms in a quantitative language to extract their necessity and sufficiency for intelligent behavior. An important illustration of such exchange is the development of the temporal-difference (TD) methods in RL models and the resemblance of TD-form learning in the brain. 112 Therefore, the China Brain Project covers both basic research on cognition and translational research for brain disease and brain-inspired intelligence technology. 113

AI for omics big data analysis

Currently, AI can perform better than humans in some well-defined tasks, such as omics data analysis and smart agriculture. In the big data era, 114 there are many types of data (variety), the volume of data is big, and the generation of data (velocity) is fast. The high variety, big volume, and fast velocity of data make them highly valuable, but also difficult to analyze. Unlike traditional statistics-based methods, AI can easily handle big data and reveal hidden associations.

In genetics studies, there are many successful applications of AI. 115 One of the key questions is to determine whether a single amino acid polymorphism is deleterious. 116 There have been sequence conservation-based SIFT 117 and network-based SySAP, 118 but all these methods have met bottlenecks and cannot be further improved. Sundaram et al. developed PrimateAI, which can predict the clinical outcome of mutation based on DNN. 119 Another problem is how to call copy-number variations, which play important roles in various cancers. 120 , 121 Glessner et al. proposed a DL-based tool DeepCNV, in which the area under the receiver operating characteristic (ROC) curve was 0.909, much higher than other ML methods. 122 In epigenetic studies, m6A modification is one of the most important mechanisms. 123 Zhang et al. developed an ensemble DL predictor (EDLm6APred) for mRNA m6A site prediction. 124 The area under the ROC curve of EDLm6APred was 86.6%, higher than existing m6A methylation site prediction models. There are many other DL-based omics tools, such as DeepCpG 125 for methylation, DeepPep 126 for proteomics, AtacWorks 127 for assay for transposase-accessible chromatin with high-throughput sequencing, and deepTCR 128 for T cell receptor sequencing.
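
The tools above are typically compared by the area under the ROC curve (AUC); the short sketch below shows how such an evaluation is computed, with synthetic labels and scores standing in for real variant predictions.

```python
# ROC/AUC evaluation of a binary classifier on synthetic scores (illustrative only).
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(7)
labels = rng.integers(0, 2, size=1000)                     # 1 = true variant, 0 = artifact
scores = labels * rng.normal(1.0, 1.0, 1000) + (1 - labels) * rng.normal(0.0, 1.0, 1000)

auc = roc_auc_score(labels, scores)
fpr, tpr, thresholds = roc_curve(labels, scores)
print(f"AUC = {auc:.3f}; ROC curve has {len(thresholds)} operating points")
```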

Another emerging application is DL for single-cell sequencing data. Unlike bulk data, in which the sample size is usually much smaller than the number of features, the number of cells in single-cell data can be large compared with the number of genes. That makes DL algorithms applicable to most single-cell data. Since single-cell data are sparse and have many unmeasured missing values, DeepImpute can accurately impute these missing values in the big gene × cell matrix. 129 During the quality control of single-cell data, it is also important to remove doublets; the Solo tool embeds cells using an autoencoder and then builds a feedforward neural network to identify the doublets. 130 Generative modeling of the potential energy underlying single-cell gradients has been used to learn the underlying differentiation landscape from time-series single-cell RNA sequencing data. 131
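
A hedged sketch of autoencoder-based imputation for a sparse gene × cell matrix, in the spirit of the tools cited above but not their actual code: zeros are treated as missing, and the reconstruction loss is computed only on observed entries.

```python
# Autoencoder imputation of a sparse expression matrix (illustrative only).
import torch
import torch.nn as nn

n_cells, n_genes, latent = 1000, 2000, 32          # hypothetical sizes
counts = torch.rand(n_cells, n_genes)
counts[torch.rand_like(counts) < 0.8] = 0.0        # simulate ~80% dropout
mask = (counts > 0).float()                        # observed entries only

model = nn.Sequential(
    nn.Linear(n_genes, 256), nn.ReLU(),
    nn.Linear(256, latent), nn.ReLU(),
    nn.Linear(latent, 256), nn.ReLU(),
    nn.Linear(256, n_genes),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(50):
    recon = model(counts)
    loss = (((recon - counts) ** 2) * mask).sum() / mask.sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

imputed = torch.where(mask.bool(), counts, model(counts))  # fill only missing values
```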

In protein structure prediction, the DL-based AlphaFold2 can accurately predict the 3D structures of 98.5% of human proteins, and will predict the structures of 130 million proteins of other organisms in the next few months. 132 It is even considered to be the second-largest breakthrough in the life sciences after the Human Genome Project 133 and will facilitate drug development, among other things.

AI makes modern agriculture smart

Agriculture is entering a fourth revolution, termed agriculture 4.0 or smart agriculture, benefiting from the arrival of the big data era as well as the rapid progress of many advanced technologies, in particular ML, modern information, and communication technologies. 134 , 135 Applications of DL, information, and sensing technologies in agriculture cover all stages of agricultural production, including breeding, cultivation, and harvesting.

Traditional breeding usually exploits genetic variation by searching natural variation or artificial mutagenesis. However, it is hard for either method to expose the whole mutation spectrum. Using DL models trained on the existing variants, predictions can be made on multiple unidentified gene loci. 136 For example, an ML method, the multi-criteria rice reproductive gene predictor, was developed and applied to predict coding and lincRNA genes associated with reproductive processes in rice. 137 Moreover, models trained in species with well-studied genomic data (such as Arabidopsis and rice) can also be applied to other species with limited genome information (such as wild strawberry and soybean). 138 In most cases, the links between genotypes and phenotypes are more complicated than expected. One gene can usually respond to multiple phenotypes, and one trait is generally the product of synergism among multiple genes and developmental processes. For this reason, multi-trait DL models were developed and have enabled genomic editing in plant breeding. 139 , 140

It is well known that dynamic and accurate monitoring of crops during the whole growth period is vitally important to precision agriculture. In this new stage of agriculture, both remote sensing and DL play indispensable roles. Specifically, remote sensing (including proximal sensing) can produce agricultural big data from ground, air-borne, and space-borne platforms, which have a unique potential to offer an economical approach for non-destructive, timely, objective, synoptic, long-term, and multi-scale information for crop monitoring and management, thereby greatly assisting in precision decisions regarding irrigation, nutrients, disease, pests, and yield. 141 , 142 DL makes it possible to simply, efficiently, and accurately discover knowledge from massive and complicated data, especially remote sensing big data characterized by multiple spatial-temporal-spectral information, owing to its strong capability for feature representation and its superiority in capturing the essential relations between observation data and agronomy parameters or crop traits. 135 , 143 The integration of DL and big data for agriculture has the potential to be as disruptive as the green revolution. As shown in Figure 7 , in a possible smart agriculture scenario, multi-source satellite remote sensing data with various geo- and radio-metric information, as well as abundant spectral information from the UV, visible, and shortwave infrared to microwave regions, can be collected. In addition, advanced aircraft systems, such as unmanned aerial vehicles with multi/hyper-spectral cameras on board, and smartphone-based portable devices, will be used to obtain multi/hyper-spectral data in specific fields. All types of data can be integrated by DL-based fusion techniques for different purposes, and then shared with all users via cloud computing. On the cloud computing platform, different agricultural remote sensing models, developed by a combination of data-driven ML methods and physical models, will be deployed and applied to acquire a range of biophysical and biochemical parameters of crops, which will be further analyzed by a decision-making and prediction system to assess current water/nutrient stress and growth status, and to predict future development. As a result, an automatic or interactive user service platform can be made accessible to support the correct decisions and appropriate actions through an integrated irrigation and fertilization system.

Figure 7. Integration of AI and remote sensing in smart agriculture.

Furthermore, DL presents unique advantages in specific agricultural applications, such as dense scenes, which increase the difficulty of manual planting and harvesting. It is reported that CNN and autoencoder models, trained with image data, are being used increasingly for phenotyping and yield estimation, 144 such as counting fruits in orchards, grain recognition and classification, disease diagnosis, etc. 145 , 146 , 147 Consequently, this may greatly liberate the labor force.

The application of DL in agriculture is just beginning. There are still many problems and challenges for the future development of DL technology. We believe that, with the continuous acquisition of massive data and the optimization of algorithms, DL will have better prospects in agricultural production.

AI in physics

The scale of modern physics ranges from the size of a neutron to the size of the Universe ( Figure 8 ). According to this scale, physics can be divided into four categories: particle physics on the scale of neutrons, nuclear physics on the scale of atoms, condensed matter physics on the scale of molecules, and cosmic physics on the scale of the Universe. AI, in particular ML, plays an important role in physics at all of these scales, since AI algorithms are becoming the main trend in data analysis, such as the reconstruction and analysis of images.

Figure 8. The scale of physics.

Speeding up simulations and identifications of particles with AI

There are many applications, or explorations of applications, of AI in particle physics. We cannot cover all of them here, but only use lattice quantum chromodynamics (LQCD) and the experiments on the Beijing Spectrometer (BES) and the Large Hadron Collider (LHC) to illustrate the power of ML in both theoretical and experimental particle physics.

LQCD studies the nonperturbative properties of QCD by using Monte Carlo simulations on supercomputers to help us understand the strong interaction that binds quarks together to form nucleons. Markov chain Monte Carlo simulations commonly used in LQCD suffer from topological freezing and critical slowing down as the simulations approach the real situation of the actual world. New algorithms with the help of DL are being proposed and tested to overcome those difficulties. 148 , 149 Physical observables are extracted from LQCD data, whose signal-to-noise ratio deteriorates exponentially. For non-Abelian gauge theories, such as QCD, complicated contour deformations can be optimized by using ML to reduce the variance of LQCD data. Proof-of-principle applications in two dimensions have been studied. 150 ML can also be used to reduce the time cost of generating LQCD data. 151

On the experimental side, particle identification (PID) plays an important role. Recently, a few PID algorithms were developed for BES-III, and the ANN 152 is one of them. Also, extreme gradient boosting has been used for multi-dimensional distribution reweighting, muon identification, and cluster reconstruction, and can improve muon identification. U-Net is a convolutional network for pixel-level semantic segmentation, which is widely used in CV. It has been applied on BES-III to solve the problem of multi-turn curling track finding for the main drift chamber. The average efficiency and purity for the first turn's hits are about 91%, at a threshold of 0.85. Current (and future) particle physics experiments are producing a huge amount of data. Machine learning can be used to discriminate between signal and overwhelming background events. Examples of data analyses at the LHC, using supervised ML, can be found in a 2018 collaboration. 153 To take potential advantage of quantum computers, quantum ML methods are also being investigated; see, for example, Wu et al., 154 and references therein, for proof-of-concept studies.
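
As a toy illustration of the signal-versus-background discrimination mentioned above, the sketch below trains a gradient boosting classifier on a few hypothetical reconstructed features; real analyses use large simulated datasets and many more variables.

```python
# Signal vs. background classification on synthetic event features (illustrative only).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 20000
is_signal = rng.integers(0, 2, size=n)
mass = np.where(is_signal, rng.normal(125, 2, n), rng.uniform(100, 150, n))
pt = rng.exponential(40, n) + 10 * is_signal
isolation = rng.normal(0, 1, n) - 0.5 * is_signal
X = np.column_stack([mass, pt, isolation])

X_tr, X_te, y_tr, y_te = train_test_split(X, is_signal, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```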

AI makes nuclear physics powerful

Cosmic ray muon tomography (muography) 155 is an imaging technology that uses natural cosmic ray muon radiation rather than artificial radiation, thereby reducing hazards. As an advantage, this technology can detect high-Z materials without destruction, as muons are sensitive to high-Z materials. The Classification Model Algorithm (CMA) is based on classification in supervised learning and gray system theory; it generates a binary classifier and decision function with the muon track as input, and the output indicates whether material exists at the given location. AI thus helps the user improve the efficiency of the scanning time with muons.

Also, for nuclear detection, the Cs2LiYCl6:Ce (CLYC) scintillator responds to both electrons and neutrons, creating a pulse signal, and can therefore be applied to detect both neutrons and electrons, 156 but this requires identifying the two particles by analyzing the shapes of the waves, that is, n-γ identification. The traditional method has been pulse shape discrimination (PSD), which separates the waves of the two particles by analyzing the distribution of pulse information—such as amplitude, width, rise time, and fall time—and the two particles can be separated when the distribution shows two separated Gaussian components. Traditional PSD can only analyze single-pulse waves, rather than the multi-pulse waves that occur when two particles interact with the CLYC closely in time. This can be solved by using an ANN method for classification into six categories (n, γ, n + n, n + γ, γ + n, γ + γ). Also, there are several parameters that could be used by AI to improve the reconstruction algorithm with high efficiency and less error.
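
A hedged sketch of such ANN-based pulse-shape classification: a small multilayer perceptron maps pulse features (amplitude, width, rise time, fall time) to the particle class. The data are synthetic, and the six-class pile-up case described above would follow the same pattern with labeled waveform data.

```python
# MLP-based n-gamma discrimination on synthetic pulse features (illustrative only).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 5000
is_neutron = rng.integers(0, 2, size=n)
amplitude = rng.normal(1.0, 0.2, n)
rise_time = rng.normal(20, 3, n)
fall_time = rng.normal(300, 30, n) + 80 * is_neutron   # neutrons: slower decay component
width = rise_time + fall_time
X = np.column_stack([amplitude, width, rise_time, fall_time])

X_tr, X_te, y_tr, y_te = train_test_split(X, is_neutron, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500))
clf.fit(X_tr, y_tr)
print("n-gamma discrimination accuracy:", clf.score(X_te, y_te))
```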

AI-aided condensed matter physics

AI opens up a new avenue for physical science, especially when a trove of data is available. Recent works demonstrate that ML provides useful insights to improve density functional theory (DFT), in which the single-electron picture of the Kohn-Sham scheme has difficulty accounting for the exchange and correlation effects of many-body systems. Yu et al. proposed a Bayesian optimization algorithm to fit the Hubbard U parameter; the new method can find the optimal Hubbard U through a self-consistent process with good efficiency compared with the linear response method, 157 and boosts the accuracy to near the hybrid-functional level. Snyder et al. developed an ML density functional for a 1D non-interacting, non-spin-polarized fermion system to obtain significantly improved kinetic energies. This method enables a direct approximation of the kinetic energy of a quantum system and can be utilized in orbital-free DFT modeling; it can even bypass solving the Kohn-Sham equation while maintaining precision at the quantum-chemical level when a strong correlation term is included. Recently, FermiNet showed that the many-body quantum mechanics equations can be solved via AI. AI models also show advantages in capturing interatomic force fields. In 2010, the Gaussian approximation potential (GAP) 158 was introduced as a powerful interatomic force field to describe the interactions between atoms. GAP uses kernel regression and invariant many-body representations, and performs quite well. For instance, it can simulate the crystallization of amorphous materials under high pressure fairly accurately. By employing the smooth overlap of atomic positions (SOAP) kernel, 159 the accuracy of the potential can be further enhanced and, therefore, SOAP-GAP can be viewed as a field-leading method for AI molecular dynamics simulation. There are also several other well-developed AI interatomic potentials, e.g., crystal graph CNNs provide a widely applicable way of vectorizing crystalline materials; SchNet embeds continuous-filter convolutional layers into its DNNs to ease molecular dynamics, as the potentials are spatially continuous; and DimeNet constructs a directional message-passing neural network by adding not only the bond lengths between atoms but also the bond angles, the dihedral angles, and the interactions between unconnected atoms into the model to obtain good accuracy.
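
To illustrate the kernel-regression idea behind GAP-style potentials, the sketch below fits a Gaussian-kernel ridge model mapping structural descriptors to energies; real potentials use invariant many-body descriptors such as SOAP, whereas the descriptors and energies here are synthetic placeholders.

```python
# Kernel ridge regression as a toy interatomic-potential surrogate (illustrative only).
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(11)
n_structures, n_descriptors = 400, 30
X = rng.normal(size=(n_structures, n_descriptors))       # hypothetical descriptors
energy = np.sin(X[:, 0]) + 0.3 * X[:, 1] ** 2 + 0.01 * rng.normal(size=n_structures)

X_tr, X_te, y_tr, y_te = train_test_split(X, energy, test_size=0.25, random_state=0)
model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.1).fit(X_tr, y_tr)
print("test RMSE:", np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2)))
```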

AI helps explore the Universe

AI is one of the newest technologies, while astronomy is one of the oldest sciences. When the two meet, new opportunities for scientific breakthroughs are often triggered. Observations and data analysis play a central role in astronomy. The amount of data collected by modern telescopes has reached unprecedented levels; even the most basic task of constructing a catalog has become challenging with traditional source-finding tools. 160 Astronomers have developed automated and intelligent source-finding tools based on DL, which not only offer significant advantages in operational speed but also facilitate a more comprehensive understanding of the Universe by identifying particular forms of objects that cannot be detected by traditional software and visual inspection. 160 , 161

More than a decade ago, a citizen science project called “Galaxy Zoo” was proposed to help label the one million images of galaxies collected by the Sloan Digital Sky Survey (SDSS) by posting the images online and recruiting volunteers. 162 Larger optical telescopes, in operation or under construction, produce data volumes several orders of magnitude larger than those of SDSS. Even with volunteers involved, there is no way to analyze the vast amount of data received. The advantages of ML are not limited to source-finding and galaxy classification; in fact, it has a much wider range of applications. For example, CNNs play an important role in detecting and decoding gravitational wave signals in real time, reconstructing all parameters within 2 ms, while traditional algorithms take several days to accomplish the same task. 163 Such DL systems have also been used to automatically generate alerts for transients and to track asteroids and other fast-moving near-Earth objects, improving detection efficiency by several orders of magnitude. In addition, astrophysicists are exploring the use of neural networks to measure galaxy clusters and study the evolution of the Universe.

In addition to the amazing speed, neural networks seem to have a deeper understanding of the data than expected and can recognize more complex patterns, indicating that the “machine” is evolving rather than just learning the characteristics of the input data.

AI in chemistry

Chemistry plays an important “central” role among the sciences 164 because it investigates the structure and properties of matter and identifies the chemical reactions that convert substances into one another. Accordingly, chemistry is a data-rich branch of science containing complex information resulting from centuries of experiments and, more recently, decades of computational analysis. This vast treasure trove of data is most apparent within the Chemical Abstracts Service, which has collected more than 183 million unique organic and inorganic substances, including alloys, coordination compounds, minerals, mixtures, polymers, and salts, and is expanding by thousands of new substances daily. 165 The unlimited complexity in the variety of material compounds explains why chemistry research is still a labor-intensive task. The level of complexity and the vast amounts of data within chemistry provide a prime opportunity to achieve significant breakthroughs with the application of AI. First, the types of molecules that can be constructed from atoms are almost unlimited, which leads to an unlimited chemical space 166 ; the interconnection of these molecules with all possible combinations of factors, such as temperature, substrates, and solvents, is overwhelmingly large, giving rise to an unlimited reaction space. 167 Exploring the unlimited chemical and reaction spaces, and navigating to the optimum ones with the desired properties, is thus practically impossible through human effort alone. Second, in chemistry, the huge assortment of molecules and their interplay with external environments bring a new level of complexity, which cannot simply be predicted using physical laws. While many concepts, rules, and theories have been generalized from centuries of experience studying trivial (i.e., single-component) systems, nontrivial complexities are more likely as we discover that “more is different,” in the words of Philip Warren Anderson, American physicist and Nobel Laureate. 168 Nontrivial complexities occur when the scale changes and symmetry breaks in larger, increasingly complex systems, and the rules shift from quantitative to qualitative. Owing to the lack of a systematic and analytical theory for the structures, properties, and transformations of macroscopic substances, chemistry research has thus been guided by heuristics and fragmentary rules accumulated over the previous centuries, yielding progress that often proceeds only through trial and error. ML can recognize patterns from large amounts of data, thereby offering an unprecedented way of dealing with complexity and reshaping chemistry research by revolutionizing the way in which data are used. Every sub-field of chemistry currently utilizes some form of AI, including tools for chemistry research and data generation, such as analytical chemistry and computational chemistry, as well as applications to organic chemistry, catalysis, and medical chemistry, which we discuss herein.

AI breaks the limitations of manual feature selection methods

In analytical chemistry, the extraction of information has traditionally relied heavily on feature selection techniques based on prior human experience. Unfortunately, this approach is inefficient, incomplete, and often biased. Automated data analysis based on AI can break the limitations of manual variable selection by learning from large amounts of data. Feature selection through DL algorithms enables information extraction from datasets in NMR, chromatography, spectroscopy, and other analytical tools, 169 thereby improving model prediction accuracy. These ML approaches will greatly accelerate the analysis of materials, leading to the rapid discovery of new molecules or materials. Raman scattering, for instance, since its discovery in the 1920s, has been widely employed as a powerful vibrational spectroscopy technique, capable of providing vibrational fingerprints intrinsic to analytes and thus enabling identification of molecules. 170 Recently, ML methods have been trained to recognize features in Raman (or SERS) spectra and identify an analyte by applying DL networks, including ANNs, CNNs, and fully convolutional networks for feature engineering. 171 For example, Leong et al. designed a machine-learning-driven “SERS taster” to simultaneously harness useful vibrational information from multiple receptors for enhanced multiplex profiling of five wine flavor molecules at ppm levels. Principal-component analysis is employed for the discrimination of alcohols with varying degrees of substitution, and support vector machine discriminant analysis is used to quantitatively classify all flavors with 100% accuracy. 172 Overall, AI techniques provide the first glimmer of hope for a universal method of spectral data analysis that is fast, accurate, objective, and definitive, with attractive advantages in a wide range of applications.
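To make the kind of pipeline described above concrete, the sketch below chains principal-component analysis with a support vector machine classifier on spectral data, roughly mirroring the PCA-plus-SVM workflow mentioned for the “SERS taster.” The spectra, labels, and parameter choices are illustrative placeholders, not the published analysis.

```python
# Illustrative PCA + SVM pipeline for classifying vibrational spectra.
# `spectra` stands in for an (n_samples, n_wavenumbers) array of pre-processed
# SERS intensities and `labels` for the known analyte class of each row.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
spectra = rng.normal(size=(120, 600))          # placeholder spectra
labels = rng.integers(0, 5, size=120)          # placeholder flavor classes

clf = make_pipeline(
    StandardScaler(),                          # normalize each wavenumber channel
    PCA(n_components=10),                      # compress spectra to principal components
    SVC(kernel="linear"),                      # SVM discriminant analysis on the scores
)
scores = cross_val_score(clf, spectra, labels, cv=5)
print("cross-validated accuracy:", scores.mean())
```

In practice, the placeholder arrays would be replaced by baseline-corrected spectra and their known analyte labels, and the number of retained components would be tuned on held-out data.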

AI improves the accuracy and efficiency for various levels of computational theory

Complementary to analytical tools, computational chemistry has proven to be a powerful approach for using simulations to understand chemical properties; however, it faces an accuracy-versus-efficiency dilemma. This dilemma greatly limits the application of computational chemistry to real-world chemistry problems. To overcome it, ML and other AI methods are being applied to improve the accuracy and efficiency of the various levels of theory used to describe effects arising at different time and length scales in the multi-scale modeling of chemical reactions. 173 Many of the open challenges in computational chemistry can be addressed by ML approaches, for example, solving Schrödinger's equation, 174 developing atomistic 175 or coarse-grained 176 potentials, constructing reaction coordinates, 177 developing reaction kinetics models, 178 and identifying key descriptors for computable properties. 179 In addition to analytical chemistry and computational chemistry, several disciplines of chemistry have incorporated AI technology into their chemical problems. We discuss organic chemistry, catalysis, and medical chemistry as examples of areas where ML has made a significant impact. Many examples exist in the literature for other subfields of chemistry, and AI will continue to demonstrate breakthroughs in a wide range of chemical applications.

AI enables robotics capable of automating the synthesis of molecules

Organic chemistry studies the structures, properties, and reactions of carbon-based molecules. The complexity of the chemical and reaction space means that, for any given property, there is an essentially unlimited number of potential molecules that chemists could synthesize. Further complications arise when deciding how to synthesize a particular molecule, given that the process relies heavily on heuristics and laborious testing. Researchers have begun to address these challenges using AI. Given enough data, any property of interest of a molecule can be predicted by mapping the molecular structure to the corresponding property using supervised learning, without resorting to physical laws. In addition to known molecules, new molecules can be designed by sampling the chemical space 180 using methods such as autoencoders and CNNs, with the molecules encoded as sequences or graphs. Retrosynthesis, the planning of synthetic routes, which was once considered an art, has become much simpler with the help of ML algorithms. The Chematica system, 181 for instance, is now capable of autonomously planning synthetic routes that are subsequently proven to work in the laboratory. Once target molecules and the route of synthesis are determined, suitable reaction conditions can be predicted or optimized using ML techniques. 182
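As a minimal illustration of structure-to-property mapping by supervised learning, the sketch below fits a regression model to fingerprint-like molecular descriptors. The descriptors, the toy property, and the choice of a random forest are assumptions made for this example; a real application would compute descriptors from actual structures (for example, with a cheminformatics toolkit) and use measured property values.

```python
# Illustrative structure-to-property mapping by supervised learning.
# `X` stands in for molecular descriptors (e.g., fingerprint bits) and
# `y` for a measured property; both are synthetic placeholders here.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(500, 1024)).astype(float)        # fingerprint-like features
y = X[:, :20].sum(axis=1) + rng.normal(scale=0.5, size=500)   # toy property values

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("MAE on held-out molecules:", mean_absolute_error(y_test, model.predict(X_test)))
```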

The integration of these AI-based approaches with robotics has enabled fully AI-guided robotic platforms capable of automating the synthesis of small organic molecules without human intervention (Figure 9). 183 , 184

Figure 9. A closed-loop workflow enabling automatic and intelligent design, synthesis, and assay of molecules in organic chemistry by AI.

AI helps to search through vast catalyst design spaces

Catalytic chemistry originates from catalyst technologies in the chemical industry for the efficient and sustainable production of chemicals and fuels. Thus far, making novel heterogeneous catalysts with good performance (i.e., stable, active, and selective) remains a challenging endeavor because a catalyst's performance depends on many properties: composition, support, surface termination, particle size, particle morphology, atomic coordination environment, porous structure, and the reactor environment during the reaction. The inherent complexity of catalysis makes discovering and developing catalysts with desired properties highly dependent on intuition and experiment, which is costly and time-consuming. AI technologies such as ML, when combined with experimental and in silico high-throughput screening of combinatorial catalyst libraries, can aid catalyst discovery by helping to search through vast design spaces. With well-defined structures and standardized data, including reaction results and in situ characterization results, the complex association between catalytic structure and catalytic performance can be revealed by AI. 185 , 186 Accurate descriptors of the effects of molecules, molecular aggregation states, and molecular transport on catalysts could also be predicted. With this approach, researchers can build virtual laboratories to develop new catalysts and catalytic processes.

AI enables screening of chemicals in toxicology with minimum ethical concerns

A more complicated sub-field of chemistry is medical chemistry, which is challenging because of the complex interactions between exotic substances and the inherent chemistry within a living system. Toxicology, for instance, as a broad field, seeks to predict and eliminate substances (e.g., pharmaceuticals, natural products, food products, and environmental substances) that may cause harm to a living organism. Because living organisms are inherently complex, nearly any known substance can cause toxicity at a high enough exposure. Moreover, toxicity depends on an array of other factors, including organism size, species, age, sex, genetics, diet, combination with other chemicals, overall health, and environmental context. Given the scale and complexity of toxicity problems, AI is likely to be the only realistic approach to meet regulatory requirements for the screening, prioritization, and risk assessment of chemicals (including mixtures), thereby revolutionizing the landscape of toxicology. 187 In summary, AI is turning chemistry from a labor-intensive branch of science into a highly intelligent, standardized, and automated field in which much more can be achieved than is possible with human labor alone. Underlying knowledge, with new concepts, rules, and theories, is expected to advance with the application of AI algorithms. A large portion of the new chemistry knowledge leading to significant breakthroughs is expected to be generated from AI-based chemistry research in the decades to come.

Conclusions

This paper carries out a comprehensive survey of the development and application of AI across a broad range of fundamental sciences, including information science, mathematics, medical science, materials science, geoscience, life science, physics, and chemistry. Despite the fact that AI has been used pervasively in a wide range of applications, ML security risks remain, with both data and models serving as attack targets during the training and execution phases. Firstly, since the performance of an ML system is highly dependent on the data used to train it, these input data are crucial for the security of the ML system. For instance, adversarial example attacks 188 supply malicious input data that lead the ML system into making false judgments (predictions or categorizations) under small perturbations that are imperceptible to humans, and data poisoning, by intentionally manipulating raw, training, or testing data, can reduce model accuracy or serve other error-specific attack purposes. Secondly, ML model attacks include backdoor attacks on DL, CNN, and federated learning models that manipulate the model's parameters directly, as well as model stealing, model inversion, and membership inference attacks, which can steal model parameters or leak sensitive training data. While a number of defense techniques against these security threats have been proposed, new attack models targeting ML systems are constantly emerging. Thus, it is necessary to address the problem of ML security and develop robust ML systems that remain effective under malicious attack.
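As a concrete illustration of the adversarial example attacks mentioned above, the following sketch applies the well-known fast gradient sign method (FGSM) to a toy, untrained PyTorch model. The model, input, and perturbation budget are placeholders; the prediction may or may not flip for this particular random model.

```python
# Minimal fast-gradient-sign (FGSM) adversarial perturbation, illustrating how
# a small, targeted input change can alter a model's prediction.
# The model and input are toy placeholders, not a specific attacked system.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()

x = torch.randn(1, 20)                       # clean input
y = torch.tensor([0])                        # its true label
x_adv = x.clone().requires_grad_(True)

loss = nn.CrossEntropyLoss()(model(x_adv), y)
loss.backward()                              # gradient of the loss w.r.t. the input

epsilon = 0.05                               # perturbation budget
x_adv = (x + epsilon * x_adv.grad.sign()).detach()

print("clean prediction:", model(x).argmax(dim=1).item())
print("perturbed prediction:", model(x_adv).argmax(dim=1).item())
```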

Due to the data-driven character of ML methods, the features of the training and testing data must be drawn from the same distribution, which is difficult to guarantee in practice. In practical applications, the data source may differ from that of the training dataset. In addition, the data feature distribution may drift over time, which leads to a decline in model performance. Moreover, if the model is retrained only on new data, catastrophic forgetting occurs: the model remembers only the new features and forgets those learned previously. To solve this problem, increasing attention is being paid to endowing models with the ability of lifelong learning, that is, shifting the computing paradigm from “offline learning + online reasoning” to “online continuous learning,” so that the model can keep learning throughout its lifetime, much like a human being.

Acknowledgments

This work was partially supported by the National Key R&D Program of China (2018YFA0404603, 2019YFA0704900, 2020YFC1807000, and 2020YFB1313700), the Youth Innovation Promotion Association CAS (2011225, 2012006, 2013002, 2015316, 2016275, 2017017, 2017086, 2017120, 2017204, 2017300, 2017399, 2018356, 2020111, 2020179, Y201664, Y201822, and Y201911), NSFC (nos. 11971466, 12075253, 52173241, and 61902376), the Foundation of State Key Laboratory of Particle Detection and Electronics (SKLPDE-ZZ-201902), the Program of Science & Technology Service Network of CAS (KFJ-STS-QYZX-050), the Fundamental Science Center of the National Nature Science Foundation of China (nos. 52088101 and 11971466), the Scientific Instrument Developing Project of CAS (ZDKYYQ20210003), the Strategic Priority Research Program (B) of CAS (XDB33000000), the National Science Foundation of Fujian Province for Distinguished Young Scholars (2019J06023), the Key Research Program of Frontier Sciences, CAS (nos. ZDBS-LY-7022 and ZDBS-LY-DQC012), the CAS Project for Young Scientists in Basic Research (no. YSBR-005). The study is dedicated to the 10th anniversary of the Youth Innovation Promotion Association of the Chinese Academy of Sciences.

Author contributions

Y.X., Q.W., Z.A., Fei W., C.L., Z.C., J.M.T., and J.Z. conceived and designed the research. Z.A., Q.W., Fei W., Libo.Z., Y.W., F.D., and C.W.-Q. wrote the “AI in information science” section. Xin.L. wrote the “AI in mathematics” section. J.Q., K.H., W.S., J.W., H.X., Y.H., and X.C. wrote the “AI in medical science” section. E.L., C.F., Z.Y., and M.L. wrote the “AI in materials science” section. Fang W., R.R., S.D., M.V., and F.K. wrote the “AI in geoscience” section. C.H., Z.Z., L.Z., T.Z., J.D., J.Y., L.L., M.L., and T.H. wrote the “AI in life sciences” section. Z.L., S.Q., and T.A. wrote the “AI in physics” section. X.L., B.Z., X.H., S.C., X.L., W.Z., and J.P.L. wrote the “AI in chemistry” section. Y.X., Q.W., and Z.A. wrote the “Abstract,” “introduction,” “history of AI,” and “conclusions” sections.

Declaration of interests

The authors declare no competing interests.

Published Online: October 28, 2021


Published on 19.4.2024 in Vol 26 (2024)

Psychometric Evaluation of a Tablet-Based Tool to Detect Mild Cognitive Impairment in Older Adults: Mixed Methods Study


Original Paper

  • Josephine McMurray 1, 2 * , MBA, PhD   ; 
  • AnneMarie Levy 1 * , MSc, PhD   ; 
  • Wei Pang 1, 3 * , BTM   ; 
  • Paul Holyoke 4 , PhD  

1 Lazaridis School of Business & Economics, Wilfrid Laurier University, Brantford, ON, Canada

2 Health Studies, Faculty of Human and Social Sciences, Wilfrid Laurier University, Brantford, ON, Canada

3 Biomedical Informatics & Data Science, Yale University, New Haven, CT, United States

4 SE Research Centre, Markham, ON, Canada

*these authors contributed equally

Corresponding Author:

Josephine McMurray, MBA, PhD

Lazaridis School of Business & Economics

Wilfrid Laurier University

73 George St

Brantford, ON, N3T3Y3

Phone: 1 548 889 4492

Email: [email protected]

Background: With the rapid aging of the global population, the prevalence of mild cognitive impairment (MCI) and dementia is anticipated to surge worldwide. MCI serves as an intermediary stage between normal aging and dementia, necessitating more sensitive and effective screening tools for early identification and intervention. The BrainFx SCREEN is a novel digital tool designed to assess cognitive impairment. This study evaluated its efficacy as a screening tool for MCI in primary care settings, particularly in the context of an aging population and the growing integration of digital health solutions.

Objective: The primary objective was to assess the validity, reliability, and applicability of the BrainFx SCREEN (hereafter, the SCREEN) for MCI screening in a primary care context. We conducted an exploratory study comparing the SCREEN with an established screening tool, the Quick Mild Cognitive Impairment (Qmci) screen.

Methods: A concurrent mixed methods, prospective study using a quasi-experimental design was conducted with 147 participants from 5 primary care Family Health Teams (FHTs; characterized by multidisciplinary practice and capitated funding) across southwestern Ontario, Canada. Participants included health care practitioners, patients, and FHT administrative executives. Individuals aged ≥55 years with no history of MCI or diagnosis of dementia rostered in a participating FHT were eligible to participate. Participants were screened using both the SCREEN and Qmci. The study also incorporated the Geriatric Anxiety Scale–10 to assess general anxiety levels at each cognitive screening. The SCREEN’s scoring was compared against that of the Qmci and the clinical judgment of health care professionals. Statistical analyses included sensitivity, specificity, internal consistency, and test-retest reliability assessments.

Results: The study found that the SCREEN’s longer administration time and complex scoring algorithm, which is proprietary and unavailable for independent analysis, presented challenges. Its internal consistency, indicated by a Cronbach α of 0.63, was below the acceptable threshold. The test-retest reliability also showed limitations, with moderate intraclass correlation coefficient (0.54) and inadequate κ (0.15) values. Sensitivity and specificity were consistent (63.25% and 74.07%, respectively) between cross-tabulation and discrepant analysis. In addition, the study faced limitations due to its demographic skew (96/147, 65.3% female, well-educated participants), the absence of a comprehensive gold standard for MCI diagnosis, and financial constraints limiting the inclusion of confirmatory neuropsychological testing.

Conclusions: The SCREEN, in its current form, does not meet the necessary criteria for an optimal MCI screening tool in primary care settings, primarily due to its longer administration time and lower reliability. As the number of digital health technologies increases and evolves, further testing and refinement of tools such as the SCREEN are essential to ensure their efficacy and reliability in real-world clinical settings. This study advocates for continued research in this rapidly advancing field to better serve the aging population.

International Registered Report Identifier (IRRID): RR2-10.2196/25520

Introduction

Mild cognitive impairment (MCI) is a syndrome characterized by a slight but noticeable and measurable deterioration in cognitive abilities, predominantly memory and thinking skills, that is greater than expected for an individual’s age and educational level [ 1 , 2 ]. The functional impairments associated with MCI are subtle and often impair instrumental activities of daily living (ADL). Instrumental ADL include everyday tasks such as managing finances, cooking, shopping, or taking regularly prescribed medications and are considered more complex than ADL such as bathing, dressing, and toileting [ 3 , 4 ]. In cases in which memory impairment is the primary indicator of the disease, MCI is classified as amnesic MCI and when significant impairment of non–memory-related cognitive domains such as visual-spatial or executive functioning is dominant, MCI is classified as nonamnesic [ 5 ].

Cognitive decline, more so than cancer and cardiovascular disease, poses a substantial threat to an individual’s ability to live independently or at home with family caregivers [ 6 ]. The Centers for Disease Control and Prevention reports that 1 in 8 adults aged ≥60 years experiences memory loss and confusion, with 35% reporting functional difficulties with basic ADL [ 7 ]. The American Academy of Neurology estimates that the prevalence of MCI ranges from 13.4% to 42% in people aged ≥65 years [ 8 ], and a 2023 meta-analysis that included 233 studies and 676,974 participants aged ≥50 years estimated that the overall global prevalence of MCI is 19.7% [ 9 ]. Once diagnosed, the prognosis for MCI is variable, whereby the impairment may be reversible; the rate of decline may plateau; or it may progressively worsen and, in some cases, may be a prodromal stage to dementia [ 10 - 12 ]. While estimates vary based on sample (community vs clinical), annual rates of conversion from MCI to dementia range from 5% to 24% [ 11 , 12 ], and those who present with multiple domains of cognitive impairment are at higher risk of conversion [ 5 ].

The risk of developing MCI rises with age, and while there are no drug treatments for MCI, nonpharmacologic interventions may improve cognitive function, alleviate the burden on caregivers, and potentially delay institutionalization should MCI progress to dementia [ 13 ]. To overcome the challenges of early diagnosis, which currently depends on self-detection, family observation, or health care provider (HCP) recognition of symptoms, screening high-risk groups for MCI or dementia is suggested as a solution [ 13 ]. However, the Canadian Task Force on Preventive Health Care recommends against screening adults aged ≥65 years due to a lack of meaningful evidence from randomized controlled trials and the high false-positive rate [ 14 - 16 ]. The main objective of a screening test is to reduce morbidity or mortality in at-risk populations through early detection and intervention, with the anticipated benefits outweighing potential harms. Using brief screening tools in primary care might improve MCI case detection, allowing patients and families to address reversible causes, make lifestyle changes, and access disease-modifying treatments [ 17 ].

There is no agreement among experts as to which tests or groups of tests are most predictive of MCI [ 16 ], and the gold standard approach uses a combination of positive results from neuropsychological assessments, laboratory tests, and neuroimaging to infer a diagnosis [ 8 , 18 ]. The clinical heterogeneity of MCI complicates its diagnosis because it influences not only memory and thinking abilities but also mood, behavior, emotional regulation, and sensorimotor abilities, and patients may present with any combination of symptoms with varying rates of onset and decline [ 4 , 8 ]. For this reason, a collaborative approach between general practitioners and specialists (eg, geriatricians and neurologists) is often required to be confident in the diagnosis of MCI [ 8 , 19 , 20 ].

In Canada, diagnosis often begins with screening for cognitive impairment followed by referral for additional testing; this process takes, on average, 5 months [ 20 ]. The current usual practice screening tools for MCI are the Mini-Mental State Examination (MMSE) [ 21 , 22 ] and the Montreal Cognitive Assessment (MoCA) 8.1 [ 3 ]. Both are paper-and-pencil screens administered in 10 to 15 minutes, scored out of 30, and validated as MCI screening tools across diverse clinical samples [ 23 , 24 ]. Universally, the MMSE is most often used to screen for MCI [ 20 , 25 ] and consists of 20 items that measure orientation, immediate and delayed recall, attention and calculation, visual-spatial skills, verbal fluency, and writing. The MoCA 8.1 was developed to improve on the MMSE’s ability to detect early signs of MCI, placing greater emphasis on evaluating executive function as well as language, memory, visual-spatial skills, abstraction, attention, concentration, and orientation across 30 items [ 24 , 26 ]. Scores of <24 on the MMSE or ≤25 on the MoCA 8.1 signal probable MCI [ 21 , 27 ]. Lower cutoff scores for both screens have been recommended to address evidence that they lack specificity to detect mild and early cases of MCI [ 4 , 28 - 31 ]. The clinical efficacy of both screens for tracking change in cognition over time is limited as they are also subject to practice effects with repeated administration [ 32 ].

Novel screening tools, including the Quick Mild Cognitive Impairment (Qmci) screen, have been developed with the goal of improving the accuracy of detecting MCI [ 33 , 34 ]. The Qmci is a sensitive and specific tool that differentiates normal cognition from MCI and dementia and is more accurate at differentiating MCI from controls than either the MoCA 8.1 (Qmci area under the curve=0.97 vs MoCA 8.1 area under the curve=0.92) [ 25 , 35 ] or the Short MMSE [ 33 , 36 ]. It also demonstrates high test-retest reliability (intraclass correlation coefficient [ICC]=0.88) [ 37 ] and is clinically useful as a rapid screen for MCI as the Qmci mean is 4.5 (SD 1.3) minutes versus 9.5 (SD 2.8) minutes for the MoCA 8.1 [ 25 ].

The COVID-19 pandemic and the necessary shift to virtual health care accelerated the use of digital assessment tools, including MCI screening tools such as the electronic MoCA 8.1 [ 38 , 39 ], and the increased use and adoption of technology (eg, smartphones and tablets) by older adults suggests that a lack of proficiency with technology may not be a barrier to the use of such assessment tools [ 40 , 41 ]. BrainFx is a for-profit firm that creates proprietary software designed to assess cognition and changes in neurofunction that may be caused by neurodegenerative diseases (eg, MCI or dementia), stroke, concussions, or mental illness using ecologically relevant tasks (eg, prioritizing daily schedules and route finding on a map) [ 42 ]. Their assessments are administered via a tablet and stylus. The BrainFx 360 performance assessment (referred to hereafter as the 360) is a 90-minute digitally administered test that was designed to assess cognitive, physical, and psychosocial areas of neurofunction across 26 cognitive domains using 49 tasks that are timed and scored [ 42 ]. The BrainFx SCREEN (referred to hereafter as the SCREEN) is a short digital version of the 360 that includes 7 of the cognitive domains included in the 360, is estimated to take approximately 10 to 15 minutes to complete, and was designed to screen for early detection of cognitive impairment [ 43 , 44 ]. Upon completion of any BrainFx assessment, the results of the 360 or SCREEN are added to the BrainFx Living Brain Bank (LBB), which is an electronic database that stores all completed 360 and SCREEN assessments and is maintained by BrainFx. An electronic report is generated by BrainFx comparing an individual’s results to those of others collected and stored in the LBB. Normative data from the LBB are used to evaluate and compare an individual’s results.

The 360 has been used in clinical settings to assess neurofunction among youth [ 45 ] and anecdotally in other rehabilitation settings (T Milner, personal communication, May 2018). To date, research on the 360 indicates that it has been validated in healthy young adults (mean age 22.9, SD 2.4 years) and that the overall test-retest reliability of the tool is high (ICC=0.85) [ 42 ]. However, only 2 of the 7 tasks selected to be included in the SCREEN produced reliability coefficients of >0.70 (visual-spatial and problem-solving abilities) [ 42 ]. Jones et al [ 43 ] explored the acceptability and perceived usability of the SCREEN with a small sample (N=21) of Canadian Armed Forces veterans living with posttraumatic stress disorder. A structural equation model based on the Unified Theory of Acceptance and Use of Technology suggested that behavioral intent to use the SCREEN was predicted by facilitating conditions such as guidance during the test and appropriate resources to complete the test [ 43 ]. However, the validity, reliability, and sensitivity of the SCREEN for detecting cognitive impairment have not been tested.

McMurray et al [ 44 ] designed a protocol to assess the validity, reliability, and sensitivity of the SCREEN for detecting early signs of MCI in asymptomatic adults aged ≥55 years in a primary care setting (5 Family Health Teams [FHTs]). The protocol also used a series of semistructured interviews and surveys guided by the fit between individuals, task, technology, and environment framework [ 46 ], a health-specific model derived from the Task-Technology Fit model by Goodhue and Thompson [ 47 ], to explore the SCREEN’s acceptability and use by HCPs and patients in primary care settings (manuscript in preparation). This study is a psychometric evaluation of the SCREEN’s validity, reliability, and sensitivity for detecting MCI in asymptomatic adults aged ≥55 years in primary care settings.

Study Location, Design, and Data Collection

This was a concurrent, mixed methods, prospective study using a quasi-experimental design. Participants were recruited from 5 primary care FHTs (characterized by multidisciplinary practice and capitated funding) across southwestern Ontario, Canada. FHTs with a registered occupational therapist on staff were eligible to participate in the study, and participating FHTs received a nominal compensatory payment for the time the HCPs spent in training; collecting data for the study; administering the SCREEN, Qmci, and Geriatric Anxiety Scale–10 (GAS-10); and communicating with the research team. A multipronged recruitment approach was used [ 44 ]. A designated occupational therapist at each location was provided with training and equipment to recruit participants, administer the assessment tools, and submit collected data to the research team.

The research protocol describing the methods of both the quantitative and qualitative arms of the study is published elsewhere [ 44 ].

Ethical Considerations

This study was approved by the Wilfrid Laurier University Research Ethics Board (ORE 5820) and was reviewed and approved by each FHT. Participants (HCPs, patients, and administrative executives) read and signed an information and informed consent package in advance of taking part in the study. We complied with recommendations for obtaining informed consent and conducting qualitative interviews with persons with dementia when recruiting patients who may be affected by neurocognitive diseases [ 48 - 50 ]. In addition, at the end of each SCREEN assessment, patients were required to provide their consent (electronic signature) to contribute their anonymized scores to the database of SCREEN results maintained by BrainFx. Upon enrolling in the study, participants were assigned a unique identification number that was used in place of their name on all study documentation to anonymize the data and preserve their confidentiality. A master list matching participant names with their unique identification number was stored in a password-protected file by the administering HCP and principal investigator on the research team. The FHTs received a nominal compensatory payment to account for their HCPs’ time spent administering the SCREEN, collecting data for the study, and communicating with the research team. However, the individual HCPs who volunteered to participate and the patient participants were not financially compensated for taking part in the study.

Participants

Patients who were rostered with the FHT, were aged ≥55 years, and had no history of MCI or dementia diagnoses were eligible to participate; excluding those with existing diagnoses was intended to better capture the population at risk of early signs of cognitive impairment [ 51 , 52 ]. It was necessary for participants to be rostered with the FHTs so that the HCPs could access their electronic medical record to confirm eligibility and record the testing sessions and results and so that there was a responsible physician for referral if indicated. As the SCREEN is administered using a tablet, participants had to be able to read and think in English and discern color, have adequate hearing and vision to interact with the administering HCP, read 12-point font on the tablet, and have adequate hand and arm function to manipulate and hold the tablet. The exclusion criteria included colorblindness and any disability that might impair the individual's ability to hold and interact with the tablet. Prospective participants were also excluded based on a diagnosis of conditions that may result in MCI or dementia-like symptoms, including major depression that required hospitalization, psychiatric disorders (eg, schizophrenia and bipolar disorder), psychopathology, epilepsy, substance use disorders, or sleep apnea (without the use of a continuous positive airway pressure machine) [ 52 ]. Patients were required to complete a minimum of 2 screening sessions spaced 3 months apart to participate in the study and, depending on when they enrolled, could complete a maximum of 4 screening sessions over a year.

Data Collection Instruments

GAS-10 Instrument

A standardized protocol was used to collect demographic data, randomly administer the SCREEN and the Qmci (a validated screening tool for MCI), and administer the GAS-10 immediately before and after the completion of the first MCI screen at each visit [ 44 ]. This was to assess participants’ general anxiety as it related to screening for cognitive impairment at the time of the assessment, any change in subjective ratings after completion of the first MCI screen, and change in anxiety between appointments. The GAS-10 is a 10-item, self-report screen for anxiety in older adults [ 53 ] developed for rapid screening of anxiety in clinical settings (the GAS-10 is the short form of the full 30-item Geriatric Anxiety Scale [GAS]) [ 54 ]. While 3 subscales are identified, the GAS is reported to be a unidimensional scale that assesses general anxiety [ 55 , 56 ]. Validation of the GAS-10 suggests that it is optimal for assessing average to moderate levels of anxiety in older adults, with subscale scores that are highly and positively correlated with the GAS and high internal consistency [ 53 ]. Participants were asked to use a 4-point Likert scale (0= not at all , 1= sometimes , 2= most of the time , and 3= all of the time ) to rate how often they had experienced each symptom over the previous week, including on the day the test was administered [ 54 ]. The GAS-10 has a maximum score of 30, with higher scores indicating higher levels of anxiety [ 53 , 54 , 57 ].

HCPs completed the required training to become certified BrainFx SCREEN administrators before the start of the study. To this end, HCPs completed a web-based training program (developed and administered through the BrainFx website) that included 3 self-directed training modules. For the purpose of the study, they also participated in 1 half-day in-person training session conducted by a certified BrainFx administrator (T Milner, BrainFx chief executive officer) at one of the participating FHT locations. The SCREEN (version 0.5; beta) was administered on a tablet (ASUS ZenPad 10.1” IPS WXGA display, 1920 × 1200, powered by a quad-core 1.5 GHz, 64-bit MediaTek MTK 8163A processor with 2 GB RAM and 16-GB storage). The tablet came with a tablet stand for optional use and a dedicated stylus that is recommended for completion of a subset of questions. At the start of the study, HCPs were provided with identical tablets preloaded with the SCREEN software for use in the study. The 7 tasks on the SCREEN are summarized in Table 1 and were taken directly from the 360 based on a clustering and regression analysis of LBB records in 2016 (N=188) [ 58 ]. A detailed description of the study and SCREEN administration procedures was published by McMurray et al [ 44 ].

An activity score is generated for each of the 7 tasks on the SCREEN. It is computed based on a combination of the accuracy of the participant’s response and the processing speed (time in seconds) that it takes to complete the task. The relative contribution of accuracy and processing speed to the final activity score for each task is proprietary to BrainFx and unknown to the research team. The participant’s activity score is compared to the mean activity score for the same task at the time of testing in the LBB. The mean activity score from the LBB may be based on the global reference population (ie, all available SCREEN results in the LBB), or the administering HCP may select a specific reference population by filtering according to factors including but not limited to age, sex, or diagnosis. If the participant’s activity score is >1 SD below the LBB activity score mean for that task, it is labeled as an area of challenge . Each of the 7 tasks on the SCREEN are evaluated independently of each other, producing a report with 7 activity scores showing the participant’s score, the LBB mean score, and the SD. The report also provides an overall performance and processing speed score. The overall performance score is an average of all 7 activity scores; however, the way in which the overall processing speed score is generated remains proprietary to BrainFx and unknown to the research team. Both the overall performance and processing speed scores are similarly evaluated against the LBB and identified as an area of challenge using the criteria described previously. For the purpose of this study, participants’ mean activity scores on the SCREEN were compared to the results of people aged ≥55 years in the LBB.
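A minimal sketch of the “more than 1 SD below the reference mean” flagging rule described above follows. The task names are drawn from those mentioned in this article, the participant scores and reference statistics are invented, and the proprietary combination of accuracy and processing speed into an activity score is not modeled.

```python
# Sketch of the ">1 SD below the reference mean" rule used to flag areas of
# challenge. Reference means/SDs and the participant's scores are invented;
# the proprietary weighting of accuracy and processing speed is not reproduced.
import numpy as np

tasks = ["abstract reasoning", "constructive ability", "divided attention",
         "prioritizing", "problem solving", "route finding", "visual-spatial"]
participant = np.array([48.0, 61.0, 55.0, 40.0, 72.0, 66.0, 58.0])   # activity scores
ref_mean = np.array([60.0, 62.0, 58.0, 57.0, 70.0, 65.0, 63.0])       # reference means
ref_sd = np.array([8.0, 7.5, 9.0, 10.0, 6.0, 8.5, 7.0])               # reference SDs

area_of_challenge = participant < (ref_mean - ref_sd)   # more than 1 SD below the mean
for task, flag in zip(tasks, area_of_challenge):
    print(f"{task}: {'area of challenge' if flag else 'within expected range'}")

print("overall performance score:", participant.mean())  # average of the 7 scores
```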

The Qmci evaluated 6 cognitive domains: orientation (10 points), registration (5 points), clock drawing (15 points), delayed recall (20 points), verbal fluency (20 points), and logical memory (30 points) [ 59 ]. Administering HCPs scored the test manually, with each subtest's points contributing to the overall score out of 100 points, and the cutoff score to distinguish normal cognition from MCI was ≤67/100 [ 60 ]. Cutoffs to account for age and education have been validated and are recommended as the Qmci is sensitive to these factors [ 60 ]. A 2019 meta-analysis of the diagnostic accuracy of MCI screening tools reported that the sensitivity and specificity of the Qmci for distinguishing MCI from normal cognition are similar to those of usual standard-of-care tools (eg, the MoCA, Addenbrooke Cognitive Examination–Revised, Consortium to Establish a Registry for Alzheimer's Disease battery total score, and Sunderland Clock Drawing Test) [ 61 ]. The Qmci has also been translated into >15 different languages and has undergone psychometric evaluation across a subset of these languages. While not as broadly adopted as the MoCA 8.1 in Canada, its psychometric properties, administration time, and availability for use suggested that the Qmci was an optimal assessment tool for MCI screening in FHT settings during the study.

Psychometric Evaluation

To date, the only published psychometric evaluation of any BrainFx tool is by Searles et al [ 42 ] in Athletic Training & Sports Health Care ; it assessed the test-retest reliability of the 360 in 15 healthy adults between the ages of 20 and 25 years. This study evaluated the psychometric properties of the SCREEN and included a statistical analysis of the tool’s internal consistency, construct validity, test-retest reliability, and sensitivity and specificity. McMurray et al [ 44 ] provide a detailed description of the data collection procedures for administration of the SCREEN and Qmci completed by participants at each visit.

Validity Testing

Face validity was outside the scope of this study but was implied, and assumptions are reported in the Results section. Construct validity, whether the 7 activities that make up the SCREEN were representative of MCI, was assessed through comparison with a substantive body of literature in the domain and through principal component analysis using varimax rotation. Criterion validity measures how closely the SCREEN results corresponded to the results of the Qmci (used here as an “imperfect gold standard” for identifying MCI in older adults) [ 62 ]. A BrainFx representative hypothesized that the ecological validity of the SCREEN questions (ie, using tasks that reflect real-world activities to detect early signs of cognitive impairment) [ 63 ] makes it a more sensitive tool than other screens (T Milner, personal communication, May 2018) and allows HCPs to equate activity scores on the SCREEN with real-world functional abilities. Criterion validity was explored first using cross-tabulations to calculate the sensitivity and specificity of the SCREEN compared to those of the Qmci. Conventional screens such as the Qmci are scored by taking the sum of correct responses on the screen and a cutoff score derived from normative data to distinguish normal cognition from MCI. The SCREEN used a different method of scoring whereby each of the 7 tasks was scored and evaluated independently of each other and there were no recommended guidelines for distinguishing normal cognition from MCI based on the aggregate areas of challenge identified by the SCREEN. Therefore, to compare the sensitivity and specificity of the SCREEN against those of the Qmci, the results of both screens were coded into a binary format as 1=healthy and 2=unhealthy, where healthy denoted no areas of challenge identified through the SCREEN and a Qmci score of ≥67. Conversely, unhealthy denoted one or more areas of challenge identified through the SCREEN and a Qmci score of <67.
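The sketch below illustrates the cross-tabulation step: SCREEN and Qmci results coded as described (1=healthy, 2=unhealthy), with the Qmci treated as the imperfect gold standard and the “healthy” class counted as positive, following the paper's framing. The example data are invented and do not reproduce the study's counts.

```python
# Cross-tabulation of binary-coded SCREEN vs Qmci results (1=healthy, 2=unhealthy),
# treating the Qmci as the imperfect gold standard. Data here are illustrative.
import numpy as np

qmci = np.array([1, 1, 2, 1, 2, 1, 1, 2, 1, 1])    # imperfect gold standard
screen = np.array([1, 2, 2, 1, 1, 1, 1, 2, 2, 1])  # SCREEN classification

# In this framing, "healthy" (=1) is the positive class.
tp = np.sum((screen == 1) & (qmci == 1))   # both call the participant healthy
tn = np.sum((screen == 2) & (qmci == 2))   # both call the participant unhealthy
fn = np.sum((screen == 2) & (qmci == 1))
fp = np.sum((screen == 1) & (qmci == 2))

sensitivity = tp / (tp + fn)               # healthy correctly identified
specificity = tn / (tn + fp)               # unhealthy correctly identified
print(f"sensitivity={sensitivity:.2%}, specificity={specificity:.2%}")
```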

Criterion validity was further explored using discrepant analysis via a resolver test [ 44 ]. Following the administration of the SCREEN and Qmci, screen results were evaluated by the administering HCP. HCPs were instructed to refer the participant for follow-up with their primary care physician if the Qmci result was <67 regardless of whether any areas of challenge were identified on the SCREEN. However, HCPs could use their clinical judgment to refer a participant for physician follow-up based on the results of the SCREEN or the Qmci, and all the referral decisions were charted on the participant’s electronic medical record following each visit and screening. In discrepant analysis, the results of the imperfect gold standard [ 64 ], as was the role of the Qmci in this study, were compared with the SCREEN results. A resolver test (classified as whether the HCP referred the patient to a physician for follow-up based on their performance on the SCREEN and the Qmci) was used on discordant results [ 64 , 65 ] to determine sensitivity and specificity. To this end, a new variable, Referral to a Physician for Cognitive Impairment , was coded as the true status (1=no referral; 2=referral was made) and compared to the Qmci as the imperfect gold standard (1=healthy; 2=unhealthy).

Reliability Testing

The reliability of a screening instrument is its ability to consistently measure an attribute and how well its component measures fit together conceptually. Internal consistency identifies whether the items in a multi-item scale are measuring the same underlying construct; the internal consistency of the SCREEN was assessed using the Cronbach α. Test-retest reliability refers to the ability of a measurement instrument to reproduce results over ≥2 occasions (assuming the underlying conditions have not changed) and was assessed using paired t tests (2-tailed), ICC, and the κ coefficient. In this study, participants completed both the SCREEN and the Qmci in the same sitting in a random sequence on at least 2 different occasions spaced 3 months apart (administration procedures are described elsewhere) [ 44 ]. In some instances, the screens were administered to the same participant on 4 separate occasions spaced 3 months apart each, and this provided up to 3 separate opportunities to conduct test-retest reliability analyses and investigate the effects of repeated practice. There are no clear guidelines on the optimal time between tests [ 66 , 67 ]; however, Streiner and Kottner [ 68 ] and Streiner [ 69 ] recommend longer periods between tests (eg, at least 10-14 days) to avoid recall bias, and greater practice effects have been experienced with shorter test-retest intervals [ 32 ].
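A short sketch of these test-retest analyses, assuming the pingouin, SciPy, and scikit-learn packages are available, is shown below; the repeated scores and the healthy/unhealthy coding rule are synthetic placeholders.

```python
# Sketch of the test-retest analyses described above: paired t test and ICC on
# repeated activity scores, and Cohen's kappa on repeated healthy/unhealthy
# classifications. All data are synthetic placeholders.
import numpy as np
import pandas as pd
import pingouin as pg
from scipy.stats import ttest_rel
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(2)
visit1 = rng.normal(60, 10, size=40)                  # visit-1 overall activity scores
visit2 = visit1 + rng.normal(2, 8, size=40)           # visit-2 scores ~3 months later

print(ttest_rel(visit1, visit2))                      # paired t test

long = pd.DataFrame({
    "subject": np.repeat(np.arange(40), 2),
    "visit": np.tile(["t1", "t2"], 40),
    "score": np.column_stack([visit1, visit2]).ravel(),
})
icc = pg.intraclass_corr(data=long, targets="subject", raters="visit", ratings="score")
print(icc[["Type", "ICC"]])                           # ICC estimates by model type

label1 = (visit1 < 55).astype(int)                    # toy healthy/unhealthy coding
label2 = (visit2 < 55).astype(int)
print("kappa:", cohen_kappa_score(label1, label2))
```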

Analysis of the quantitative data was completed using Stata (version 17.0; StataCorp). Assumptions of normality were not violated, so parametric tests were used. Collected data were reported using frequencies and percentages and compared using the chi-square or Fisher exact test as necessary. Continuous data were analyzed for central tendency and variability; categorical data were presented as proportions. Normality was tested using the Shapiro-Wilk test, and nonparametric data were tested using the Mann-Whitney U test. A P value of .05 was considered statistically significant, with 95% CIs provided where appropriate. We powered the exploratory analysis to validate the SCREEN using an estimated effect size of 12% (Canadian prevalence rates of MCI were not available [ 1 ]) and determined that the study required at least 162 participants. For test-retest reliability, using 90% power and a 5% type-I error rate, a minimum of 67 test results was required.

The time taken for participants to complete the SCREEN was recorded by the HCPs at the time of testing; there were 6 missing HCP records of time to complete the SCREEN. For these 6 cases of missing data, we imputed the mean time to complete the SCREEN by all participants who were tested by that HCP and used this to populate the missing cells [ 70 ]. There were 3 cases of missing data related to the SCREEN reports. More specifically, the SCREEN report generated by BrainFx did not include 1 or 2 data points each for the route finding, divided attention, and prioritizing tasks. The clinical notes provided by the HCP at the time of SCREEN administration did not indicate that the participant had not completed those questions, and it was not possible to determine the root cause of the missing data in report generation according to BrainFx (M Milner, personal communication, July 7, 2020). For continuous variables in analyses such as exploratory factor analysis, Cronbach α, and t test, missing values were imputed using the mean. However, for the coded healthy and unhealthy categorical variables, values were not imputed.

Data collection began in January 2019 and was to conclude on May 31, 2020. However, the emergence of the global COVID-19 pandemic resulted in the FHTs and Wilfrid Laurier University prohibiting all in-person research starting on March 16, 2020.

Participant Demographics

A total of 154 participants were recruited for the study, and 20 (13%) withdrew following their first visit to the FHT. The data of 65% (13/20) of the participants who withdrew were included in the final analysis, and the data of the remaining 35% (7/20) were removed, either due to their explicit request (3/7, 43%) or because technical issues at the time of testing rendered their data unusable (4/7, 57%). These technical issues were related to software issues (eg, any instance in which the patient or HCP interacted with the SCREEN software and followed the instructions provided, the software did not work as expected [ie, objects did not move where they were dragged or tapping on objects failed to highlight the object], and the question could not be completed). After attrition, a total of 147 individuals aged ≥55 years with no previous diagnosis of MCI or dementia participated in the study ( Table 2 ). Of the 147 participants, 71 (48.3%) took part in only 1 round of screening on visit 1 (due to COVID-19 restrictions imposed on in-person research that prevented a second visit). The remaining 51.7% (76/147) of the participants took part in ≥2 rounds of screening across multiple visits (76/147, 51.7% participated in 2 rounds; 22/147, 15% participated in 3 rounds; and 13/147, 8.8% participated in 4 rounds of screening).

The sample population was 65.3% (96/147) female (mean 70.2, SD 7.9 years) and 34.7% (51/147) male (mean 72.5, SD 8.1 years), with age ranging from 55 to 88 years; 65.3% (96/147) achieved the equivalent of or higher than a college diploma or certificate ( Table 2 ); and 32.7% (48/147) self-reported living with one or more chronic medical conditions ( Table 3 ). At the time of screening, 73.5% (108/147) of participants were also taking medications with side effects that may include impairments to memory and thinking abilities [ 71 - 75 ]; therefore, medication use was accounted for in a subset of the analyses. Finally, 84.4% (124/147) of participants self-reported regularly using technology (eg, smartphone, laptop, or tablet) with high proficiency. A random sequence generator was used to determine the order for administering the MCI screens; the SCREEN was administered first 51.9% (134/258) of the time.

Construct Validity

Construct validity was assessed through a review of relevant peer-reviewed literature that compared constructs included in the SCREEN with those identified in the literature as 2 of the most sensitive tools for MCI screening: the MoCA 8.1 [ 76 ] and the Qmci [ 25 ]. Memory, language, and verbal skills are assessed in the MoCA and Qmci but are absent from the SCREEN. Tests of verbal fluency and logical memory have been shown to be particularly sensitive to early cognitive changes [ 77 , 78 ] but are similarly absent from the SCREEN.

Exploratory factor analysis was performed to examine the SCREEN's ability to reliably measure risk of MCI. The Kaiser-Meyer-Olkin measure yielded a value of 0.79, exceeding the commonly accepted threshold of 0.70, indicating that the sample was adequate for factor analysis. The Bartlett test of sphericity returned a chi-square value of χ2(21)=167.1 (P<.001), confirming the presence of correlations among variables suitable for factor analysis. A principal component analysis revealed 2 components with eigenvalues of >1, cumulatively accounting for 52.12% of the variance, with the first factor alone explaining 37.8%. After the varimax rotation, the 2 factors exhibited distinct patterns of loadings, with visual-spatial ability loading predominantly on the second factor. The SCREEN tasks, except for visual-spatial ability, loaded substantially on the factors (>0.5), suggesting that the SCREEN possesses good convergent validity for assessing the risk of MCI.
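The sketch below reproduces the general workflow (KMO, Bartlett's test, then a 2-factor principal-component extraction with varimax rotation), assuming the factor_analyzer package is available; the 7 task scores are synthetic stand-ins and will not reproduce the reported loadings.

```python
# Sketch of the factor-analysis workflow: KMO, Bartlett's test, then a 2-factor
# principal-component extraction with varimax rotation. Scores are synthetic
# stand-ins for the 7 SCREEN task scores.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer, calculate_kmo, calculate_bartlett_sphericity

rng = np.random.default_rng(3)
base = rng.normal(size=(147, 1))                       # shared latent signal
tasks = pd.DataFrame(
    base + rng.normal(scale=0.8, size=(147, 7)),       # correlated toy task scores
    columns=[f"task_{i}" for i in range(1, 8)],
)

_, kmo_model = calculate_kmo(tasks)
chi2, p = calculate_bartlett_sphericity(tasks)
print(f"KMO={kmo_model:.2f}, Bartlett chi2={chi2:.1f}, p={p:.3g}")

fa = FactorAnalyzer(n_factors=2, rotation="varimax", method="principal")
fa.fit(tasks)
print(pd.DataFrame(fa.loadings_, index=tasks.columns, columns=["F1", "F2"]))
print("cumulative variance explained:", fa.get_factor_variance()[2])
```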

Criterion Validity

The coding of SCREEN scores into a binary healthy and unhealthy outcome standardized the dependent variable to allow for criterion testing. Criterion validity was assessed using cross-tabulations and the analysis of confusion matrices, providing insight into the sensitivity and specificity of the SCREEN when compared to the Qmci. Of the 144 cases considered, 20 (13.9%) were true negatives, and 74 (51.4%) were true positives. The SCREEN's sensitivity, which reflects its capacity to accurately identify healthy individuals (true positives), was 63.25% (74 correct identifications/117 actual positives). The specificity of the test, indicating its ability to accurately identify unhealthy individuals (true negatives), was 74.07% (20 correct identifications/27 actual negatives). Sensitivity and specificity were then derived using the discrepant analysis and resolver test described previously (whether the HCP referred the participant to a physician following the screens). The results were identical: the estimated sensitivity of the SCREEN was 63.3% (74/117), and the estimated specificity was 74% (20/27).

Internal Reliability

A Cronbach α of at least 0.70 is considered acceptable, and at least 0.90 is required for clinical instruments [ 79 ]. The estimate of internal consistency for the SCREEN (N=147) was Cronbach α=0.63.
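For reference, the sketch below computes Cronbach's α directly from its standard formula on a synthetic participants-by-items score matrix; the data are placeholders.

```python
# Cronbach's alpha for a set of item scores, computed from the standard formula
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
# `items` is a synthetic (participants x items) score matrix.
import numpy as np

rng = np.random.default_rng(4)
items = rng.normal(size=(147, 7)) + rng.normal(size=(147, 1))  # 7 correlated items

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)
total_var = items.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```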

Test-Retest Reliability

Test-retest reliability analyses were conducted using ICC for the SCREEN activity scores and the κ coefficient for the healthy and unhealthy classifications. Guidelines for interpretation of the ICC suggest that anything <0.5 indicates poor reliability and anything between 0.5 and 0.75 suggests moderate reliability [ 80 ]; the ICC for the SCREEN activity scores was 0.54. With respect to the κ coefficient, a κ value of <0.2 is considered to have no level of agreement, a κ value of 0.21 to 0.39 is considered minimal, a κ value of 0.4 to 0.59 is considered weak agreement, and anything >0.8 suggests strong to almost perfect agreement [ 81 ]. The κ coefficient for healthy and unhealthy classifications was 0.15.

Analysis of the Factors Impacting Healthy and Unhealthy Results

The Spearman rank correlation was used to assess the relationships between participants' overall activity score on the SCREEN and their total time to complete the SCREEN; age, sex, and self-reported level of education; technology use; medication use; amount of sleep; and level of anxiety (as measured using the GAS-10) at the time of SCREEN administration. Lower overall activity scores were moderately correlated with being older (rs(142)=–0.57; P<.001) and increased total time to complete the SCREEN (rs(142)=0.49; P<.001). There was also a moderate inverse relationship between overall activity score and total time to complete the SCREEN (rs(142)=–0.67; P<.001), whereby better performance was associated with quicker task completion. There were weak positive associations between overall activity score and increased technology use (rs(142)=0.34; P<.001) and higher level of education (rs(142)=0.21; P=.01).
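A minimal sketch of one such Spearman correlation, using synthetic age and activity-score data, is shown below.

```python
# Spearman rank correlation between overall activity score and a covariate
# (here, age), as in the analysis above; values are synthetic placeholders.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
age = rng.uniform(55, 88, size=144)
activity_score = 100 - 0.6 * age + rng.normal(scale=8, size=144)  # older -> lower score

rho, p = spearmanr(activity_score, age)
print(f"r_s = {rho:.2f}, p = {p:.3g}")
```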

A logistic regression model was used to predict the SCREEN result using data from 144 observations. The model’s predictors explain approximately 21.33% of the variance in the outcome variable. The likelihood ratio test indicates that the model provides a significantly better fit to the data than a model without predictors ( P <.001).

The SCREEN outcome variable ( healthy vs unhealthy ) was associated with the predictor variables sex and total time to complete the SCREEN. More specifically, female participants were more likely to obtain healthy SCREEN outcomes ( P =.007; 95% CI 0.32-2.05). For all participants, the longer it took to complete the SCREEN, the less likely they were to achieve a healthy SCREEN outcome ( P =.002; 95% CI –0.33 to –0.07). Age ( P =.25; 95% CI –0.09 to 0.02), medication use ( P =.96; 95% CI –0.9 to 0.94), technology use ( P =.44; 95% CI –0.28 to 0.65), level of education ( P =.14; 95% CI –0.09 to 0.64), level of anxiety ( P =.26; 95% CI –1.13 to 0.3), and hours of sleep ( P =.08; 95% CI –0.06 to 0.93) were not significant.
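A sketch of a comparable logistic regression, assuming the statsmodels package and using synthetic sex and completion-time predictors, is shown below; the coefficients and fit statistics will not match the reported model.

```python
# Logistic regression predicting a binary SCREEN outcome (0=unhealthy, 1=healthy)
# from sex and total completion time, mirroring the model above with synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 144
df = pd.DataFrame({
    "female": rng.integers(0, 2, size=n),
    "minutes": rng.normal(20, 5, size=n),             # time to complete the SCREEN
})
logit_p = 1 / (1 + np.exp(-(0.8 * df["female"] - 0.2 * (df["minutes"] - 20))))
df["healthy"] = rng.binomial(1, logit_p)              # simulated binary outcome

X = sm.add_constant(df[["female", "minutes"]])
result = sm.Logit(df["healthy"], X).fit(disp=False)
print(result.summary2())
print("pseudo R-squared:", result.prsquared)          # analogous to variance explained
print("LR test p-value:", result.llr_pvalue)
```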

Impact of Practice Effects

The SCREEN was administered approximately 3 months apart, and separate paired-sample t tests were performed to compare SCREEN outcomes between visits 1 and 2 (76/147, 51.7%; Table 4 ), visits 2 and 3 (22/147, 15%), and visits 3 and 4 (13/147, 8.8%). The declining number of visits was partially attributable to the early shutdown of data collection due to the COVID-19 pandemic, and therefore, comparisons between visits 2 and 3 or visits 3 and 4 were not reported. Compared to participants' SCREEN performance on visit 1, their overall mean activity score and overall processing time improved on their second administration of the SCREEN (score: t(75)=–2.86 and P=.005; processing time: t(75)=–2.98 and P=.004). Even though the 7 task-specific activity scores on the SCREEN also increased between visits 1 and 2, these improvements were not significant, indicating that the difference in overall activity scores was cumulative and not attributable to a specific task ( Table 4 ).

Principal Findings

Our study aimed to evaluate the effectiveness and reliability of the BrainFx SCREEN in detecting MCI in primary care settings. The research took place during the COVID-19 pandemic, which influenced the study’s execution and timeline. Despite these challenges, the findings offer valuable insights into cognitive impairment screening.

Brief MCI screening tools help time-strapped primary care physicians determine whether referral for a definitive battery of more time-consuming and expensive tests is warranted. These tools must optimize and balance the need for time efficiency while also being psychometrically valid and easily administered [ 82 ]. The importance of brevity is determined by a number of factors, including the clinical setting. Screens that can be completed in approximately ≤5 minutes [ 13 ] are recommended for faster-paced clinical settings (eg, emergency rooms and preoperative screens), whereas those that can be completed in 5 to 10 minutes or less are better suited to primary care settings [ 82 - 84 ]. Identifying affordable, psychometrically tested screening tests for MCI that integrate into clinical workflows and are easy to consistently administer and complete may help with the following:

  • Initiating appropriate diagnostic tests for signs and symptoms at an earlier stage
  • Normalizing and destigmatizing cognitive testing for older adults
  • Expediting referrals
  • Allowing for timely access to programs and services that can support aging in place or delay institutionalization
  • Reducing risk
  • Improving the psychosocial well-being of patients and their care partners by increasing access to information and resources that aid with future planning and decision-making [ 85 , 86 ]

Various cognitive tests are commonly used for detecting MCI. These include the Addenbrooke Cognitive Examination–Revised, Consortium to Establish a Registry for Alzheimer's Disease, Sunderland Clock Drawing Test, Informant Questionnaire on Cognitive Decline in the Elderly, Memory Alternation Test, MMSE, MoCA 8.1, and Qmci [ 61 , 87 ]. The Addenbrooke Cognitive Examination–Revised, Consortium to Establish a Registry for Alzheimer's Disease, MoCA 8.1, Qmci, and Memory Alternation Test are reported to have similar diagnostic accuracy [ 61 , 88 ]. The HCPs participating in this study reported using the MoCA 8.1 as their primary screening tool for MCI along with other assessments such as the MMSE and Trail Making Test parts A and B.

Recent research highlights the growing use of digital tools [ 51 , 89 , 90 ], mobile technology [ 91 , 92 ], virtual reality [ 93 , 94 ], and artificial intelligence [ 95 ] to improve early identification of MCI. Demeyere et al [ 51 ] developed the tablet-based, 10-item Oxford Cognitive Screen–Plus to detect slight changes in cognitive impairment across 5 domains of cognition (memory, attention, number, praxis, and language), which has been validated among neurologically healthy older adults. Statsenko et al [ 96 ] have explored improvement of the predictive capabilities of tests using artificial intelligence. Similarly, there is an emerging focus on the use of machine learning techniques to detect dementia leveraging routinely collected clinical data [ 97 , 98 ]. This progression signifies a shift toward more technologically advanced, efficient, and potentially more accurate diagnostic approaches in the detection of MCI.

Whatever the modality, screening tools should be quick to administer, demonstrate consistent results over time and between different evaluators, cover all major cognitive areas, and be straightforward to both administer and interpret [ 99 ]. However, highly sensitive tests such as those suggested for screening carry a significant risk of false-positive diagnoses [ 15 ]. Given the high potential for harm of false positives, it is important to validate the psychometric properties of screening tests across different populations and understand how factors such as age and education can influence the results [ 99 ].

Our study did not assess the face validity of the SCREEN, but participating occupational therapists were comfortable with the test regimen. Nonetheless, the research team noted the absence of verbal fluency and memory tests in the SCREEN, both of which McDonnell et al [ 100 ] identified as being more sensitive to the more commonly seen amnestic MCI. Two of the most sensitive tools for MCI screening, the MoCA 8.1 [ 76 ] and Qmci [ 25 ], assess memory, language, and verbal skills, and tests of verbal fluency and logical memory have been shown to be particularly sensitive to early cognitive changes [ 77 , 78 ].

The constructs included in the SCREEN ( Table 1 ) were selected based on a single non–peer-reviewed study [ 58 ] using BrainFx 360 and traumatic brain injury data (N=188) that identified the constructs as predictive of brain injury. The absence of tasks that measure verbal fluency or logical memory in the SCREEN appears to weaken claims of construct validity. The principal component analysis of the SCREEN assessment identified 2 components accounting for 52.12% of the total variance. The first component was strongly associated with abstract reasoning, constructive ability, and divided attention, whereas the second was primarily influenced by visual-spatial abilities. This indicates that constructs related to perception, attention, and memory are central to the SCREEN scores.

The SCREEN’s binary outcome (healthy or unhealthy) created by the research team was based on comparisons with the Qmci. However, the method of identifying areas of challenge in the SCREEN by comparing the individual’s mean score on each of the 7 tasks with the mean scores of a global or filtered cohort in the LBB introduces potential biases or errors. These could arise from a surge in additions to the LBB from patients with specific characteristics, self-selection of participants, poorly trained SCREEN administrators, inclusion of nonstandard test results, underuse of appropriate filters, and underreporting of clinical conditions or factors such as socioeconomic status that impact performance in standardized cognitive tests.

The proprietary method of analyzing and reporting SCREEN results complicates traditional sensitivity and specificity measurement. Our testing indicated a sensitivity of 63.25% and a specificity of 74.07% for identifying healthy (those without MCI) and unhealthy (those with MCI) individuals. The SCREEN’s Cronbach α of .63, slightly below the threshold recommended for clinical instruments, and reliability scores that fell short of ideal standards suggest a higher-than-acceptable level of random measurement error in its constructs. The lower reliability may also stem from an inadequate sample size or a limited number of scale items.
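As a reminder of how these two metrics are derived, sensitivity and specificity come from a 2×2 comparison of the screening outcome against a reference standard. The sketch below uses arbitrary illustrative counts, not the study's data, and the function name is our own.

```python
# Sensitivity and specificity from a 2x2 confusion matrix.
# The counts below are arbitrary examples, not the study's data.
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    sensitivity = tp / (tp + fn)  # impaired individuals correctly flagged as unhealthy
    specificity = tn / (tn + fp)  # unimpaired individuals correctly identified as healthy
    return sensitivity, specificity

sens, spec = sensitivity_specificity(tp=60, fn=40, tn=75, fp=25)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```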

The SCREEN’s results are less favorable compared to those of other digital MCI screening tools that similarly enable evaluation of specific cognitive domains but also provide validated, norm-referenced cutoff scores and methods for cumulative scoring in clinical settings (Oxford Cognitive Screen–Plus) [ 51 ] or of validated MCI screening tools used in primary care (eg, MoCA 8.1, Qmci, and MMSE) [ 51 , 87 ]. The SCREEN’s unique scoring algorithm and the dynamic denominator in data analysis necessitate caution in comparing these results to those of other tools with fixed scoring algorithms and known sensitivities [ 101 , 102 ]. We found the SCREEN to have lower-than-expected internal reliability, suggesting significant random measurement error. Test-retest reliability was weak for the healthy or unhealthy outcome but stronger for overall activity scores between tests. The variability in identifying areas of challenge could relate to technological difficulties or variability from comparisons with a growing database of test results.

Potential reasons for older adults’ poorer scores on timed tests include the impact of sensorimotor decline on touch screen sensation and reaction time [ 38 , 103 ], anxiety related to taking a computer-enabled test [ 104 - 106 ], or the anticipated consequences of a negative outcome [ 107 ]. However, these effects were unlikely to have influenced the results of this study. Practice effects were observed [ 29 , 108 ], but the SCREEN’s novelty suggests that familiarity was not gained through prior preparation or word of mouth, as this sample was self-selected and not randomized. Future research might also explore the impact of digital literacy and cultural differences in the interpretation of software constructs or icons on MCI screening in a randomized, older adult sample.

Limitations

This study had methodological limitations that warrant attention. The small sample size and the demographic distribution of the 147 participants aged ≥55 years, most of whom (96/147, 65.3%) were female and well educated, limit the generalizability of the findings to different populations. The study’s design, aiming to explore the sensitivity of the SCREEN for early detection of MCI, necessitated the exclusion of individuals with a previous diagnosis of MCI or dementia. This exclusion criterion might have impacted the study’s ability to thoroughly assess the SCREEN’s effectiveness in a more varied clinical context. The requirement for participants to read and comprehend English introduced another limitation. This criterion potentially limited the SCREEN tool’s applicability across diverse linguistic backgrounds as individuals with language-based impairments or those not proficient in English may face challenges in completing the assessment [ 51 ]. Such limitations could impact the generalizability of our findings to non–English-speaking populations or to those with language impairments, underscoring the need for further research to evaluate the SCREEN tool’s effectiveness in broader clinical and linguistic contexts.

Financial constraints played a role in limiting the study’s scope. Due to funding limitations, it was not possible to include specialist assessments and a battery of neuropsychiatric tests generally considered the gold standard to confirm or rule out an MCI diagnosis. Therefore, the study relied on differential verification through 2 imperfect reference standards: a comparison with the Qmci (the tool with the highest published sensitivity to MCI in 2019, when the study was designed) and the clinical judgment of the administering HCP, particularly in decisions regarding referrals for further clinical assessment. Furthermore, while an economic feasibility assessment was considered, the research team determined that it should follow, not precede, an evaluation of the SCREEN’s validity and reliability.

The proprietary nature of the algorithm used for scoring the SCREEN posed another challenge. Without access to this algorithm, the research team had to use a novel comparative statistical approach, coding patient results into a binary variable: healthy (SCREEN=no areas of challenge OR Qmci≥67 out of 100) or unhealthy (SCREEN=one or more areas of challenge OR Qmci<67 out of 100). This may have introduced a higher level of error into our statistical analysis. Furthermore, the process for determining areas of challenge on the SCREEN involves comparing a participant’s result to the existing SCREEN results in the LBB at the time of testing. By the end of this study, the LBB contained 632 SCREEN results for adults aged ≥55 years, with this study contributing 258 of those results. The remaining 366 original SCREEN results, 64% of which were completed by individuals who self-identified as having a preexisting diagnosis or conditions associated with cognitive impairment (eg, traumatic brain injury, concussion, or stroke), could have led to an overestimation of the means and SDs of the study participants’ results at the outset of the study.
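The binary coding described above can be expressed as a simple rule. The sketch below follows one reading of that rule, giving precedence to the "healthy" criteria; the function and field names are hypothetical, not part of the study's codebase.

```python
# One reading of the healthy/unhealthy coding rule described above.
# Names are hypothetical; a participant is coded healthy if the SCREEN reports
# no areas of challenge or the Qmci score is at least 67/100, otherwise unhealthy.
def code_outcome(screen_areas_of_challenge: int, qmci_score: float) -> str:
    if screen_areas_of_challenge == 0 or qmci_score >= 67:
        return "healthy"
    return "unhealthy"

print(code_outcome(screen_areas_of_challenge=2, qmci_score=58))  # -> "unhealthy"
```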

Unlike other cognitive screening tools, the SCREEN allows for filtering of results to compare different patient cohorts in the LBB using criteria such as age and education. However, at this stage of the LBB’s development, using such filters can significantly reduce the reliability of the results due to a smaller comparator population (ie, the denominator used to calculate the mean and SD). This, in turn, affects the significance of the results. Moreover, the constantly changing LBB data set makes it challenging to meaningfully compare an individual’s results over time as the evolving denominator affects the accuracy and relevance of these comparisons. Finally, the significant improvement in SCREEN scores between the first and second visits suggests the presence of practice effects, which could have influenced the reliability and validity of the findings.

Conclusions

In a primary care setting, where MCI screening tools are essential and recommended for those with concerns [ 85 ], certain criteria are paramount: time efficiency, ease of administration, and robust psychometric properties [ 82 ]. Our analysis of the BrainFx SCREEN suggests that, despite its innovative approach and digital delivery, it currently falls short in meeting these criteria. The SCREEN’s comparatively longer administration time and lower-than-expected reliability scores suggest that it may not be the most effective tool for MCI screening of older adults in a primary care setting at this time.

It is important to note that, in the wake of the COVID-19 pandemic, and with an aging population living and aging by design or necessity in a community setting, there is growing interest in digital solutions, including web-based applications and platforms to both collect digital biomarkers and deliver cognitive training and other interventions [ 109 , 110 ]. However, new normative standards are required when adapting cognitive tests to digital formats [ 92 ] as the change in medium can significantly impact test performance and results interpretation. Therefore, we recommend caution when interpreting our study results and encourage continued research and refinement of tools such as the SCREEN. This ongoing process will ensure that current and future MCI screening tools are effective, reliable, and relevant in meeting the needs of our aging population, particularly in primary care settings where early detection and intervention are key.

Acknowledgments

The researchers gratefully acknowledge the Ontario Centres of Excellence Health Technologies Fund for their financial support of this study; the executive directors and clinical leads in each of the Family Health Team study locations; the participants and their friends and families who took part in the study; and research assistants Sharmin Sharker, Kelly Zhu, and Muhammad Umair for their contributions to data management and statistical analysis.

Data Availability

The data sets generated during and analyzed during this study are available from the corresponding author on reasonable request.

Authors' Contributions

JM contributed to the conceptualization, methodology, validation, formal analysis, data curation, writing—original draft, writing—review and editing, visualization, supervision, and funding acquisition. AML contributed to the conceptualization, methodology, validation, investigation, formal analysis, data curation, writing—original draft, writing—review and editing, visualization, and project administration. WP contributed to the validation, formal analysis, data curation, writing—original draft, writing—review and editing, and visualization. Finally, PH contributed to conceptualization, methodology, writing—review and editing, supervision, and funding acquisition.

Conflicts of Interest

None declared.

  • Casagrande M, Marselli G, Agostini F, Forte G, Favieri F, Guarino A. The complex burden of determining prevalence rates of mild cognitive impairment: a systematic review. Front Psychiatry. 2022;13:960648. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Petersen RC, Caracciolo B, Brayne C, Gauthier S, Jelic V, Fratiglioni L. Mild cognitive impairment: a concept in evolution. J Intern Med. Mar 2014;275(3):214-228. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Knopman DS, Petersen RC. Mild cognitive impairment and mild dementia: a clinical perspective. Mayo Clin Proc. Oct 2014;89(10):1452-1459. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Anderson ND. State of the science on mild cognitive impairment (MCI). CNS Spectr. Feb 2019;24(1):78-87. [ CrossRef ] [ Medline ]
  • Tangalos EG, Petersen RC. Mild cognitive impairment in geriatrics. Clin Geriatr Med. Nov 2018;34(4):563-589. [ CrossRef ] [ Medline ]
  • Ng R, Maxwell C, Yates E, Nylen K, Antflick J, Jette N, et al. Brain disorders in Ontario: prevalence, incidence and costs from health administrative data. Institute for Clinical Evaluative Sciences. 2015. URL: https:/​/www.​ices.on.ca/​publications/​research-reports/​brain-disorders-in-ontario-prevalence-incidence-and-costs-from-health-administrative-data/​ [accessed 2024-04-01]
  • Centers for Disease Control and Prevention (CDC). Self-reported increased confusion or memory loss and associated functional difficulties among adults aged ≥ 60 years - 21 states, 2011. MMWR Morb Mortal Wkly Rep. May 10, 2013;62(18):347-350. [ FREE Full text ] [ Medline ]
  • Petersen RC, Lopez O, Armstrong MJ, Getchius TS, Ganguli M, Gloss D, et al. Practice guideline update summary: mild cognitive impairment: report of the guideline development, dissemination, and implementation subcommittee of the American Academy of Neurology. Neurology. Jan 16, 2018;90(3):126-135. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Song WX, Wu WW, Zhao YY, Xu HL, Chen GC, Jin SY, et al. Evidence from a meta-analysis and systematic review reveals the global prevalence of mild cognitive impairment. Front Aging Neurosci. 2023;15:1227112. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Chen Y, Denny KG, Harvey D, Farias ST, Mungas D, DeCarli C, et al. Progression from normal cognition to mild cognitive impairment in a diverse clinic-based and community-based elderly cohort. Alzheimers Dement. Apr 2017;13(4):399-405. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Langa KM, Levine DA. The diagnosis and management of mild cognitive impairment: a clinical review. JAMA. Dec 17, 2014;312(23):2551-2561. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Zhang Y, Natale G, Clouston S. Incidence of mild cognitive impairment, conversion to probable dementia, and mortality. Am J Alzheimers Dis Other Demen. 2021;36:15333175211012235. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Prince M, Bryce R, Ferri CP. World Alzheimer report 2011: the benefits of early diagnosis and intervention. Alzheimer’s Disease International. 2011. URL: https://www.alzint.org/u/WorldAlzheimerReport2011.pdf [accessed 2024-04-01]
  • Patnode CD, Perdue LA, Rossom RC, Rushkin MC, Redmond N, Thomas RG, et al. Screening for cognitive impairment in older adults: updated evidence report and systematic review for the US preventive services task force. JAMA. Feb 25, 2020;323(8):764-785. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Canadian Task Force on Preventive Health Care, Pottie K, Rahal R, Jaramillo A, Birtwhistle R, Thombs BD, et al. Recommendations on screening for cognitive impairment in older adults. CMAJ. Jan 05, 2016;188(1):37-46. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Tahami Monfared AA, Phan NT, Pearson I, Mauskopf J, Cho M, Zhang Q, et al. A systematic review of clinical practice guidelines for Alzheimer's disease and strategies for future advancements. Neurol Ther. Aug 2023;12(4):1257-1284. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Mattke S, Jun H, Chen E, Liu Y, Becker A, Wallick C. Expected and diagnosed rates of mild cognitive impairment and dementia in the U.S. medicare population: observational analysis. Alzheimers Res Ther. Jul 22, 2023;15(1):128. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Manly JJ, Tang MX, Schupf N, Stern Y, Vonsattel JP, Mayeux R. Frequency and course of mild cognitive impairment in a multiethnic community. Ann Neurol. Apr 2008;63(4):494-506. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Black CM, Ambegaonkar BM, Pike J, Jones E, Husbands J, Khandker RK. The diagnostic pathway from cognitive impairment to dementia in Japan: quantification using real-world data. Alzheimer Dis Assoc Disord. 2019;33(4):346-353. [ CrossRef ] [ Medline ]
  • Ritchie CW, Black CM, Khandker RK, Wood R, Jones E, Hu X, et al. Quantifying the diagnostic pathway for patients with cognitive impairment: real-world data from seven European and north American countries. J Alzheimers Dis. 2018;62(1):457-466. [ CrossRef ] [ Medline ]
  • Folstein MF, Folstein SE, McHugh PR. "Mini-mental state". A practical method for grading the cognitive state of patients for the clinician. J Psychiatr Res. Nov 1975;12(3):189-198. [ CrossRef ] [ Medline ]
  • Tsoi KK, Chan JY, Hirai HW, Wong SY, Kwok TC. Cognitive tests to detect dementia: a systematic review and meta-analysis. JAMA Intern Med. Sep 2015;175(9):1450-1458. [ CrossRef ] [ Medline ]
  • Lopez MN, Charter RA, Mostafavi B, Nibut LP, Smith WE. Psychometric properties of the Folstein mini-mental state examination. Assessment. Jun 2005;12(2):137-144. [ CrossRef ] [ Medline ]
  • Nasreddine ZS, Phillips NA, Bédirian V, Charbonneau S, Whitehead V, Collin I, et al. The Montreal cognitive assessment, MoCA: a brief screening tool for mild cognitive impairment. J Am Geriatr Soc. Apr 2005;53(4):695-699. [ CrossRef ] [ Medline ]
  • O'Caoimh R, Timmons S, Molloy DW. Screening for mild cognitive impairment: comparison of "MCI specific" screening instruments. J Alzheimers Dis. 2016;51(2):619-629. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Trzepacz PT, Hochstetler H, Wang S, Walker B, Saykin AJ, Alzheimer’s Disease Neuroimaging Initiative. Relationship between the Montreal cognitive assessment and mini-mental state examination for assessment of mild cognitive impairment in older adults. BMC Geriatr. Sep 07, 2015;15:107. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Nasreddine ZS, Phillips N, Chertkow H. Normative data for the Montreal Cognitive Assessment (MoCA) in a population-based sample. Neurology. Mar 06, 2012;78(10):765-766. [ CrossRef ] [ Medline ]
  • Monroe T, Carter M. Using the Folstein Mini Mental State Exam (MMSE) to explore methodological issues in cognitive aging research. Eur J Ageing. Sep 2012;9(3):265-274. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Damian AM, Jacobson SA, Hentz JG, Belden CM, Shill HA, Sabbagh MN, et al. The Montreal cognitive assessment and the mini-mental state examination as screening instruments for cognitive impairment: item analyses and threshold scores. Dement Geriatr Cogn Disord. 2011;31(2):126-131. [ CrossRef ] [ Medline ]
  • Kaufer DI, Williams CS, Braaten AJ, Gill K, Zimmerman S, Sloane PD. Cognitive screening for dementia and mild cognitive impairment in assisted living: comparison of 3 tests. J Am Med Dir Assoc. Oct 2008;9(8):586-593. [ CrossRef ] [ Medline ]
  • Gagnon C, Saillant K, Olmand M, Gayda M, Nigam A, Bouabdallaoui N, et al. Performances on the Montreal cognitive assessment along the cardiovascular disease continuum. Arch Clin Neuropsychol. Jan 17, 2022;37(1):117-124. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Cooley SA, Heaps JM, Bolzenius JD, Salminen LE, Baker LM, Scott SE, et al. Longitudinal change in performance on the Montreal cognitive assessment in older adults. Clin Neuropsychol. 2015;29(6):824-835. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • O'Caoimh R, Gao Y, McGlade C, Healy L, Gallagher P, Timmons S, et al. Comparison of the quick mild cognitive impairment (Qmci) screen and the SMMSE in screening for mild cognitive impairment. Age Ageing. Sep 2012;41(5):624-629. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • O'Caoimh R, Molloy DW. Comparing the diagnostic accuracy of two cognitive screening instruments in different dementia subtypes and clinical depression. Diagnostics (Basel). Aug 08, 2019;9(3):93. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Clarnette R, O'Caoimh R, Antony DN, Svendrovski A, Molloy DW. Comparison of the Quick Mild Cognitive Impairment (Qmci) screen to the Montreal Cognitive Assessment (MoCA) in an Australian geriatrics clinic. Int J Geriatr Psychiatry. Jun 2017;32(6):643-649. [ CrossRef ] [ Medline ]
  • Glynn K, Coen R, Lawlor BA. Is the Quick Mild Cognitive Impairment screen (QMCI) more accurate at detecting mild cognitive impairment than existing short cognitive screening tests? A systematic review of the current literature. Int J Geriatr Psychiatry. Dec 2019;34(12):1739-1746. [ CrossRef ] [ Medline ]
  • Lee MT, Chang WY, Jang Y. Psychometric and diagnostic properties of the Taiwan version of the quick mild cognitive impairment screen. PLoS One. 2018;13(12):e0207851. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Wallace SE, Donoso Brown EV, Simpson RC, D'Acunto K, Kranjec A, Rodgers M, et al. A comparison of electronic and paper versions of the Montreal cognitive assessment. Alzheimer Dis Assoc Disord. 2019;33(3):272-278. [ CrossRef ] [ Medline ]
  • Gagnon C, Olmand M, Dupuy EG, Besnier F, Vincent T, Grégoire CA, et al. Videoconference version of the Montreal cognitive assessment: normative data for Quebec-French people aged 50 years and older. Aging Clin Exp Res. Jul 2022;34(7):1627-1633. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Friemel TN. The digital divide has grown old: determinants of a digital divide among seniors. New Media & Society. Jun 12, 2014;18(2):313-331. [ CrossRef ]
  • Ventola CL. Mobile devices and apps for health care professionals: uses and benefits. P T. May 2014;39(5):356-364. [ FREE Full text ] [ Medline ]
  • Searles C, Farnsworth JL, Jubenville C, Kang M, Ragan B. Test–retest reliability of the BrainFx 360® performance assessment. Athl Train Sports Health Care. Jul 2019;11(4):183-191. [ CrossRef ]
  • Jones C, Miguel-Cruz A, Brémault-Phillips S. Technology acceptance and usability of the BrainFx SCREEN in Canadian military members and veterans with posttraumatic stress disorder and mild traumatic brain injury: mixed methods UTAUT study. JMIR Rehabil Assist Technol. May 13, 2021;8(2):e26078. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • McMurray J, Levy A, Holyoke P. Psychometric evaluation and workflow integration study of a tablet-based tool to detect mild cognitive impairment in older adults: protocol for a mixed methods study. JMIR Res Protoc. May 21, 2021;10(5):e25520. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Wilansky P, Eklund JM, Milner T, Kreindler D, Cheung A, Kovacs T, et al. Cognitive behavior therapy for anxious and depressed youth: improving homework adherence through mobile technology. JMIR Res Protoc. Nov 10, 2016;5(4):e209. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Ammenwerth E, Iller C, Mahler C. IT-adoption and the interaction of task, technology and individuals: a fit framework and a case study. BMC Med Inform Decis Mak. Jan 09, 2006;6:3. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Goodhue DL, Thompson RL. Task-technology fit and individual performance. MIS Q. Jun 1995;19(2):213-236. [ CrossRef ]
  • Beuscher L, Grando VT. Challenges in conducting qualitative research with individuals with dementia. Res Gerontol Nurs. Jan 2009;2(1):6-11. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Howe E. Informed consent, participation in research, and the Alzheimer's patient. Innov Clin Neurosci. May 2012;9(5-6):47-51. [ FREE Full text ] [ Medline ]
  • Thorogood A, Mäki-Petäjä-Leinonen A, Brodaty H, Dalpé G, Gastmans C, Gauthier S, et al; Global Alliance for Genomics and Health, Ageing and Dementia Task Team. Consent recommendations for research and international data sharing involving persons with dementia. Alzheimers Dement. Oct 2018;14(10):1334-1343. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Demeyere N, Haupt M, Webb SS, Strobel L, Milosevich ET, Moore MJ, et al. Introducing the tablet-based Oxford Cognitive Screen-Plus (OCS-Plus) as an assessment tool for subtle cognitive impairments. Sci Rep. Apr 12, 2021;11(1):8000. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Nasreddine ZS, Patel BB. Validation of Montreal cognitive assessment, MoCA, alternate French versions. Can J Neurol Sci. Sep 2016;43(5):665-671. [ CrossRef ] [ Medline ]
  • Mueller AE, Segal DL, Gavett B, Marty MA, Yochim B, June A, et al. Geriatric anxiety scale: item response theory analysis, differential item functioning, and creation of a ten-item short form (GAS-10). Int Psychogeriatr. Jul 2015;27(7):1099-1111. [ CrossRef ] [ Medline ]
  • Segal DL, June A, Payne M, Coolidge FL, Yochim B. Development and initial validation of a self-report assessment tool for anxiety among older adults: the Geriatric Anxiety Scale. J Anxiety Disord. Oct 2010;24(7):709-714. [ CrossRef ] [ Medline ]
  • Balsamo M, Cataldi F, Carlucci L, Fairfield B. Assessment of anxiety in older adults: a review of self-report measures. Clin Interv Aging. 2018;13:573-593. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Gatti A, Gottschling J, Brugnera A, Adorni R, Zarbo C, Compare A, et al. An investigation of the psychometric properties of the Geriatric Anxiety Scale (GAS) in an Italian sample of community-dwelling older adults. Aging Ment Health. Sep 2018;22(9):1170-1178. [ CrossRef ] [ Medline ]
  • Yochim BP, Mueller AE, June A, Segal DL. Psychometric properties of the Geriatric Anxiety Scale: comparison to the beck anxiety inventory and geriatric anxiety inventory. Clin Gerontol. Dec 06, 2010;34(1):21-33. [ CrossRef ]
  • Recent concussion (< 6 months ago) analysis result. Daisy Intelligence. 2016. URL: https://www.daisyintelligence.com/retail-solutions/ [accessed 2024-04-01]
  • Malloy DW, O'Caoimh R. The Quick Guide: Scoring and Administration Instructions for The Quick Mild Cognitive Impairment (Qmci) Screen. Waterford, Ireland. Newgrange Press; 2017.
  • O'Caoimh R, Gao Y, Svendovski A, Gallagher P, Eustace J, Molloy DW. Comparing approaches to optimize cut-off scores for short cognitive screening instruments in mild cognitive impairment and dementia. J Alzheimers Dis. 2017;57(1):123-133. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Breton A, Casey D, Arnaoutoglou NA. Cognitive tests for the detection of mild cognitive impairment (MCI), the prodromal stage of dementia: meta-analysis of diagnostic accuracy studies. Int J Geriatr Psychiatry. Feb 2019;34(2):233-242. [ CrossRef ] [ Medline ]
  • Umemneku Chikere CM, Wilson K, Graziadio S, Vale L, Allen AJ. Diagnostic test evaluation methodology: a systematic review of methods employed to evaluate diagnostic tests in the absence of gold standard - An update. PLoS One. 2019;14(10):e0223832. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Espinosa A, Alegret M, Boada M, Vinyes G, Valero S, Martínez-Lage P, et al. Ecological assessment of executive functions in mild cognitive impairment and mild Alzheimer's disease. J Int Neuropsychol Soc. Sep 2009;15(5):751-757. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Hawkins DM, Garrett JA, Stephenson B. Some issues in resolution of diagnostic tests using an imperfect gold standard. Stat Med. Jul 15, 2001;20(13):1987-2001. [ CrossRef ] [ Medline ]
  • Hadgu A, Dendukuri N, Hilden J. Evaluation of nucleic acid amplification tests in the absence of a perfect gold-standard test: a review of the statistical and epidemiologic issues. Epidemiology. Sep 2005;16(5):604-612. [ CrossRef ] [ Medline ]
  • Marx RG, Menezes A, Horovitz L, Jones EC, Warren RF. A comparison of two time intervals for test-retest reliability of health status instruments. J Clin Epidemiol. Aug 2003;56(8):730-735. [ CrossRef ] [ Medline ]
  • Paiva CE, Barroso EM, Carneseca EC, de Pádua Souza C, Dos Santos FT, Mendoza López RV, et al. A critical analysis of test-retest reliability in instrument validation studies of cancer patients under palliative care: a systematic review. BMC Med Res Methodol. Jan 21, 2014;14:8. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Streiner DL, Kottner J. Recommendations for reporting the results of studies of instrument and scale development and testing. J Adv Nurs. Sep 2014;70(9):1970-1979. [ CrossRef ] [ Medline ]
  • Streiner DL. A checklist for evaluating the usefulness of rating scales. Can J Psychiatry. Mar 1993;38(2):140-148. [ CrossRef ] [ Medline ]
  • Peyre H, Leplège A, Coste J. Missing data methods for dealing with missing items in quality of life questionnaires. A comparison by simulation of personal mean score, full information maximum likelihood, multiple imputation, and hot deck techniques applied to the SF-36 in the French 2003 decennial health survey. Qual Life Res. Mar 2011;20(2):287-300. [ CrossRef ] [ Medline ]
  • Nevado-Holgado AJ, Kim CH, Winchester L, Gallacher J, Lovestone S. Commonly prescribed drugs associate with cognitive function: a cross-sectional study in UK Biobank. BMJ Open. Nov 30, 2016;6(11):e012177. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Moore AR, O'Keeffe ST. Drug-induced cognitive impairment in the elderly. Drugs Aging. Jul 1999;15(1):15-28. [ CrossRef ] [ Medline ]
  • Rogers J, Wiese BS, Rabheru K. The older brain on drugs: substances that may cause cognitive impairment. Geriatr Aging. 2008;11(5):284-289. [ FREE Full text ]
  • Marvanova M. Drug-induced cognitive impairment: effect of cardiovascular agents. Ment Health Clin. Jul 2016;6(4):201-206. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Espeland MA, Rapp SR, Manson JE, Goveas JS, Shumaker SA, Hayden KM, et al; WHIMSY and WHIMS-ECHO Study Groups. Long-term effects on cognitive trajectories of postmenopausal hormone therapy in two age groups. J Gerontol A Biol Sci Med Sci. Jun 01, 2017;72(6):838-845. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Luis CA, Keegan AP, Mullan M. Cross validation of the Montreal cognitive assessment in community dwelling older adults residing in the Southeastern US. Int J Geriatr Psychiatry. Feb 2009;24(2):197-201. [ CrossRef ] [ Medline ]
  • Cunje A, Molloy DW, Standish TI, Lewis DL. Alternate forms of logical memory and verbal fluency tasks for repeated testing in early cognitive changes. Int Psychogeriatr. Feb 2007;19(1):65-75. [ CrossRef ] [ Medline ]
  • Molloy DW, Standish TI, Lewis DL. Screening for mild cognitive impairment: comparing the SMMSE and the ABCS. Can J Psychiatry. Jan 2005;50(1):52-58. [ CrossRef ] [ Medline ]
  • Streiner DL, Norman GR. Health Measurement Scales: A Practical Guide to Their Development and Use. 4th edition. Oxford, UK. Oxford University Press; 2008.
  • Koo TK, Li MY. A guideline of selecting and reporting intraclass correlation coefficients for reliability research. J Chiropr Med. Jun 2016;15(2):155-163. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • McHugh ML. Interrater reliability: the kappa statistic. Biochem Med (Zagreb). 2012;22(3):276-282. [ FREE Full text ] [ Medline ]
  • Zhuang L, Yang Y, Gao J. Cognitive assessment tools for mild cognitive impairment screening. J Neurol. May 2021;268(5):1615-1622. [ CrossRef ] [ Medline ]
  • Zhang J, Wang L, Deng X, Fei G, Jin L, Pan X, et al. Five-minute cognitive test as a new quick screening of cognitive impairment in the elderly. Aging Dis. Dec 2019;10(6):1258-1269. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Feldman HH, Jacova C, Robillard A, Garcia A, Chow T, Borrie M, et al. Diagnosis and treatment of dementia: 2. Diagnosis. CMAJ. Mar 25, 2008;178(7):825-836. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Sabbagh MN, Boada M, Borson S, Chilukuri M, Dubois B, Ingram J, et al. Early detection of mild cognitive impairment (MCI) in primary care. J Prev Alzheimers Dis. 2020;7(3):165-170. [ CrossRef ] [ Medline ]
  • Milne A. Dementia screening and early diagnosis: the case for and against. Health Risk Soc. Mar 05, 2010;12(1):65-76. [ CrossRef ]
  • Screening tools to identify adults with cognitive impairment associated with dementia: diagnostic accuracy. Canadian Agency for Drugs and Technologies in Health. 2014. URL: https:/​/www.​cadth.ca/​sites/​default/​files/​pdf/​htis/​nov-2014/​RB0752%20Cognitive%20Assessments%20for%20Dementia%20Final.​pdf [accessed 2024-04-01]
  • Chehrehnegar N, Nejati V, Shati M, Rashedi V, Lotfi M, Adelirad F, et al. Early detection of cognitive disturbances in mild cognitive impairment: a systematic review of observational studies. Psychogeriatrics. Mar 2020;20(2):212-228. [ CrossRef ] [ Medline ]
  • Chan JY, Yau ST, Kwok TC, Tsoi KK. Diagnostic performance of digital cognitive tests for the identification of MCI and dementia: a systematic review. Ageing Res Rev. Dec 2021;72:101506. [ CrossRef ] [ Medline ]
  • Cubillos C, Rienzo A. Digital cognitive assessment tests for older adults: systematic literature review. JMIR Ment Health. Dec 08, 2023;10:e47487. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Chen R, Foschini L, Kourtis L, Signorini A, Jankovic F, Pugh M, et al. Developing measures of cognitive impairment in the real world from consumer-grade multimodal sensor streams. In: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2019. Presented at: KDD '19; August 4-8, 2019;2145; Anchorage, AK. URL: https://dl.acm.org/doi/10.1145/3292500.3330690 [ CrossRef ]
  • Koo BM, Vizer LM. Mobile technology for cognitive assessment of older adults: a scoping review. Innov Aging. Jan 2019;3(1):igy038. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Zygouris S, Ntovas K, Giakoumis D, Votis K, Doumpoulakis S, Segkouli S, et al. A preliminary study on the feasibility of using a virtual reality cognitive training application for remote detection of mild cognitive impairment. J Alzheimers Dis. 2017;56(2):619-627. [ CrossRef ] [ Medline ]
  • Liu Q, Song H, Yan M, Ding Y, Wang Y, Chen L, et al. Virtual reality technology in the detection of mild cognitive impairment: a systematic review and meta-analysis. Ageing Res Rev. Jun 2023;87:101889. [ CrossRef ] [ Medline ]
  • Fayemiwo MA, Olowookere TA, Olaniyan OO, Ojewumi TO, Oyetade IS, Freeman S, et al. Immediate word recall in cognitive assessment can predict dementia using machine learning techniques. Alzheimers Res Ther. Jun 15, 2023;15(1):111. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Statsenko Y, Meribout S, Habuza T, Almansoori TM, van Gorkom KN, Gelovani JG, et al. Patterns of structure-function association in normal aging and in Alzheimer's disease: screening for mild cognitive impairment and dementia with ML regression and classification models. Front Aging Neurosci. 2022;14:943566. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Roebuck-Spencer TM, Glen T, Puente AE, Denney RL, Ruff RM, Hostetter G, et al. Cognitive screening tests versus comprehensive neuropsychological test batteries: a national academy of neuropsychology education paper†. Arch Clin Neuropsychol. Jun 01, 2017;32(4):491-498. [ CrossRef ] [ Medline ]
  • Jammeh EA, Carroll CB, Pearson SW, Escudero J, Anastasiou A, Zhao P, et al. Machine-learning based identification of undiagnosed dementia in primary care: a feasibility study. BJGP Open. Jul 2018;2(2):bjgpopen18X101589. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Riello M, Rusconi E, Treccani B. The role of brief global cognitive tests and neuropsychological expertise in the detection and differential diagnosis of dementia. Front Aging Neurosci. 2021;13:648310. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • McDonnell M, Dill L, Panos S, Amano S, Brown W, Giurgius S, et al. Verbal fluency as a screening tool for mild cognitive impairment. Int Psychogeriatr. Sep 2020;32(9):1055-1062. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Wojtowicz A, Larner AJ. Diagnostic test accuracy of cognitive screeners in older people. Prog Neurol Psychiatry. Mar 20, 2017;21(1):17-21. [ CrossRef ]
  • Larner AJ. Cognitive screening instruments for the diagnosis of mild cognitive impairment. Prog Neurol Psychiatry. Apr 07, 2016;20(2):21-26. [ CrossRef ]
  • Heintz BD, Keenan KG. Spiral tracing on a touchscreen is influenced by age, hand, implement, and friction. PLoS One. 2018;13(2):e0191309. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Laguna K, Babcock RL. Computer anxiety in young and older adults: implications for human-computer interactions in older populations. Comput Human Behav. Aug 1997;13(3):317-326. [ CrossRef ]
  • Wild KV, Mattek NC, Maxwell SA, Dodge HH, Jimison HB, Kaye JA. Computer-related self-efficacy and anxiety in older adults with and without mild cognitive impairment. Alzheimers Dement. Nov 2012;8(6):544-552. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Wiechmann D, Ryan AM. Reactions to computerized testing in selection contexts. Int J Sel Assess. Jul 30, 2003;11(2-3):215-229. [ CrossRef ]
  • Gass CS, Curiel RE. Test anxiety in relation to measures of cognitive and intellectual functioning. Arch Clin Neuropsychol. Aug 2011;26(5):396-404. [ CrossRef ] [ Medline ]
  • Barbic D, Kim B, Salehmohamed Q, Kemplin K, Carpenter CR, Barbic SP. Diagnostic accuracy of the Ottawa 3DY and short blessed test to detect cognitive dysfunction in geriatric patients presenting to the emergency department. BMJ Open. Mar 16, 2018;8(3):e019652. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Owens AP, Ballard C, Beigi M, Kalafatis C, Brooker H, Lavelle G, et al. Implementing remote memory clinics to enhance clinical care during and after COVID-19. Front Psychiatry. 2020;11:579934. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Geddes MR, O'Connell ME, Fisk JD, Gauthier S, Camicioli R, Ismail Z, et al. Alzheimer Society of Canada Task Force on Dementia Care Best Practices for COVID‐19. Remote cognitive and behavioral assessment: report of the Alzheimer Society of Canada task force on dementia care best practices for COVID-19. Alzheimers Dement (Amst). 2020;12(1):e12111. [ FREE Full text ] [ CrossRef ] [ Medline ]

Edited by G Eysenbach, T de Azevedo Cardoso; submitted 29.01.24; peer-reviewed by J Gao, MJ Moore; comments to author 20.02.24; revised version received 05.03.24; accepted 19.03.24; published 19.04.24.

©Josephine McMurray, AnneMarie Levy, Wei Pang, Paul Holyoke. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 19.04.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.

  • Open access
  • Published: 26 November 2023

Impact of industrial robots on environmental pollution: evidence from China

  • Yanfang Liu

Scientific Reports volume 13, Article number: 20769 (2023)


  • Environmental sciences
  • Environmental social sciences

The application of industrial robots is considered a significant factor affecting environmental pollution. Selecting industrial wastewater discharge, industrial SO2 emissions, and industrial soot emissions as the evaluation indicators of environmental pollution, this paper uses the panel data model and mediation effect model to empirically examine the impact of industrial robots on environmental pollution and its mechanisms. The conclusions are as follows: (1) Industrial robots can significantly reduce environmental pollution. (2) Industrial robots can reduce environmental pollution by improving the level of green technology innovation and optimizing the structure of employment skills. (3) With the increase in emissions of industrial wastewater, industrial SO2, and industrial dust, the impacts generated by industrial robots exhibit trends of a “W” shape, gradual intensification, and progressive weakening, respectively. (4) Regarding regional heterogeneity, industrial robots in the eastern region have the greatest negative impact on environmental pollution, followed by the central region, and the western region has the least negative impact. Regarding time heterogeneity, the emission reduction effect of industrial robots after 2013 is greater than that before 2013. Based on the above conclusions, this paper suggests that the Chinese government and enterprises should increase investment in the robot industry, using industrial robots to drive green technology innovation and optimize employment skill structures, thereby reducing environmental pollution.


Introduction

Since the reform and opening up, China’s rapid economic growth has created a world-renowned “economic growth miracle” 1 . With this rapid economic growth, China’s environmental pollution problem has become more and more serious 2 . According to the “ Global Environmental Performance Index Report ” released by Yale University in 2022, China’s environmental performance index score was 28.4 points, ranking 160th out of 180 participating countries. The aggravation of environmental pollution not only affects residents’ health 3 , but also affects the efficiency of economic operation 4 . According to calculations by the General Administration of Environmental Protection, the World Bank, and the Chinese Academy of Sciences, China’s annual losses caused by environmental pollution account for about 10% of GDP. Exploring the factors that affect environmental pollution and seeking ways to reduce it are conducive to developing the economy within environmental limits.

Industrial robots are machines that can be automatically controlled, repeatedly programmed, and used for multiple purposes 5 . They replace the low-skilled labor force engaged in procedural work 6 , reducing the raw materials required for manual operation. Industrial robots improve the clean technology level and energy efficiency of coal combustion, reducing pollutant emissions in front-end production. Industrial robots also monitor energy consumption and sewage discharge in the production process in real time. The excessive discharge behavior of enterprises in the production process is regulated, reducing the emission of pollutants in end treatment. Based on the selection and coding of literature (Appendix A ), this paper uses the meta-analysis method to compare the impacts of multiple factors such as economics, population, technology, and policy on environmental pollution. As shown in Table 1 , compared to other factors, industrial robots demonstrate greater advantages in reducing environmental pollution. However, there is a lack of research on the relationship between industrial robots and environmental pollution in China. With the advent of the artificial intelligence era, China’s industrial robot industry has developed rapidly. According to data released by the International Federation of Robotics (IFR), from 1999 to 2019, China’s industrial robot ownership and installations show a year-by-year increasing trend (Fig.  1 ). In 2013 and 2016, China’s industrial robot installations (36,560) and ownership (349,470) exceeded Japan’s for the first time, making China the world’s largest country in terms of installation and ownership of industrial robots. Does the application of industrial robots in China contribute to the reduction of environmental pollution? What is the mechanism of the impact of China’s industrial robots on environmental pollution? Researching these questions is crucial for filling the gaps in existing research and providing a reference for other countries seeking to achieve robot-driven emission reduction.

Figure 1. Industrial robot installations in the world’s top five industrial robot markets from 1999 to 2019.

Based on the above analysis, this paper innovatively incorporates industrial robots and environmental pollution into a unified framework. Based on the panel data of 30 provinces in China from 2006 to 2019, this paper uses the ordinary panel model and mediating effect model to empirically test the impact of industrial robots on China’s environmental pollution and its transmission channels. The panel quantile model is used to empirically analyze the heterogeneous impact of industrial robots on environmental pollution under different environmental pollution levels.

Literature review

A large number of scholars have begun to study the problem of environmental pollution. This research mainly covers two aspects: the measurement of environmental pollution and its influencing factors. Regarding measurement, some scholars have used single indicators such as SO2 emissions 7 , industrial soot emissions 8 , and PM2.5 concentration 9 to measure the degree of environmental pollution. A single indicator cannot fully and scientifically reflect the degree of environmental pollution. To make up for this defect, some scholars have included industrial SO2 emissions, industrial wastewater discharge, and industrial soot emissions in the environmental pollution evaluation system and used the entropy method to measure the environmental pollution level 10 . This method ignores the different characteristics and temporal and spatial trends of different pollutants, which makes the analysis one-sided. Regarding the influencing factors, economic factors such as economic development level 11 , foreign direct investment 12 and income 13 , population factors such as population size 14 and urbanization level 15 , and energy consumption 16 all have an impact on environmental pollution. Specifically, economic development and technological innovation can effectively reduce environmental pollution 17 . The expansion of population size can aggravate environmental pollution. Income inequality can reduce environmental pollution, but higher levels of income inequality may aggravate it 18 . Both the “pollution haven hypothesis” and the “pollution halo hypothesis” have been proposed to describe the relationship between foreign direct investment and environmental pollution 19 . Technological factors also have a non-negligible impact on environmental pollution 20 .

With continuous deepening of research, scholars have begun to focus on the impact of automation technology, especially industrial robot technology, on the environment. Ghobakhloo et al. 21 theoretically analyzed the impact of industrial robots on energy sustainability, contending that the application of industrial robots could foster sustainable development of energy. Using data from multiple countries, a few scholars have empirically analyzed the effect of industrial robots on environmental pollution (Table 2 ). Luan et al. 22 used panel data from 73 countries between 1993 and 2019 to empirically analyze the impact of industrial robots on air pollution, finding that the use of industrial robots intensifies environmental pollution. Using panel data from 66 countries from 1993 to 2018, Wang et al. 23 analyzed the impact of industrial robots on carbon intensity and found that industrial robots can reduce carbon intensity. On the basis of analyzing the overall impact of industrial robots on environmental pollution, some scholars conducted in-depth exploration of its mechanism. Based on data from 72 countries between 1993 and 2019, Chen et al. 5 explored the impact of industrial robots on the ecological footprint, discovering that industrial robots can reduce the ecological footprint through time saving effect, green employment effect and energy upgrading effect. Using panel data from 35 countries between 1993 and 2017, Li et al. 24 empirically examined the carbon emission reduction effect of industrial robots, finding that industrial robots can effectively reduce carbon emissions by increasing green total factor productivity and reducing energy intensity. Although the above studies have successfully estimated the overall impact of industrial robots on environmental pollution and its mechanisms, they have not fully considered the role of technological progress, labor structure and other factors in the relationship between the two. These studies all chose data from multiple countries as research samples and lack research on the relationship between industrial robots and environmental pollution in China, an emerging country.

The above literature provides inspiration for this study, but there are still shortcomings in the following aspects: Firstly, there is a lack of research on the relationship between industrial robots and environmental pollution in emerging countries. There are significant differences between emerging and developed countries in terms of institutional background and the degree of environmental pollution. As a representative emerging country, research on the relationship between industrial robots and environmental pollution in China can provide reliable references for other emerging countries. Secondly, theoretically, the study of the impact of industrial robots on environmental pollution is still in its initial stage. There are few studies that deeply explore its impact mechanism, and there is a lack of analysis of the role of technological progress and labor structure in the relationship between the two.

The innovations of this paper are as follows: (1) In terms of sample selection, this paper selects panel data from 30 provinces in China from 2006 to 2019 as research samples to explore the relationship between industrial robots and environmental pollution in China, providing references for other emerging countries to improve environmental quality using industrial robots. (2) In terms of theory, this paper is not limited to revealing the superficial relationship between industrial robots and environmental pollution. It starts from a new perspective and provides an in-depth analysis of how industrial robots affect environmental pollution through employment skill structure and green technology innovation. This not only enriches research in the fields of industrial robots and the environment, but is also of great significance in guiding the direction of industrial policy and technology research and development.

Theoretical analysis and hypothesis

Industrial robots and environmental pollution

As shown in Fig.  2 , the impact of industrial robots on environmental pollution is mainly reflected in two aspects: front-end production and end treatment. In front-end production, industrial robots enable the substitution of machines for manual labor 25 . Manual operation is replaced by machine operation, reducing the raw materials needed for manual work. Through the specific program settings of industrial robots, clean energy is applied to industrial production 26 , and the use of traditional fuels such as coal and oil is reduced. In terms of end treatment, traditional pollutant concentration testers measure only a single type of pollutant, and their data cannot be obtained in time, which makes pollution incidents more likely. Industrial robots can measure a variety of pollutants and have remote unmanned operation and warning functions. They reflect the pollution situation in time, reducing the probability of pollution incidents. The use of robots can also upgrade sewage treatment equipment and improve the accuracy of pollution treatment, reducing pollutant emissions. Based on the above analysis, this paper proposes hypothesis 1.

Figure 2. The impact of industrial robots on environmental pollution.

Hypothesis 1

The use of industrial robots can reduce environmental pollution.

Mediating effect of green technology innovation

Industrial robots can affect environmental pollution by promoting green technology innovation, forming the transmission path of “industrial robots-green technology innovation-environmental pollution.” Industrial robots are the materialization of technological progress in the field of enterprise R&D. Their impact on green technology innovation is mainly manifested in two aspects. Firstly, industrial robots classify known knowledge, which helps enterprises integrate internal and external knowledge 27 , promoting enterprises’ green technology innovation activities. Secondly, enterprises can simulate existing green technologies with industrial robots, identify the shortcomings of green technology in each link, and improve the technology in a targeted manner. Industrial robots can also collect and organize data, which enables enterprises to predict production costs and raw material consumption. Excessive procurement ties up working capital, and inventory backlogs lead to warehousing, logistics, and other expenses, increasing storage costs 28 . Forecasting the consumption of raw materials allows enterprises to purchase precisely, preventing over-procurement and inventory backlog, thereby reducing the use of working capital and storage costs 29 . Production costs are reduced, leaving enterprises with more funds for green technology research and development.

Continuous green technology innovation helps solve the problem of environmental pollution. Firstly, green technology innovation helps enterprises use resources more efficiently 30 , lowering dependence on traditional energy sources and reducing environmental damage. Secondly, green technology innovation promotes the greening of enterprises in manufacturing, sales, and after-sales 31 , reducing the emission of pollutants in the production process. Finally, green technology innovation improves the competitive advantages of enterprises in the market 32 . The production possibility curve expands outward, which encourages enterprises to carry out intensive production. Based on the above analysis, this paper proposes hypothesis 2.

Hypothesis 2

Industrial robots can reduce environmental pollution through green technology innovation.

Mediating effect of employment skill structure

Industrial robots can affect environmental pollution through the employment skill structure, forming the transmission path of “industrial robots-employment skill structure-environmental pollution.” Industrial robots have a substitution effect and a creation effect on the labor force, improving the employment skill structure. Regarding the substitution effect, enterprises use industrial robots to complete simple and repetitive tasks to improve production efficiency, which crowds out low-skilled labor 6 . Regarding the creation effect, industrial robots create demand for new job roles that match automation, such as robot engineers, data analysts, and machine repairers, which increases the amount of high-skilled labor 33 . The reduction in low-skilled labor and the increase in high-skilled labor improve the employment skill structure.

High-skilled labor is reflected in the level of education 34 ; in essence, it embodies a higher level of skills and environmental awareness, which is key to reducing environmental pollution. Compared with low-skilled labor, high-skilled labor has a stronger ability to acquire knowledge and understand skills, which improves the efficiency of cleaning equipment and promotes emission reduction. Interaction and communication between highly skilled workers are also crucial for emission reduction. An excessive wage gap between employees brings high communication costs, which hinders the exchange of knowledge and technology between different employees. An increase in the proportion of high-skilled labor can mitigate this problem and improve the production efficiency of enterprises 35 . The improvement in production efficiency enables more investment in emission reduction research, decreasing pollutant emissions. Based on the above analysis, this paper proposes hypothesis 3.

Hypothesis 3

Industrial robots can reduce environmental pollution by optimizing the employment skill structure.

Model construction and variable selection

Model construction

Panel data model.

The panel data model is a widely used statistical method, first introduced by Mundlak 36 . Numerous scholars have since used this model to examine baseline relationships between core explanatory variables and explained variables 37 . To test the impact of industrial robots on environmental pollution, this paper sets the following panel data model:
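The equation itself appears only as an image in the published article; a plausible reconstruction, consistent with the variable definitions below and with the total effect \(\alpha_1\) used later in the mediating effect analysis, is:

\(Y_{it} = \alpha_0 + \alpha_1 IR_{it} + \alpha_2 X_{it} + \lambda_i + \varphi_t + \varepsilon_{it}\)   (1)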

In formula ( 1 ), \(Y_{it}\) is the explained variable, indicating the degree of environmental pollution in region i in year t. \(IR_{it}\) is the core explanatory variable, indicating the installation density of industrial robots in region i in year t. \(X_{it}\) is a set of control variables, including economic development level (GDP), urbanization level (URB), industrial structure (EC), government intervention (GOV) and environmental regulation (ER). \(\lambda_i\) is the regional effect, \(\varphi_t\) is the time effect, and \(\varepsilon_{it}\) is the disturbance term.

Mediating effect model

To test the transmission mechanism of industrial robots affecting environmental pollution, this paper sets the following mediating effect model:
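The equations are likewise rendered as images in the published article; a plausible reconstruction consistent with the coefficient names used below is:

\(M_{it} = \beta_0 + \beta_1 IR_{it} + \beta_2 X_{it} + \lambda_i + \varphi_t + \varepsilon_{it}\)   (2)

\(Y_{it} = \theta_0 + \theta_1 IR_{it} + \theta_2 M_{it} + \theta_3 X_{it} + \lambda_i + \varphi_t + \varepsilon_{it}\)   (3)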

In formula ( 2 ), \(M_{it}\) is the mediating variable, which includes green technology innovation and the employment skill structure. Formula ( 2 ) measures the impact of industrial robots on the mediating variables, and formula ( 3 ) measures the impact of the mediating variables on environmental pollution. According to the principle of the mediating effect 38 , the direct effect \(\theta_1\) , the mediating effect \(\beta_1 \times \theta_2\) and the total effect \(\alpha_1\) satisfy \(\alpha_1 = \theta_1 + \beta_1 \times \theta_2\) .
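To make the decomposition concrete, the short sketch below computes the mediated share \(\beta_1 \times \theta_2 / \alpha_1\) in Python; the coefficient values are hypothetical placeholders, not the paper's estimates.

# Decomposition of a mediating effect (hypothetical placeholder coefficients).
alpha1 = -0.242   # total effect of robot density on pollution
beta1 = 0.722     # effect of robot density on the mediator
theta2 = -0.027   # effect of the mediator on pollution

mediating_effect = beta1 * theta2          # indirect (mediated) effect
direct_effect = alpha1 - mediating_effect  # implied direct effect theta1
share = mediating_effect / alpha1          # proportion of the total effect that is mediated
print(f"mediated effect = {mediating_effect:.4f}, share of total = {share:.2%}")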

Panel quantile model

The panel quantile model was first proposed by Koenker and Bassett 39 . It is mainly used to analyze the impact of core explanatory variables on the explained variable at different quantiles 40 . To empirically test the heterogeneous impact of industrial robots on environmental pollution under different levels of environmental pollution, this paper sets the following panel quantile model:
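The published equation is not reproduced here; a plausible form, treating the regional and time effects in the same way as the baseline model (an assumption on our part), is:

\(Q_{Y_{it}}(\tau \mid IR_{it}, X_{it}) = \gamma_0(\tau) + \gamma_1(\tau) IR_{it} + \gamma_2(\tau) X_{it} + \lambda_i + \varphi_t\)   (4)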

In formula ( 4 ), \(\tau\) represents the quantile. \(\gamma_1\) reflects the difference in the impact of industrial robots on environmental pollution at different quantiles, and \(\gamma_2\) captures the effects of the control variables at different quantiles.

Variable selection

Explained variable.

The explained variable is environmental pollution. Considering the timeliness and availability of data, this paper selects industrial wastewater discharge, industrial SO 2 emissions and industrial soot emissions as indicators of environmental pollution.

Explanatory variable

According to production theory, industrial robots can enhance production efficiency 41 . Efficient production implies reduced energy wastage, which in turn decreases the emission of pollutants. Industrial robots can upgrade pollution control equipment, heightening the precision in pollution treatment and reducing pollutant discharge. Referring to Acemoglu and Restrepo 25 , this paper selects the installation density of industrial robots as a measure. The specific formula is as follows:
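The published formula is not reproduced here; a standard Bartik-style construction consistent with the variable definitions below (the regional and industry labor totals in the denominators follow Acemoglu and Restrepo and are our assumption) is:

\(IR_{it} = \sum_{j} \frac{Labor_{ji}}{Labor_{i}} \times \frac{IR_{jt}}{Labor_{jt}}\)   (5)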

In formula ( 5 ), \(Labor_{ji}\) is the labor force of industry j in region i, and \(IR_{jt}\) is the stock of industrial robots used in industry j in year t.

Mediating variable

Green technology innovation. Industrial robots can increase the demand for highly skilled labor 42 , subsequently influencing green technology innovation. Compared with ordinary labor, highly skilled labor possesses a richer knowledge base and a stronger capacity for technological learning, improving the level of green technology innovation. Green technology innovation can improve energy efficiency 43 , reducing the pollution generated by energy consumption. There are three main ways to measure green technology innovation. The first is to use technology invention patents as the indicator; however, because some invention patents are never applied in the production process, they cannot fully reflect the level of technological innovation. The second is to use green product innovation and green process innovation as indicators. The third is to use the number of green patent applications or authorizations as the measure 44 . This paper selects the number of green patent applications as the measure of green technology innovation.

Employment skill structure. The use of industrial robots reduces the demand for labor performing simple, repetitive tasks and increases the need for engineers, technicians and other specialized skilled personnel, improving the employment skill structure 45 . Compared with ordinary workers, highly skilled workers typically have stronger environmental awareness 46 . Such environmental consciousness can influence corporate decisions, prompting companies to adopt eco-friendly production methods and thus reducing environmental pollution. There are two main ways to measure the employment skill structure: one is the proportion of employees with a college degree or above in the total number of employees; the other is the proportion of researchers. Because educational attainment better reflects skill differences among workers, this paper uses the first method to measure the employment skill structure.
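In symbols (our notation, not the paper's), the employment skill structure measure can be written as \(SKILL_{it} = Employees^{college+}_{it} / Employees_{it}\), the share of employees with a college degree or above in total employment in region i in year t.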

Control variable

Economic development level. According to the EKC hypothesis 47 , in the initial stage of economic development, growth mainly depends on the input of production factors, which aggravates environmental pollution. As the economy continues to develop, people begin to demand higher environmental quality, and the government adopts more stringent policies to control pollution, which can reduce the level of environmental pollution. Following Liu and Lin 48 , this paper uses per capita GDP to measure the economic development level.

Urbanization level. The improvement of the urbanization level has both positive and negative effects on pollution. Urbanization strengthens the agglomeration effect of cities, which not only promotes the sharing of public resources such as infrastructure and health care but also facilitates the centralized treatment of pollution, improving the efficiency of environmental governance 49 . At the same time, accelerating urbanization increases the demand for housing, home appliances and private cars, which raises pollutant emissions 50 . This paper uses the proportion of the urban population in the total population to measure the level of urbanization.

Industrial structure. Industrial structure is one of the key factors determining the quality of a country's environment 51 . An increase in the proportion of capital- and technology-intensive industries can effectively improve resource utilization efficiency and reduce resource waste 52 . This paper selects the ratio of the added value of the tertiary industry to that of the secondary industry to measure the industrial structure.

Government intervention. Government intervention mainly affects environmental pollution in two ways. Firstly, the government can give high-tech, energy-saving and consumption-reducing enterprises preferential policies, which promotes the development of these enterprises' emission reduction technologies 53 . Secondly, the government strengthens environmental regulation by increasing investment in environmental law enforcement, forcing enterprises to save energy and reduce emissions 54 . This paper selects the proportion of government expenditure in GDP to measure government intervention.

Environmental regulation. Investment in environmental pollution control is conducive to the development of clean and environmentally friendly technology, optimizing process flows and improving the green production efficiency of enterprises 55 , thereby reducing pollutant emissions. This paper selects the proportion of investment in pollution control to GDP to measure environmental regulation.

Data sources and descriptive statistics

This paper selects panel data for 30 Chinese provinces from 2006 to 2019 as the research sample. The installation data of industrial robots are derived from the International Federation of Robotics (IFR). Data on the labor force and on employees with a college degree or above are from the China Labor Statistics Yearbook , and other data are from the China Statistical Yearbook . The descriptive statistics of the variables are shown in Table 3 . Considering its breadth of application and the reliability of its analysis capabilities, this paper uses Stata 16 for the regression analysis.
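For readers who prefer an open-source workflow, the baseline two-way fixed effects regression described above can be sketched in Python as follows; the file and column names are hypothetical, and the paper itself runs the estimations in Stata 16.

import pandas as pd
from linearmodels.panel import PanelOLS

# Hypothetical panel: one row per province-year, variables already log-transformed where appropriate.
df = pd.read_csv("robots_pollution_panel.csv")
panel = df.set_index(["province", "year"])

# Two-way fixed effects (province and year), with standard errors clustered by province.
model = PanelOLS.from_formula(
    "lnWastewater ~ lnIR + lnGDP + URB + EC + GOV + ER + EntityEffects + TimeEffects",
    data=panel,
)
result = model.fit(cov_type="clustered", cluster_entity=True)
print(result.summary)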

Results analysis

Spatial and temporal characteristics of environmental pollution and industrial robots in China

Environmental pollution.

Figure 3a shows the overall trend of average industrial wastewater discharge in China from 2006 to 2019. Over this period, industrial wastewater discharge shows a fluctuating downward trend, mainly due to improvements in wastewater treatment facilities and treatment capacity. Figure 3b shows the changing trend of average industrial wastewater discharge in the 30 provinces from 2006 to 2019. Industrial wastewater discharge has declined in most provinces, but in some provinces, such as Fujian, Guizhou and Qinghai, it has increased, so their emission reduction task remains arduous.

Figure 3. Industrial wastewater discharge from 2006 to 2019.

Figure 4a shows the overall trend of average industrial SO 2 emissions in China from 2006 to 2019. Over this period, industrial SO 2 emissions show a fluctuating downward trend, indicating that air pollution control and supervision are effective. Figure 4b shows the trend of average industrial SO 2 emissions in the 30 provinces from 2006 to 2019. Similar to industrial wastewater, industrial SO 2 emissions decrease in most provinces.

Figure 4. Industrial SO 2 emissions from 2006 to 2019.

Figure 5a shows the overall trend of average industrial soot emissions in China from 2006 to 2019. Unlike industrial wastewater and industrial SO 2 , industrial soot emissions increase year by year. From the perspective of the governance investment structure, the share of investment devoted to industrial soot is low compared with industrial wastewater and industrial SO 2 . From the perspective of sources, industrial soot mainly comes from urban operations and industrial manufacturing, and the acceleration of urbanization and the expansion of manufacturing have led to an increase in industrial soot emissions. Figure 5b shows the trend of industrial soot emissions in the 30 provinces from 2006 to 2019. Industrial soot emissions have increased in most provinces.

Figure 5. Industrial soot emissions from 2006 to 2019.

Figure 6 shows the spatial distribution characteristics of industrial wastewater, industrial SO 2 and industrial soot emissions. Emissions of all three pollutants are largest in the central region, followed by the eastern region, and smallest in the western region. Owing to its resource conditions and geographical location, the central region is dominated by heavy industry, and its extensive development model of high input and high consumption makes its pollutant emissions higher than those of the eastern and western regions. The eastern region is dominated by capital- and technology-intensive industries, which keeps its pollutant emissions below those of the central region. Although heavy industry is also the leading industry in the western region, its production and transportation scale is smaller, so it produces fewer pollutants.

Figure 6. Spatial distribution characteristics of industrial wastewater, industrial SO 2 and industrial soot.

Industrial robots

Figure 7a shows the overall trend of the installation density of industrial robots in China from 2006 to 2019, which increases year by year. Rising labor costs and falling robot costs lead enterprises to use more industrial robots, which substitute for labor and raise the installation density. Figure 7b shows the trend of the installation density of industrial robots in the 30 provinces from 2006 to 2019. The installation density has increased in most provinces; Guangdong Province has the largest growth rate and Heilongjiang Province the smallest.

Figure 7. Installation density of industrial robots from 2006 to 2019.

Figure 8 shows the spatial distribution characteristics of the installation density of industrial robots. The installation density is largest in the eastern region, followed by the central region, and smallest in the western region. The eastern region is economically developed and attracts a large number of talented workers, providing talent support for the development of industrial robots; its advanced technology also fosters their rapid development. The less developed economy of the western region inhibits the development of industrial robots.

Figure 8. Spatial distribution characteristics of industrial robots.

Benchmark regression results

Table 4 reports the estimation results of the ordinary panel model. The F test and LM test show that the pooled OLS model should not be used, and the Hausman test indicates that the fixed effects model should be preferred to the random effects model. This paper therefore interprets the estimation results of the fixed effects model.
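As an illustration of the model selection step, a Hausman-type comparison of the fixed and random effects estimators can be sketched as below; this is only a sketch with hypothetical variable names (the F and LM tests are omitted, a constant is left out for simplicity, and the paper itself uses Stata 16).

import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS, RandomEffects

df = pd.read_csv("robots_pollution_panel.csv").set_index(["province", "year"])  # hypothetical file
exog = ["lnIR", "lnGDP", "URB", "EC", "GOV", "ER"]

fe = PanelOLS(df["lnWastewater"], df[exog], entity_effects=True).fit()
re = RandomEffects(df["lnWastewater"], df[exog]).fit()

# Hausman statistic: H = (b_FE - b_RE)' [V_FE - V_RE]^{-1} (b_FE - b_RE), chi-squared with k d.f.
b = (fe.params - re.params).values
v = (fe.cov - re.cov).values
hausman = float(b @ np.linalg.inv(v) @ b)
print(f"Hausman statistic: {hausman:.2f} with {len(b)} degrees of freedom")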

Regarding the core explanatory variable, industrial robots have a significant negative impact on the emissions of industrial wastewater, industrial SO 2 and industrial soot. Specifically, industrial robots have the greatest negative impact on industrial soot emissions, with a coefficient of − 0.277, significant at the 1% level. The negative impact on industrial wastewater discharge is second, with an estimated coefficient of − 0.242, also significant at the 1% level. The negative impact on industrial SO 2 emissions is the smallest, with an estimated coefficient of − 0.0875, significant at the 10% level. Compared with industrial wastewater and SO 2 , industrial robots have some unique advantages in reducing industrial soot emissions. Firstly, in terms of emission sources, industrial soot mainly comes from physical processes such as cutting, which can be significantly improved through the precise control of industrial robots. Industrial SO 2 comes from combustion processes and industrial wastewater originates from a wide variety of industrial processes, which industrial robots find difficult to control directly. Secondly, in terms of source control and end-of-pipe treatment, industrial robots can reduce excessive processing and the waste of raw materials, thereby controlling industrial soot emissions at the source. For industrial SO 2 and industrial wastewater, industrial robots mainly play a role in end-of-pipe treatment; since this treatment often involves complex chemical processes in which robot technology cannot yet fully participate, the impact of industrial robots on industrial SO 2 and industrial wastewater is more limited than on industrial soot.

Regarding the control variables, the level of economic development has a significant inhibitory effect on industrial SO 2 emissions. The higher the level of economic development, the stronger residents' awareness of environmental protection, which constrains the pollution behavior of enterprises; the government also adopts strict policies to control pollutant emissions. The impact of the urbanization level on the discharge of industrial wastewater, industrial SO 2 and industrial soot is significantly negative, because higher urbanization improves the efficiency of resource sharing and the centralized treatment of pollutants. The industrial structure significantly reduces industrial SO 2 and industrial soot emissions, since its upgrading both reduces the demand for energy and improves the efficiency of resource utilization. The degree of government intervention only significantly reduces the discharge of industrial wastewater; a possible reason is that, to promote economic development, the government invests more in high-yield areas, which crowds out investment in the environmental field. Similar to the degree of government intervention, environmental regulation has a negative impact on industrial wastewater discharge. Environmental governance investment has given little support to enterprises' clean technology research, so pollution control investment has not produced a strong emission reduction effect.

Mediation effect regression results

Green technology innovation.

Table 5 reports the results of the mediating effect model when green technology innovation is the mediating variable. Industrial robots have a positive impact on green technology innovation: for every 1% increase in the installation density of industrial robots, the level of green technology innovation increases by 0.722%. After adding green technology innovation, the estimated coefficient of industrial robots decreases, which shows that the mediating variable is effective.

In the impact of industrial robots on industrial wastewater discharge, the mediating effect of green technology innovation accounts for 8.17% of the total effect. In the impact of industrial robots on industrial SO 2 emissions, the mediating effect of green technology innovation accounts for 11.8% of the total effect. In the impact of industrial robots on industrial soot emissions, the mediating effect of green technology innovation accounts for 3.72% of the total effect.

Employment skill structure

Table 6 reports the results of the mediating effect model when the employment skill structure is the mediating variable. Industrial robots have a positive impact on the employment skill structure: for every 1% increase in the installation density of industrial robots, the employment skill structure improves by 0.0837%. As with green technology innovation, the mediating variable of the employment skill structure is also effective.

In the impact of industrial robots on industrial wastewater discharge, the mediating effect of employment skill structure accounts for 6.67% of the total effect. In the impact of industrial robots on industrial SO 2 emissions, the mediating effect of employment skill structure accounts for 20.66% of the total effect. In the impact of industrial robots on industrial soot emissions, the mediating effect of employment skill structure accounts for 15.53% of the total effect.

Robustness test and endogeneity problem

Robustness test.

To ensure the robustness of the regression results, this paper conducts robustness tests by replacing the core explanatory variable, winsorizing the data and replacing the sample. Regarding the replacement of the core explanatory variable, the benchmark regression measures installation density using the stock of industrial robots; here, the stock is replaced with the number of industrial robot installations and the installation density is re-measured. Regarding winsorization, all variables are winsorized at the upper and lower 1% to eliminate the influence of extreme outliers. Regarding the replacement of the sample, the four municipalities are removed from the sample. The estimation results are shown in Table 7 . Industrial robots still have a significant negative impact on environmental pollution, which confirms the robustness of the benchmark regression results.
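A minimal sketch of the 1% two-sided winsorization used in this robustness check is shown below; the file and column names are hypothetical.

import pandas as pd

def winsorize(s: pd.Series, lower: float = 0.01, upper: float = 0.99) -> pd.Series:
    # Clip each variable at its 1st and 99th percentiles to limit the influence of extreme outliers.
    lo, hi = s.quantile(lower), s.quantile(upper)
    return s.clip(lower=lo, upper=hi)

df = pd.read_csv("robots_pollution_panel.csv")  # hypothetical file
cols = ["lnWastewater", "lnSO2", "lnSoot", "lnIR", "lnGDP", "URB", "EC", "GOV", "ER"]
df[cols] = df[cols].apply(winsorize)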

Endogeneity problem

Although the use of industrial robots can reduce environmental pollution, there may be reverse causality: enterprises may increase their use of industrial robots to meet emission reduction standards, which raises the use of industrial robots in a region. Because of this reverse causality, there is an endogeneity problem between industrial robots and environmental pollution that cannot be ignored.

To address the impact of endogeneity on the estimation results, this paper uses the instrumental variable method. Following the selection criteria for instrumental variables, the installation density of industrial robots in the United States is selected as the instrument. The trend of the US installation density during the sample period is similar to that of China, which satisfies the relevance requirement. The application of industrial robots in the United States is rarely affected by China's economic and social factors and cannot directly affect China's environmental pollution, which satisfies the exogeneity requirement.
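A minimal sketch of the two-stage least squares estimation, with the US installation density as the instrument, is shown below; the variable names are hypothetical and the province and year dummies are omitted for brevity.

import pandas as pd
from linearmodels.iv import IV2SLS

df = pd.read_csv("robots_pollution_panel.csv")  # hypothetical file
# lnIR is instrumented by the US installation density lnIR_US.
formula = "lnWastewater ~ 1 + lnGDP + URB + EC + GOV + ER + [lnIR ~ lnIR_US]"
result = IV2SLS.from_formula(formula, data=df).fit(cov_type="robust")
print(result.first_stage)  # first-stage relevance of the instrument
print(result.summary)      # second-stage coefficient on lnIR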

Table 8 reports the estimation results of the instrumental variable method. Column (1) presents the first-stage regression results: the estimated coefficient of the instrument is significantly positive, consistent with the relevance requirement. Columns (2), (3) and (4) present the second-stage regression results with industrial wastewater, industrial SO 2 and industrial soot emissions as the explained variables. The estimated coefficients of industrial robots are significantly negative, which again verifies the hypothesis that industrial robots can reduce environmental pollution. Compared with Table 4 , the absolute values of the estimated coefficients of industrial robots are smaller, indicating that endogeneity leads the baseline model to overestimate the emission reduction effect of industrial robots. The test results support the validity of the instrumental variable.

Panel quantile regression results

Traditional panel data models may obscure the differential impacts of industrial robots at specific pollution levels. To address this issue, this paper uses a panel quantile regression model to empirically analyze the effects of industrial robots at different levels of environmental pollution.
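A minimal sketch of the quantile regressions at the quantiles reported below is shown here, using statsmodels' QuantReg with province and year dummies as a stand-in for the panel quantile estimator; the variable names are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("robots_pollution_panel.csv")  # hypothetical file
formula = "lnWastewater ~ lnIR + lnGDP + URB + EC + GOV + ER + C(province) + C(year)"

for tau in [0.1, 0.25, 0.5, 0.75, 0.9]:
    res = smf.quantreg(formula, data=df).fit(q=tau)
    print(f"tau = {tau}: coefficient on lnIR = {res.params['lnIR']:.3f}")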

Table 9 shows that industrial robots have a negative impact on industrial wastewater discharge. As the quantile of industrial wastewater discharge increases, the regression coefficient of industrial robots follows a W-shaped pattern. Specifically, at the 0.1 quantile the coefficient is − 0.229, significant at the 1% level. At the 0.25 quantile the impact strengthens, with the coefficient falling from − 0.229 to − 0.256. At the 0.5 quantile the coefficient rises from − 0.256 to − 0.152; at the 0.75 quantile it falls from − 0.152 to − 0.211; and at the 0.9 quantile it rises from − 0.211 to − 0.188, so that for every 1% increase in the installation density of industrial robots, industrial wastewater discharge is reduced by 0.188%.

When industrial wastewater discharge is at a low percentile, the use of industrial robots can replace traditional production methods, reducing energy waste and wastewater discharge. As discharge increases, the production process becomes more complex, and industrial robots may be involved in high-pollution, high-emission production, diminishing their emission reduction effect. When discharge reaches high levels, pressured enterprises seek environmentally friendly production methods and use eco-friendly industrial robots to reduce wastewater discharge. As discharge continues to rise, enterprises tend to prioritize production efficiency over emission control, weakening the negative impact of industrial robots on wastewater discharge. When wastewater discharge is at a high percentile, enterprises should balance production efficiency and environmental protection by introducing eco-friendly industrial robots to reduce discharge.

Table 10 shows that with the increase of industrial SO 2 emission quantile level, the negative impact of industrial robots on industrial SO 2 emissions gradually increases. Specifically, when industrial SO 2 emissions are below 0.5 quantile, the impact of industrial robots on industrial SO 2 emissions is not significant. When the industrial SO 2 emissions are above 0.5 quantile, the negative impact of industrial robots on industrial SO 2 emissions gradually appears.

When industrial SO 2 emissions are at a low percentile, the application of industrial robots primarily aims to enhance production efficiency, not to reduce SO 2 emissions. Enterprises should invest in the development of eco-friendly industrial robots, ensuring they are readily available for deployment when a reduction in industrial SO 2 emissions is necessary. As industrial SO 2 emissions continue to rise, both the government and the public pay increasing attention to the issue of SO 2 emissions. To meet stringent environmental standards, enterprises begin to use industrial robots to optimize the production process, reduce reliance on sulfur fuels, and consequently decrease SO 2 emissions. Enterprises should regularly evaluate the emission reduction effectiveness of industrial robots, using the assessment data to upgrade and modify the robots’ emission reduction technologies.

Table 11 shows that with the increase of industrial soot emissions quantile level, the negative impact of industrial robots on industrial soot emissions gradually weakens. Specifically, when industrial soot emissions are below 0.75 quantile, industrial robots have a significant negative impact on industrial soot emissions. This negative effect decreases with the increase of industrial soot emissions. When the industrial soot emissions are above 0.75 quantile, the negative impact of industrial robots on industrial soot emissions gradually disappears.

When industrial soot emissions are at a low percentile, they come from a few sources easily managed by industrial robots. As industrial soot emissions increase, the sources become more diverse and complex, making it harder for industrial robots to control. Even with growing environmental awareness, it may take time to effectively use robots in high-emission production processes and control industrial soot emissions. Enterprises should focus on researching how to better integrate industrial robot technology with production processes that have high soot emission levels. The government should provide financial and technical support to enterprises, assisting them in using industrial robots more effectively for emission reduction.

Figure 9 intuitively reflects how the regression coefficient of industrial robots changes with industrial wastewater, industrial SO 2 and industrial soot emissions. Figure 9a shows that as industrial wastewater discharge increases, the regression coefficient of industrial robots follows a W-shaped pattern. Figure 9b shows that as industrial SO 2 emissions increase, the regression coefficient of industrial robots gradually decreases, so the negative impact of industrial robots on industrial SO 2 emissions gradually strengthens. Figure 9c shows that as industrial soot emissions increase, the regression coefficient of industrial robots gradually increases, so the negative impact on industrial soot emissions gradually weakens. Figure 9a, b and c confirm the estimation results of Tables 9 , 10 and 11 .

Figure 9. Change of quantile regression coefficient.

Heterogeneity analysis

Regional heterogeneity.

This paper divides China into eastern, central and western regions according to geographical location. The estimated results are shown in Table 12 . Industrial robots in the eastern region have the greatest negative impact on the three pollutants, followed by the central region, while industrial robots in the western region have the smallest negative impact. The use of industrial robots in the eastern region far exceeds that in the central and western regions, and the eastern region is also far ahead in human capital, technological innovation and financial support. Compared with the central and western regions, the labor substitution effect, the upgrading of sewage treatment equipment and the improvement in energy utilization efficiency brought by industrial robots are therefore more pronounced in the eastern region.

Time heterogeneity

The development of industrial robots is closely related to policy support 56 . In 2013, the Ministry of Industry and Information Technology issued the “ Guiding Opinions on Promoting the Development of Industrial Robot Industry ”. The document proposes that by 2020, 3 to 5 internationally competitive leading enterprises and 8 to 10 supporting industrial clusters should be cultivated, and that domestic robots should account for about 45% of the high-end robot market, which provides policy support for the development of industrial robots. Based on this, this paper divides the total sample into two periods, 2006–2012 and 2013–2019, and analyzes the heterogeneous impact of industrial robots on environmental pollution in each period. The estimation results are shown in Table 13 . Compared with 2006–2012, the emission reduction effect of industrial robots during 2013–2019 is greater.

Discussion

The use of industrial robots can effectively reduce environmental pollution, which is consistent with hypothesis 1. This is contrary to the findings of Luan et al. 22 , who believed that the use of industrial robots would exacerbate air pollution. The inconsistency may be due to differences in research focus, sample, and the maturity of industrial robot technology. In terms of research focus, this paper concentrates on the role of industrial robots in reducing pollutant emissions during industrial production, whereas their research focuses more on the energy consumption caused by the production and use of industrial robots, which could aggravate environmental pollution. In terms of the sample, this paper covers 30 Chinese provinces from 2006 to 2019, regions that are broadly consistent in economic development, industrial policy and environmental regulation; their sample covers 74 countries from 1993 to 2019, spanning different geographical, economic and industrial development stages, which affects the combined effect of robots on environmental pollution. In terms of technological maturity, industrial robot technology changed enormously between 1993 and 2019. In the early stages the technology was immature and could itself cause environmental pollution; in recent years it has matured and become more environmentally friendly. This paper mainly studies the mature stage of industrial robot technology, whereas their research covers the transition from immature to mature technology.

The primary reason the use of industrial robots can reduce environmental pollution is that robots substitute for labor, which reduces the raw materials needed for manual operation. For example, in industrial spraying in manufacturing, spraying robots can improve spraying quality and material utilization, thereby reducing the waste of raw materials caused by manual operation. Zhang et al. 57 argued that energy consumption has been the primary source of environmental pollution. Coal is the main energy source in China and the proportion of clean energy is low 58 : in 2022, clean energy such as natural gas, hydropower, wind power and solar power accounted for only 25.9% of China's total energy consumption, which can cause serious environmental pollution problems. Industrial robots can promote the use of clean energy in industrial production and the upgrading of the energy structure 24 . The reduction of raw material use and the upgrading of the energy structure control pollutant emissions in front-end production. On September 1, 2021, the World Economic Forum (WEF) released the report “ Using Artificial Intelligence to Accelerate Energy Transformation ”, which points out that industrial robots can upgrade pollution monitoring and sewage treatment equipment, reducing pollutant emissions in end-of-pipe treatment. Ye et al. 59 share the same viewpoint.

The use of industrial robots can reduce environmental pollution through green technology innovation, which is consistent with hypothesis 2. Industrial robots promote the integration of knowledge, which helps enterprises carry out green technology innovation activities. Jung et al. 60 suggested that industrial robots can also lower production costs, allowing companies to invest more in green technology research and raising the level of green technology innovation. Green technology innovation reduces environmental pollution in three ways. The first is the improvement of energy utilization efficiency. China's utilization efficiency of traditional energy sources such as coal is not high: the report “ 2013-Global Energy Industry Efficiency Research ” points out that China's energy utilization rate ranked only 74th in the world in 2013, and low energy efficiency brings serious environmental pollution problems 61 . Du et al. 62 found that innovation in green technologies such as clean coal can enhance energy efficiency and decrease environmental pollution. The second is the production of green products. Green technology innovation accelerates green and recyclable production processes, thereby reducing the pollutants generated in production. The third is the improvement of enterprises' competitive advantage. Green technology innovation enables enterprises to gain a greater competitive advantage in green development 63 ; the supply of environmentally friendly products increases, which both meets consumers' green consumption needs and reduces pollutant emissions.

Industrial robots can reduce environmental pollution by optimizing the employment skill structure, which is consistent with hypothesis 3. Autor et al. 64 contended that industrial robots replace conventional manual labor positions, reducing the demand for low-skilled labor. Industrial robots embody the development of digital intelligence, and as digital intelligence develops, enterprises' demand for high-skilled labor increases. Koch et al. 65 demonstrated that the use of industrial robots in Spanish manufacturing firms leads to an increase in the number of skilled workers. In February 2020, the Ministry of Human Resources and Social Security, the State Administration of Market Supervision and the National Bureau of Statistics jointly issued 16 new occupations, such as intelligent manufacturing engineering technicians, industrial Internet engineering technicians and virtual reality engineering technicians, which increase the demand for highly skilled labor. The reduction of low-skilled labor and the increase of high-skilled labor optimize the employment skill structure. This optimization narrows the wage gap between employees, reducing communication costs; employees learn from and exchange technology with each other, which improves both the absorption of clean technology and the production efficiency of enterprises. Higher profits allow enterprises to devote more funds to clean technology research and development, thereby reducing environmental pollution.

Conclusions and policy recommendations

Based on panel data for 30 Chinese provinces from 2006 to 2019, this paper uses a panel data model and a mediating effect model to empirically test the impact of industrial robots on environmental pollution and its transmission mechanisms, and uses a panel quantile model together with regional and time subsamples to analyze the heterogeneity of this impact. The conclusions are as follows:

(1) Industrial robots can significantly reduce environmental pollution. For every 1% increase in the installation density of industrial robots, industrial wastewater discharge, industrial SO 2 emissions and industrial soot emissions decrease by 0.242%, 0.0875% and 0.277%, respectively. This finding is contrary to that of Luan et al. 22 , who argued that the use of industrial robots exacerbates air pollution; the results of this paper provide a contrasting perspective, highlighting the potential value of industrial robots in mitigating environmental pollution.

(2) Industrial robots can reduce environmental pollution by improving the level of green technology innovation and optimizing the employment skill structure. For industrial wastewater discharge, the mediating effects of green technology innovation and the employment skill structure account for 8.17% and 6.67% of the total effect, respectively; for industrial SO 2 emissions, 11.8% and 20.66%; and for industrial soot emissions, 3.72% and 15.53%. Obobisa et al. 66 and Zhang et al. 67 highlighted the role of green technological innovation in addressing environmental pollution, and Chiacchio et al. 68 and Dekle 69 focused on the effects of industrial robots on employment, but the mediating roles of technology and employment in the effect of robots on pollution had not been addressed. This research provides the first in-depth exploration of this intersection.

(3) The impact of industrial robots on environmental pollution differs across pollution levels. As industrial wastewater discharge increases, the impact of industrial robots follows a W-shaped pattern; as industrial SO 2 emissions increase, the negative impact of industrial robots gradually strengthens; on the contrary, as industrial soot emissions increase, the negative impact of industrial robots gradually weakens.

(4) Industrial robots in different regions and different periods have heterogeneous effects on environmental pollution. Regarding regional heterogeneity, industrial robots in the eastern region have the greatest negative impact on environmental pollution, followed by the central region, with the western region having the smallest. Regarding time heterogeneity, the negative impact of industrial robots on environmental pollution is greater in 2013–2019 than in 2006–2012. Chen et al. 5 and Li et al. 24 both examined the overarching impact of industrial robots on environmental pollution but did not consider how the effects vary across regions and time periods. Breaking away from the limitations of previous holistic approaches, this study offers a deeper understanding of the diverse environmental effects of industrial robots.

Based on the above conclusions, this paper suggests that the government and enterprises can promote emission reduction through industrial robots in the following ways.

Increase the scale of investment in the robot industry and promote its development. China's industrial robot ownership ranks first in the world, but its installation density is lower than that of developed countries such as the United States, Japan and South Korea. The Chinese government should give financial support to the robot industry to promote its development and thereby effectively reduce environmental pollution. R&D investment in industrial robots should be increased so that they can fully play their role in reducing raw material consumption and improving energy efficiency and sewage treatment capacity.

Give full play to the role of industrial robots in promoting green technology innovation. Since industrial robots can reduce environmental pollution through green technology innovation, their role in innovation should be highly valued and their advantages in knowledge integration and data processing fully utilized. Meanwhile, the government should support high-polluting enterprises that do not yet have industrial robots with capital, talent and technology, opening up channels for these enterprises to develop and improve clean technology by using industrial robots.

Give full play to the role of industrial robots in optimizing the employment skill structure. The use of industrial robots creates jobs with higher skill requirements and increases the demand for highly skilled talent, and China is relatively short of talent in emerging technology fields. Education departments should actively build disciplines related to industrial robots to provide talent support for high-skilled positions, and enterprises can improve the skill level of their existing labor force through on-the-job training and job competition.

Data availability

The datasets used or analyzed during the current study are available from Yanfang Liu on reasonable request.

Liu, Y. & Dong, F. How technological innovation impacts urban green economy efficiency in emerging economies: A case study of 278 Chinese cities. Resour. Conserv. Recycl. 169 , 105534 (2021).

Wang, Y. & Chen, X. Natural resource endowment and ecological efficiency in China: Revisiting resource curse in the context of ecological efficiency. Resour. Policy 66 , 101610 (2020).

Kampa, M. & Castanas, E. Human health effects of air pollution. Environ. Pollut. 151 , 362–367 (2008).

Feng, Y., Chen, H., Chen, Z., Wang, Y. & Wei, W. Has environmental information disclosure eased the economic inhibition of air pollution?. J. Clean. Prod. 284 , 125412 (2021).

Chen, Y., Cheng, L. & Lee, C.-C. How does the use of industrial robots affect the ecological footprint? International evidence. Ecol. Econ. 198 , 107483 (2022).

Krenz, A., Prettner, K. & Strulik, H. Robots, reshoring, and the lot of low-skilled workers. Eur. Econ. Rev. 136 , 103744 (2021).

Xu, C., Zhao, W., Zhang, M. & Cheng, B. Pollution haven or halo? The role of the energy transition in the impact of FDI on SO 2 emissions. Sci. Total Environ. 763 , 143002 (2021).

Yuan, H. et al. Influences and transmission mechanisms of financial agglomeration on environmental pollution. J. Environ. Manag. 303 , 114136 (2022).

Liu, G., Dong, X., Kong, Z. & Dong, K. Does national air quality monitoring reduce local air pollution? The case of PM 2.5 for China. J. Environ. Manag. 296 , 113232 (2021).

Ren, S., Hao, Y. & Wu, H. Digitalization and environment governance: Does internet development reduce environmental pollution?. J. Environ. Plan. Manag. 66 , 1533–1562 (2023).

Zhao, J., Zhao, Z. & Zhang, H. The impact of growth, energy and financial development on environmental pollution in China: New evidence from a spatial econometric analysis. Energy Econ. 93 , 104506 (2021).

Wang, H. & Liu, H. Foreign direct investment, environmental regulation, and environmental pollution: An empirical study based on threshold effects for different Chinese regions. Environ. Sci. Pollut. Res. 26 , 5394–5409 (2019).

Albulescu, C. T., Tiwari, A. K., Yoon, S.-M. & Kang, S. H. FDI, income, and environmental pollution in Latin America: Replication and extension using panel quantiles regression analysis. Energy Economics 84 , 104504 (2019).

Li, K., Fang, L. & He, L. How population and energy price affect China’s environmental pollution?. Energy Policy 129 , 386–396 (2019).

Liang, L., Wang, Z. & Li, J. The effect of urbanization on environmental pollution in rapidly developing urban agglomerations. J. Clean. Prod. 237 , 117649 (2019).

Sharma, R., Shahbaz, M., Kautish, P. & Vo, X. V. Does energy consumption reinforce environmental pollution? Evidence from emerging Asian economies. J. Environ. Manag. 297 , 113272 (2021).

Chen, F., Wang, M. & Pu, Z. The impact of technological innovation on air pollution: Firm-level evidence from China. Technol. Forecast. Soc. Change 177 , 121521 (2022).

Hao, Y., Chen, H. & Zhang, Q. Will income inequality affect environmental quality? Analysis based on China’s provincial panel data. Ecol. Ind. 67 , 533–542 (2016).

Liu, Q., Wang, S., Zhang, W., Zhan, D. & Li, J. Does foreign direct investment affect environmental pollution in China’s cities? A spatial econometric perspective. Sci. Total Environ. 613 , 521–529 (2018).

Mughal, N. et al. The role of technological innovation in environmental pollution, energy consumption and sustainable economic growth: Evidence from South Asian economies. Energy Strat. Rev. 39 , 100745 (2022).

Ghobakhloo, M. & Fathi, M. Industry 4.0 and opportunities for energy sustainability. J. Clean. Prod. 295 , 126427 (2021).

Luan, F., Yang, X., Chen, Y. & Regis, P. J. Industrial robots and air environment: A moderated mediation model of population density and energy consumption. Sustain. Prod. Consump. 30 , 870–888 (2022).

Wang, Q., Li, Y. & Li, R. Do industrial robots reduce carbon intensity? The role of natural resource rents and corruption control. Environ. Sci. Pollut. Res. https://doi.org/10.1007/s11356-023-29760-7 (2023).

Li, Y., Zhang, Y., Pan, A., Han, M. & Veglianti, E. Carbon emission reduction effects of industrial robot applications: Heterogeneity characteristics and influencing mechanisms. Technol. Soc. 70 , 102034 (2022).

Acemoglu, D. & Restrepo, P. Robots and jobs: Evidence from US labor markets. J. Polit. Econ. 128 , 2188–2244 (2020).

Liu, J., Liu, L., Qian, Y. & Song, S. The effect of artificial intelligence on carbon intensity: Evidence from China’s industrial sector. Socio Econ. Plan. Sci. 83 , 101002 (2022).

Lee, C.-C., Qin, S. & Li, Y. Does industrial robot application promote green technology innovation in the manufacturing industry?. Technol. Forecast. Soc. Change 183 , 121893 (2022).

Riza, M., Purba, H. H. & Mukhlisin. The implementation of economic order quantity for reducing inventory cost. Res. Logist. Prod. 8 , 207–216 (2018).

Tang, Z. & Ge, Y. CNN model optimization and intelligent balance model for material demand forecast. Int. J. Syst. Assur. Eng. Manag. 13 , 978–986 (2022).

Wang, Q. & Ren, S. Evaluation of green technology innovation efficiency in a regional context: A dynamic network slacks-based measuring approach. Technol. Forecast. Soc. Change 182 , 121836 (2022).

Chang, K., Liu, L., Luo, D. & Xing, K. The impact of green technology innovation on carbon dioxide emissions: The role of local environmental regulations. J. Environ. Manag. 340 , 117990 (2023).

Tu, Y. & Wu, W. How does green innovation improve enterprises’ competitive advantage? The role of organizational learning. Sustain. Prod. Consum. 26 , 504–516 (2021).

Dauth, W., Findeisen, S., Südekum, J. & Woessner, N. German robots-the impact of industrial robots on workers (2017).

Berger, N. & Fisher, P. A well-educated workforce is key to state prosperity. Economic Policy Institute 22 , 1–14 (2013).

Bourke, J. & Roper, S. AMT adoption and innovation: An investigation of dynamic and complementary effects. Technovation 55 , 42–55 (2016).

Mundlak, Y. On the pooling of time series and cross section data. Econometrica J. Econom. Soc. 46 , 69–85 (1978).

Sun, B., Li, J., Zhong, S. & Liang, T. Impact of digital finance on energy-based carbon intensity: Evidence from mediating effects perspective. J. Environ. Manag. 327 , 116832 (2023).

MacKinnon, D. P., Warsi, G. & Dwyer, J. H. A simulation study of mediated effect measures. Multivar. Behav. Res. 30 , 41–62 (1995).

Koenker, R. & Bassett, G. Jr. Regression quantiles. Econometrica J. Econom. Soc. 46 , 33–50 (1978).

Akram, R., Chen, F., Khalid, F., Ye, Z. & Majeed, M. T. Heterogeneous effects of energy efficiency and renewable energy on carbon emissions: Evidence from developing countries. J. Clean. Prod. 247 , 119122 (2020).

Pham, A.-D. & Ahn, H.-J. Rigid precision reducers for machining industrial robots. Int. J. Precis. Eng. Manuf. 22 , 1469–1486 (2021).

Du, L. & Lin, W. Does the application of industrial robots overcome the Solow paradox? Evidence from China. Technol. Soc. 68 , 101932 (2022).

Sun, H., Edziah, B. K., Sun, C. & Kporsu, A. K. Institutional quality, green innovation and energy efficiency. Energy Policy 135 , 111002 (2019).

Wang, X., Su, Z. & Mao, J. How does haze pollution affect green technology innovation? A tale of the government economic and environmental target constraints. J. Environ. Manag. 334 , 117473 (2023).

Tang, C., Huang, K. & Liu, Q. Robots and skill-biased development in employment structure: Evidence from China. Econ. Lett. 205 , 109960 (2021).

Cicatiello, L., Ercolano, S., Gaeta, G. L. & Pinto, M. Willingness to pay for environmental protection and the importance of pollutant industries in the regional economy. Evidence from Italy. Ecol. Econ. 177 , 106774 (2020).

Xie, Q., Xu, X. & Liu, X. Is there an EKC between economic growth and smog pollution in China? New evidence from semiparametric spatial autoregressive models. J. Clean. Prod. 220 , 873–883 (2019).

Liu, K. & Lin, B. Research on influencing factors of environmental pollution in China: A spatial econometric analysis. J. Clean. Prod. 206 , 356–364 (2019).

Wang, Y. & Wang, J. Does industrial agglomeration facilitate environmental performance: New evidence from urban China?. J. Environ. Manag. 248 , 109244 (2019).

Cheng, Z. & Hu, X. The effects of urbanization and urban sprawl on CO 2 emissions in China. Environ. Dev. Sustain. 25 , 1792–1808 (2023).

Hu, W., Tian, J. & Chen, L. An industrial structure adjustment model to facilitate high-quality development of an eco-industrial park. Sci. Total Environ. 766 , 142502 (2021).

Hao, Y. et al. Reexamining the relationships among urbanization, industrial structure, and environmental pollution in China—New evidence using the dynamic threshold panel model. Energy Rep. 6 , 28–39 (2020).

Guo, Y., Xia, X., Zhang, S. & Zhang, D. Environmental regulation, government R&D funding and green technology innovation: Evidence from China provincial data. Sustainability 10 , 940 (2018).

Ouyang, X., Li, Q. & Du, K. How does environmental regulation promote technological innovations in the industrial sector? Evidence from Chinese provincial panel data. Energy Policy 139 , 111310 (2020).

Zhang, W. & Li, G. Environmental decentralization, environmental protection investment, and green technology innovation. Environ. Sci. Pollut. Res. https://doi.org/10.1007/s11356-020-09849-z (2020).

Cheng, H., Jia, R., Li, D. & Li, H. The rise of robots in China. J. Econ. Perspect. 33 , 71–88 (2019).

Zhang, X. et al. Evaluating the relationships among economic growth, energy consumption, air emissions and air environmental protection investment in China. Renew. Sustain. Energy Rev. 18 , 259–270 (2013).

Jia, Z. & Lin, B. How to achieve the first step of the carbon-neutrality 2060 target in China: The coal substitution perspective. Energy 233 , 121179 (2021).

Ye, Z. et al. Tackling environmental challenges in pollution controls using artificial intelligence: A review. Sci. Total Environ. 699 , 134279 (2020).

Jung, J. H. & Lim, D.-G. Industrial robots, employment growth, and labor cost: A simultaneous equation analysis. Technol. Forecast. Soc. Change 159 , 120202 (2020).

Liu, H., Zhang, Z., Zhang, T. & Wang, L. Revisiting China’s provincial energy efficiency and its influencing factors. Energy 208 , 118361 (2020).

Du, K. & Li, J. Towards a green world: How do green technology innovations affect total-factor carbon productivity. Energy Policy 131 , 240–250 (2019).

Li, G., Wang, X., Su, S. & Su, Y. How green technological innovation ability influences enterprise competitiveness. Technol. Soc. 59 , 101136 (2019).

Autor, D. H., Levy, F. & Murnane, R. J. The skill content of recent technological change: An empirical exploration. Q. J. Econ. 118 , 1279–1333 (2003).

Koch, M., Manuylov, I. & Smolka, M. Robots and firms. Econ. J. 131 , 2553–2584 (2021).

Obobisa, E. S., Chen, H. & Mensah, I. A. The impact of green technological innovation and institutional quality on CO 2 emissions in African countries. Technol. Forecast. Soc. Change 180 , 121670 (2022).

Zhang, M. & Liu, Y. Influence of digital finance and green technology innovation on China’s carbon emission efficiency: Empirical analysis based on spatial metrology. Sci. Total Environ. 838 , 156463 (2022).

Chiacchio, F., Petropoulos, G. & Pichler, D. The impact of industrial robots on EU employment and wages: A local labour market approach (Bruegel working paper, 2018).

Dekle, R. Robots and industrial labor: Evidence from Japan. J. Jpn. Int. Econ. 58 , 101108 (2020).

Author information

Authors and affiliations.

Harbin Vocational College of Science and Technology, Harbin, 150300, Heilongjiang, People’s Republic of China

Yanfang Liu

Contributions

Y.L.: Conceptualization, Resources, Supervision, Methodology, Software. I have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Yanfang Liu .

Ethics declarations

Competing interests.

The author declares no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article.

Liu, Y. Impact of industrial robots on environmental pollution: evidence from China. Sci Rep 13 , 20769 (2023). https://doi.org/10.1038/s41598-023-47380-6

Download citation

Received : 24 July 2023

Accepted : 13 November 2023

Published : 26 November 2023

DOI : https://doi.org/10.1038/s41598-023-47380-6

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

By submitting a comment you agree to abide by our Terms and Community Guidelines . If you find something abusive or that does not comply with our terms or guidelines please flag it as inappropriate.

Quick links

  • Explore articles by subject
  • Guide to authors
  • Editorial policies

Sign up for the Nature Briefing: Anthropocene newsletter — what matters in anthropocene research, free to your inbox weekly.
