
NATURE INDEX | 12 October 2022

Growth in AI and robotics research accelerates

It may not be unusual for burgeoning areas of science, especially those related to rapid technological changes in society, to take off quickly, but even by these standards the rise of artificial intelligence (AI) has been impressive. Together with robotics, AI represents an increasingly significant portion of research volume at various levels, as these charts show.

Across the field

The number of AI and robotics papers published in the 82 high-quality science journals in the Nature Index (Count) has been rising year-on-year — so rapidly that it resembles an exponential growth curve. A similar increase is also happening more generally in journals and proceedings not included in the Nature Index, as is shown by data from the Dimensions database of research publications.

Bar charts comparing AI and robotics publications in Nature Index and Dimensions

Source: Nature Index, Dimensions. Data analysis by Catherine Cheung; infographic by Simon Baker, Tanner Maxwell and Benjamin Plackett

Leading countries

Five countries — the United States, China, the United Kingdom, Germany and France — had the highest AI and robotics Share in the Nature Index from 2015 to 2021, with the United States leading the pack. China has seen the largest percentage change (1,174%) in annual Share over the period among the five nations.

Line graph showing the rise in Share for the top 5 countries in AI and robotics

AI and robotics infiltration

As the field of AI and robotics research grows in its own right, leading institutions such as Harvard University in the United States have increased their Share in this area since 2015. But such leading institutions have also seen an expansion in the proportion of their overall index Share represented by research in AI and robotics. One possible explanation for this is that AI and robotics is expanding into other fields, creating interdisciplinary AI and robotics research.

Graphs showing Share of the 5 leading institutions in AI and robotics

Nature 610 , S9 (2022)

doi: https://doi.org/10.1038/d41586-022-03210-9

This article is part of Nature Index 2022 AI and robotics , an editorially independent supplement. Advertisers have no influence over the content.


IEEE Transactions on Robotics (T-RO)


The IEEE Transactions on Robotics (T-RO)  publishes research papers that represent major advances in the state-of-the-art in all areas of robotics. The Transactions welcomes original papers that report on any combination of theory, design, experimental studies, analysis, algorithms, and integration and application case studies involving all aspects of robotics. You can learn more about T-RO's scope, paper length policy, open access option, and preparation of papers for submission at the  Information for Authors page .

As of late May 2020, T-RO no longer has a "short paper" category for new submissions.  Papers that are short may still be published, but they are treated as Regular paper submissions, and they are subject to the same standards for significance.  Authors of short papers (8 pages or fewer) may consider our sister journal, the  IEEE Robotics and Automation Letters  (RA-L).



Presenting Your Transactions on Robotics Paper at ICRA, IROS, and CASE

Any IEEE Transactions on Robotics (T-RO) paper, other than communication items and survey papers, may be presented at an upcoming IEEE International Conference on Robotics and Automation (ICRA), IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), or IEEE International Conference on Automation Science and Engineering (CASE), provided most of the key ideas of the paper have never appeared at a conference with published proceedings (i.e., the paper is a "new" paper and not the evolved version of a previous conference paper or papers). For conference eligibility deadlines, see the RAS conference dates in the blue box above.

Authors may not request any acceleration or delay of the review process based on these criteria.

Upon final notification of acceptance, eligible papers will be offered an option, within the author's workspace on the PaperCept platform, to present at a conference. The prompt within the workspace will include an option to transfer the paper directly to the conference organizers. Authors will have a window of one month to select and accept the conference at which they will present. Authors are expected to pay the conference fee. Eligible papers may only be presented at one conference.

Historically papers in the Transactions on Robotics have been either "evolutionary" papers (papers extended, with new results, from previously presented conference papers by the same authors) or "new" direct-to-journal papers (papers that are not evolved from conference papers).  Since the introduction of the Robotics and Automation Letters (RA-L), the robotics community has demonstrated strong support for direct-to-journal papers (maximum of eight pages) with the possibility of presentation at a conference.

This IEEE RAS policy, adopted by AdCom in September 2017 and formalizing pilots of the policy at ICRA 2017 and 2018, provides a conference presentation option for "new" direct-to-journal T-RO papers.  Authors are no longer forced to write two versions of the paper (a short one for conference presentation and a longer one for the "final" journal version) if they want the work both to be presented at a conference and to appear in a journal.  This saves on author and reviewer effort, eliminates the confusion over which paper to cite, and reduces the stress on authors and reviewers arising due to submission deadlines for ICRA, IROS, or CASE. The new policy gives a new benefit to T-RO authors and brings high-quality T-RO papers to ICRA, IROS, or CASE without harming the traditional evolutionary model.

Is My Paper "Evolved" or "New?"

This initiative distinguishes between papers that have evolved directly from conference papers ("evolved" papers) and papers that have not ("new" direct-to-journal papers).  Of course the distinction is not always clear-cut, since almost all of one's research has evolved in some way from one's previous papers.

Below are some criteria to consider in the judgment of whether a paper is evolved or new.  If the answer to one or more of these questions is "yes," this is a good sign that your paper should be considered to be evolved.

  • Does the journal paper have the same title as the previous conference paper?
  • Is there a direct lineage from the conference paper(s) to the journal paper?
  • Typically a paper has one or a small number of key new ideas.  (There may be many supporting details.)  Does a majority of the key ideas in the T-RO paper appear in the previous conference paper(s)?
  • Would the T-RO paper have been rejected without the content of the previous conference paper(s)?
  • Does the T-RO paper use a significant amount of text, results, data, or figures from the previous conference paper(s)?

An advantage of having your paper be considered "evolved" is that you are free to incorporate much of the material from your conference paper(s) without penalty in the review process, provided the new paper provides a significant contribution beyond the conference paper(s) (see the guidance here for more details).  The disadvantage is that your "evolved" paper is not eligible for presentation at ICRA, IROS, or CASE.  The disadvantage of declaring your paper "new" is that you cannot reuse significant portions of the material from your conference paper(s), but the advantage is that the new paper (if accepted) is eligible for presentation at ICRA, IROS, or CASE.

Note that no submission can be considered to be "evolved" from a paper that previously appeared in a journal (including the IEEE Robotics and Automation Letters).

If you are in doubt, send your brief analysis along with the T-RO paper and the relevant conference paper(s) to the Editor-in-Chief for an evaluation.  It is unethical to withhold relevant previous conference paper(s) in this analysis.

IEEE Transactions on Robotics King-Sun Fu Memorial Best Paper Award

2022:  " Kimera-Multi: Robust, Distributed, Dense Metric-Semantic SLAM for Multi-Robot Systems "   by Yulun Tian; Yun Chang; Fernando Herrera Arias; Carlos Nieto-Granda; Jonathan P. How; Luca Carlone   vol. 38, no. 4, pp. 2022-2038, August 2022, [ Xplore Link ]

Honorable Mention

"Stabilization of Complementarity Systems via Contact-Aware Controllers"   [ Xplore Link ]

"Autonomous Cave Surveying With an Aerial Robot"   [ Xplore Link ]

"Prehensile Manipulation Planning: Modeling, Algorithms and Implementation"   [ Xplore Link ]

"Rock-and-Walk Manipulation: Object Locomotion by Passive Rolling Dynamics and Periodic Active Control"   [ Xplore Link ]

        "Origami-Inspired Soft Actuators for Stimulus Perception and Crawling Robot Applications"   [ Xplore Link ]

2021:  " Collision Resilient Insect-scale Soft-actuated Aerial Robots With High Agility "   by YuFeng Chen; Siyi Xu; Zhijian Ren; Pakpong Chirarattananon   vol. 37, no. 5, pp. 1752-1764, October 2021, [ Xplore Link ]

"A Backdrivable Kinematically Redundant (6+3)-dof Hybrid Parallel Robot for Intuitive Sensorless Physical Human-Robot Interaction"   [ Xplore Link ]

"Stochastic Dynamic Games in Belief Space"   [ Xplore Link ]

"ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM"   [ Xplore Link ]

"Active Interaction Force Control for Contact-Based Inspection with a Fully Actuated Aerial Vehicle"   [ Xplore Link ]

        "Distributed Certifiably Correct Pose-Graph Optimization"   [ Xplore Link ]

2020: "TossingBot: Learning to Throw Arbitrary Objects With Residual Physics"   by Andy Zeng; Shuran Song; Johnny Lee; Alberto Rodriguez; Thomas Funkhouser vol. 36, no. 4, pp. 1307-1319, August 2020, [ Xplore Link ]

"Design and Validation of a Powered Knee-Ankle Prosthesis With High-Torque, Low-Impedance Actuators"    [ Xplore Link ]

"Quantifying Hypothesis Space Misspecification in Learning From Human-Robot Demonstrations and Physical Corrections"    [ Xplore Link ]

"Teach-Repeat-Replan: A Complete and Robust System for Aggressive Flight in Complex Environments"    [ Xplore Link ]

"Deep Drone Racing: From Simulation to Reality With Domain Randomization"    [ Xplore Link ]

2019: "Active Learning of Dynamics for Data-Driven Control Using Koopman Operators"   by Ian Abraham and Todd D. Murphey   vol. 35, no. 5, pp. 1071-1083, October 2019, [ Xplore Link ]

2018: "Grasping Without Squeezing: Design and Modeling of Shear-Activated Grippers"   by Elliot Wright Hawkes, Hao Jiang, David L. Christensen, Amy K. Han, and Mark R. Cutkosky   vol. 34, no. 2, pp. 303-316, April 2018, [ Xplore Link ]

"Exploiting Elastic Energy Storage for “Blind” Cyclic Manipulation: Modeling, Stability Analysis, Control, and Experiments for Dribbling"   [ Xplore Link ]

"VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator"  [ Xplore Link ]

2017: "On-Manifold Preintegration for Real-Time Visual-Inertial Odometry"   by Christian Forster, Luca Carlone, Frank Dellaert, and Davide Scaramuzza   vol. 33, no. 1, pp. 1-21, February 2017, [ Xplore Link ]

2016: "Rapidly Exploring Random Cycles: Persistent Estimation of Spatiotemporal Fields With Multiple Sensing Robots"   by Xiaodong Lan and Mac Schwager   vol. 32, no. 5, pp. 1230-1244, October 2016, [ Xplore Link ]

2015:  " ORB-SLAM: A Versatile and Accurate Monocular SLAM System" by  Raul Mur-Artal, J. M. M. Montiel and Juan D. Tardos vol. 31, no. 5, pp. 1147-1163, 2015 [ Xplore Link ].

2014:  " Catching Objects in Flight" by  Seungsu Kim, Ashwini Shukla, Aude Billard vol. 30, no. 5, pp. 1049-1065, 2014 [ Xplore Link ].

2013: " Robots Driven by Compliant Actuators: Optimal Control under Actuation Constraints" by  David J. Braun, Florian Petit, Felix Huber, Sami Haddadin, Patrick van der Smagt, Alin Albu-Schäffer, Sethu Vijayakumar vol. 29, no. 5, pp. 1085-1101, 2013 [ Xplore Link ].

2012: " Reinforcement Learning With Sequences of Motion Primitives for Robust Manipulation" by  Freek Stulp, Evangelos A. Theodorou, Stefan Schaal vol. 28, no. 6, pp. 1360-1370, 2012 [ Xplore Link ].

2011: " Human-Like Adaptation of Force and Impedance in Stable and Unstable Interactions" by  Chenguang Yang, Gowrishankar Ganesh, Sami Haddadin, Sven Parusel, Alin Albu-Schaeffer, Etienne Burdet vol. 27, no. 5, pp. 918-930, 2011 [ Xplore Link ].

2010: " Design and Control of Concentric-Tube Robots" by  Pierre E. Dupont, Jesse Lock, Brandon Itkowitz, Evan Butler vol. 26, no. 2, pp. 209-225, 2010 [ Xplore Link ].

2009: " Vision-Aided Inertial Navigation for Spacecraft Entry, Descent, and Landing" by  Anastasios I. Mourikis, Nikolas Trawny, Stergios I. Roumeliotis, Andrew E. Johnson, Adnan Ansar, Larry Matthies vol. 25, no, 2, pp. 264-280, 2009 [ Xplore Link ].

2008: " Smooth Vertical Surface Climbing with Directional Adhesion" by  Sangbae Kim, Matthew Spenko, Salomon Trujillo, Barrett Heyneman, Daniel Santos, Mark R. Cutkosky vol. 24, no. 1, pp. 65-74, 2008 [ Xplore Link ].

2007: " Manipulation Planning for Deformable Linear Objects" by  Mitul Saha, Pekka Isto vol. 23, no. 6, pp. 1141-1150, 2007 [ Xplore Link ].

2006: " Exactly Sparse Delayed-State Filters for View-Based SLAM" by  Ryan M. Eustice, Hanumant Singh, John J. Leonard vol. 22, no. 6, pp. 1100-1114, 2006 [ Xplore Link ].

2005: " Active Filtering of Physiological Motion in Robotized Surgery Using Predictive Control" by  Romuald Ginhoux, Jacques Gangloff, Michel de Mathelin,Luc Soler, Mara M. Arenas Sanchez, Jacques Marescaux vol. 21, no. 1, pp. 67-79, 2005 [ Xplore Link ].

2004: " Reactive Path Deformation for Nonholonomic Mobile Robots" by  Florent Lamiraux, David Bonnafous, Olivier Lefebvre vol. 20, no. 6, pp. 967-977, 2004 [ Xplore Link ].


Systematic Review Article

Augmented Reality Meets Artificial Intelligence in Robotics: A Systematic Review


  • Vision and Robotics Lab, Department of Electrical and Computer Engineering, American University of Beirut, Beirut, Lebanon

Recently, advancements in computational machinery have facilitated the integration of artificial intelligence (AI) into almost every field and industry. This fast-paced development in AI and sensing technologies has stirred an evolution in the realm of robotics. Concurrently, augmented reality (AR) applications are providing solutions to a myriad of robotics applications, such as demystifying robot motion intent and supporting intuitive control and feedback. In this paper, research papers combining the potentials of AI and AR in robotics over the last decade are presented and systematically reviewed. Four sources were used for data collection: Google Scholar, the Scopus database, the International Conference on Robotics and Automation 2020 proceedings, and the references and citations of all identified papers. A total of 29 papers were analyzed from two perspectives: a theme-based perspective showcasing the relation between AR and AI, and an application-based analysis highlighting how the robotics application was affected. These two sections are further categorized based on the type of robotics platform and the type of robotics application, respectively. We analyze the work done and highlight some of the prevailing limitations hindering the field. Results also explain how AR and AI can be combined to solve the model-mismatch paradigm by creating a closed feedback loop between the user and the robot. This forms a solid base for increasing the efficiency of the robotic application and enhancing the user's situational awareness, safety, and acceptance of AI robots. Our findings affirm the promising future for robust integration of AR and AI in numerous robotic applications.

Introduction

Artificial intelligence (AI) is the science of empowering machines with human-like intelligence ( Nilsson, 2009 ). It is a broad branch of computer science that mimics human capabilities of functioning independently and intelligently ( Nilsson, 1998 ). Although AI concepts date back to the 1950s when Alan Turing proposed his famous Turing test ( Turing, 1950 ), its techniques and algorithms were abandoned for a while as the computational power needed was still insufficient. Recently, the advent of big data and the Internet of Things (IoT), supercomputers, and cheap accessible storage have paved the way for a long-awaited renaissance in artificial intelligence. Currently, research in AI is involved in many domains including robotics ( Le et al., 2018 ; Gonzalez-Billandon et al., 2019 ), natural language processing (NLP) ( Bouaziz et al., 2018 ; Mathews, 2019 ), and expert systems ( Livio and Hodhod, 2018 ; Nicolotti et al., 2019 ). It is becoming ubiquitous in almost every field that requires humans to perform intelligent tasks like detecting fraudulent transactions, diagnosing diseases, and driving cars on crowded streets.

Specifically, in the field of robotics, AI is optimizing a robot’s autonomy in planning tasks and interacting with the world. The AI robot offers a greater advantage over the conventional robot that can only apply pre-defined reflex actions ( Govers, 2018 ). AI robots can learn from experience, adapt to an environment, and make reasonable decisions based on their sensing capabilities. For example, research is now leveraging AI’s learning algorithms to make robots learn the best path to take for different cases ( Kim and Pineau, 2016 ; Singh and Thongam, 2019 ), NLP for an intuitive human-robot interaction ( Kahuttanaseth et al., 2018 ), and deep neural networks to develop an understanding of emotional intents in human-robot interactions (HRI) ( Chen et al., 2020a ; Chen et al., 2020b ). Computer vision is also another field of AI that has enhanced the perception and awareness of robots. It combines machine learning with image capture and analysis to support robot navigation and automatic inspection. This ability of a robot to possess self-awareness is facilitating the field of HRI ( Busch et al., 2017 ).

The field of robotics has also benefited from the rising technology of augmented reality (AR). AR expands a user's physical world by augmenting his/her view with digital information ( Van Krevelen and Poelman, 2010 ). AR devices are used to support the augmented interface and are classified into eye-wear devices like head-mounted displays (HMD) and glasses, handheld devices like tablets and mobile phones, and spatial projectors. Two other extended reality (XR) technologies need to be distinguished from AR: virtual reality (VR) and mixed reality (MR). Unlike AR, which augments information on a live view of the real world, VR simulates a 3D graphical environment entirely separate from the physical world and enables a human to interact with it naturally and intuitively ( Tzafestas, 2006 ). MR combines AR and VR, meaning that it merges physical and virtual environments ( Milgram and Kishino, 1994 ). Recently, the research sector witnessed a booming activity of integrating augmented reality in supporting robotics applications ( Makhataeva and Varol, 2020 ). These applications include robot-assisted surgery (RAS) ( Pessaux et al., 2015 ; Dickey et al., 2016 ), navigation and teleoperation ( Dias et al., 2015 ; Papachristos and Alexis, 2016 ; Yew et al., 2017 ), socially assistive robots ( Čaić et al., 2020 ), and human-robot collaboration ( Gurevich et al., 2015 ; Walker et al., 2018 ; Makhataeva et al., 2019 ; Wang and Rau, 2019 ). AR has also revolutionized the concepts of human-robot interaction (HRI) by providing a user-friendly medium for perception, interaction, and information exchange ( De Tommaso et al., 2012 ).

The preceding discussion affirms that the benefits of combining AI and AR in robotics are manifold, and special attention should be given to such efforts. There are several review papers highlighting the integration of augmented reality into robotics from different perspectives such as human-robot interaction ( Green et al., 2008 ; Williams et al., 2018 ), industrial robotics ( De Pace et al., 2020 ), robotic-assisted surgery (L. Qian et al., 2020 ), and others ( Makhataeva and Varol, 2020 ). Similarly, there exist papers addressing the potential of integrating artificial intelligence in robotics, as reviewed in Loh (2018) , De Pace et al. (2020) and Tussyadiah (2020) . A recent review ( Makhataeva and Varol, 2020 ) summarizes the work done at the intersection of AR and robotics, yet it only covers how augmented reality has been used within the context of robotics and does not touch on the intelligence in the system from the different perspectives highlighted in this paper. Similarly, another systematic review ( Norouzi et al., 2019 ) presented the convergence of three technologies: augmented reality, intelligent virtual agents, and the Internet of Things (IoT). However, it did not focus on robotics as the main intelligent system and even excluded agents having physical manifestations as humanoid robots. Consequently, this paper systematically reviews literature published over the past 10 years at the intersection of AI, AR, and robotics. The purpose of this review is to compile what has been previously done, analyze how augmented reality is supporting the integration of artificial intelligence in robotics and vice versa, and suggest prospective research opportunities. Ultimately, we contribute to future research by building a foundation on the current state of AR and AI in robotics, specifically addressing the following research questions:

1) What is the current state of the field on research incorporating both AR and AI in Robotics?

2) What are the various elements and disciplines of AR and AI used and how are they intertwined?

3) What are some of the current applications that have benefited from the inclusion of AR and AI? And how were these applications affected?

To the best of our knowledge, this is the first literature review combining AR and AI in robotics where papers are systematically collected, reviewed, and analyzed. A categorical analysis is presented, where papers are classified based on which technology supports the other, i.e., AR supporting AI or vice versa, all under the hood of robotics. We also classify papers into their respective robotic applications (for example, grasping) and explain how each application was improved. Research questions 1 and 2 are answered in Results , and research question 3 is answered in Discussion .

The remainder of the paper is organized according to the following sections: Methods, which specifies the survey methodology adopted as well as inclusion and exclusion criteria, Results, which presents descriptive statistics and analysis on the total number of selected papers in this review (29 papers), Discussion, which presents an analysis on each paper from different perspectives, and finally Concluding Remarks, which highlights key findings and proposes future research.

Methods

This paper follows a systematic approach in collecting literature. We adopt the systematic approach set forth in Pickering and Byrne (2014) , which is composed of 15 steps, as illustrated in Figure 1 .


FIGURE 1 . The adopted systematic approach in this review paper.

Steps 1 and 2 were explicitly identified in the Introduction . This section outlines the used keywords (step 3) and the used databases (step 4).

Search Strategy and Data Sources

Regarding keywords, this review targets papers that combine augmented reality with artificial intelligence in robotics. The first source used was Google Scholar, denoted by G. Initially, we excluded the words surgery and education (search keys G1, G2, and G3) to narrow down the total number of output papers, since there are already several papers reviewing AI robots in surgical applications ( Loh, 2018 ; Andras et al., 2020 ; Bhandari et al., 2020 ) and AI in education ( Azhar et al., 2020 ; Chen et al., 2020a ; Chen et al., 2020b ). Then, search keys G4 and G5 were used (where we re-included the terms "surgery" and "education") to cover a wider angle; these returned a large number of papers, of which we scrutinized only the first 35 pages of results. The second source of information was the Scopus database, denoted by S, for which two search keys were used, S1 and S2, and the third was the ICRA 2020 proceedings. Finally, the references and citations of the corresponding selected outputs from these three sources were checked.

The time range of this review includes papers spanning the years 2010 to 2020. Note that the process of paper collection for search keys G1, G2, G3, G4, S1, and S2 started on June 30th and ended on July 21st, 2020. The G5 search key was explored between August 11th and August 20th, 2020, and finally, the ICRA 2020 proceedings were explored between August 20th and August 31st, 2020.

Study Selection Criteria

The selection process was as follows: First, duplicates, patents, and non-English papers were excluded. Then, some papers were directly excluded by scanning their titles, while others were further evaluated by looking into their abstracts and keywords and downloading those that were relevant. Downloaded papers were then scanned by quickly going over their headings, sub-headings, figures, and conclusions. Starting from a total of 1,200, 329, and 1,483 papers from Google Scholar, the Scopus database, and the ICRA proceedings, respectively, the number of selected papers was funneled down to 13, 8, and 3 papers, respectively. After that, we looked into the references and citations of these 24 papers and selected a total of five additional papers. The inclusion and exclusion criteria were as follows:

Exclusion Criteria

• Papers with a non-English content

• Duplicate papers

• Patents

Inclusion Criteria

• The application should directly involve a robot

• Artificial Intelligence is involved in the Robotics Application. Although the words artificial intelligence and machine learning are used interchangeably in this paper, most of the cited work is more accurately a machine learning application. Artificial intelligence remains the broader concept of machines acting with intelligence and thinking as humans, with machine learning being the subset of algorithms mainly concerned with developing models based on data in order to identify patterns and make decisions.

• An Augmented Reality technology is utilized in the paper.

The process flow is also illustrated in Figure 2 according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines ( Moher et al., 2009 ).


FIGURE 2 . The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) chart.

Results

A total of 29 papers were selected and individually examined by checking their abstracts and conclusions and analyzing their main content. This section presents how the collected literature was classified into categories and provides some descriptive statistics visualized through figures and tables.

Categorization

There are two parallel categorizations in this paper: a theme-based categorization and an application-based categorization. Initially, all papers were grouped into two clusters based on a theme-based grouping of how AR and AI serve each other in a certain robotics application. The two distinguished clusters were as follows: AR supports AI, and AI supports AR. Each of these clusters is further explained below along with the total number of papers per group.

AR Supports AI (18 Papers)

This cluster groups papers in which a certain augmented reality visualization facilitates the integration of artificial intelligence in robotics. An example is an augmented reality application which provides visual feedback that aids in AI robot performance testing.

AI Supports AR (11 Papers)

This cluster groups papers in which the output of AI algorithms and neural networks supports an accurate display of augmented reality markers and visualizations.

Another notable pattern was observed among the 29 papers in terms of the specific robotics application that this AR-AI alliance serves. Consequently, a parallel categorization of the 29 reviewed articles was made, and three clusters were distinguished as follows:

Learning (12 Papers)

A robot learns to achieve a certain task, and the task is visualized to the human using AR. This category combines papers on learning from demonstration (LFD) and learning to augment human performance.

Planning (8 Papers)

A robot intelligently plans a certain path, task, or grasp, and the user can visualize robot information and feedback through AR.

Perception (9 Papers)

A robot depends on AI vision algorithms to localize itself or uses object detection and recognition to perceive the environment. AR serves here in identifying the robot’s intent.

Statistical Data

For the sake of analyzing the historical and geographical aspects of the reviewed topic, Figures 3 , 4 present the yearly and regional distribution of reviewed papers, respectively. Historically, the number of publications integrating AR and AI in robotics applications increased significantly between 2010 and 2020 (the 2020 count already equals that of 2019, even though the year had not ended at the time of writing), demonstrating the growing interest in combining the capabilities of AR and AI to solve many challenges in robotics applications. Regionally, the United States is the leading country in terms of the number of published articles, followed by Germany. Note that we only considered the country of the first author for each paper.


FIGURE 3 . The growing rate of published papers addressing our target topic over time.


FIGURE 4 . The distribution of reviewed papers over their countries of origin.

Additional quantitative data are detailed in Table 1 . For each article, the table identifies five types of information: the AR technology and platform, the type of robot platform, the AI algorithm used, and the cluster (from each of the two categorizations) to which it belongs. Overall, the most commonly used AR component is the HMD (48% of papers), mainly Microsoft HoloLens ( Microsoft HoloLens, 2020 ), Oculus Rift ( Oculus, 2021 ), or custom-designed headsets. This is followed by desktop-based monitors (28%) and AR applications on handheld tablets and mobile phones (21%). Projection-based spatial AR was the least implemented (3%), which can be explained by the added complexity of the setup and lack of mobility. The Unity3D game engine was the most commonly used for developing AR applications and visualizations, in comparison to Unreal Engine. Other options were using the Tango AR features supported by the Google Tango tablet or creating applications from scratch using the OpenGL graphics library. Regarding the type of robot used, aerial robots, such as UAVs and drones, were the least utilized (13%) in comparison to mobile robots (48%) and robotic arms (39%). Deep neural networks were the most investigated in the literature (52%), along with other state-of-the-art machine learning algorithms. Furthermore, the majority of papers were involved in creating visualizations that support AI integration in robotics, rather than implementing AI to enhance the augmented reality application in robotics.


TABLE 1 . Descriptive elements on the type of the used AR Component, robotics platform, AI component, and categorization for all reviewed papers.

Another set of distinctive features was extracted by analyzing three attributes, mainly the type of robot platform used, the type of AR technology employed, and the nature of the AI method performed, for each of the three robotics applications. The results are depicted in Figure 5 . The majority of papers (around 70%) that fall under the "Learning" category were using robot arms and manipulators as their robot platform. This is mainly because the Learning category reviews the learning from demonstration application, which is historically more common for industrial robotics applications in which a user demonstrates the trajectory of the end effector (EE) of a robot arm ( Billard et al., 2008 ; Mylonas et al., 2013 ; Zhu and Hu, 2018 ) than in the context of mobile robots ( Simões et al., 2020 ) or aerial robots ( Benbihi et al., 2019 ). On the other hand, around 70% of reviewed papers targeting robot "Perception" applications were using mobile robots. The reason is that vision-based localization algorithms are usually more ubiquitous for mobile robots ( Bonin-Font et al., 2008 ) compared to the other two platforms. The three robot platforms were almost equally distributed in the "Planning" category, with a relatively higher prevalence of mobile robots.


FIGURE 5 . The quantity distribution of three factors: Robot platform, AR technology, and AI method, over the three robot applications: Learning, Planning, and Perception.

Regarding the type of AR hardware/technology used, it was noted that the HMD was the most commonly used for all robotics applications covered, followed by the tablet or the desktop-based monitor. Spatial AR, or projection-based AR, was the least commonly used given its rigidness in terms of mobility and setup. As for the AI used, there was a variety of methods, including regression, support vector machines (SVM), and Q-learning. However, neural networks, including YOLO and SSD deep neural networks, were the most commonly used across the three robotics applications. Neural networks were utilized in 42, 25, and 80% of the reviewed papers in the learning, planning, and perception categories, respectively.

Augmented reality technology has created a new paradigm for human-robot interaction. By enabling a human-friendly visualization of how a robot perceives its environment, an improved human-in-the-loop model can be achieved ( Sidaoui et al., 2019 ; Gong et al., 2017 ). The use of AR technology for robotics has been elevated by the aid of several tools, mainly Vuforia Engine ( Patel et al., 2019 ; Makita et al., 2021 ; Comes et al., 2021 ), RosSharp ( Kästner and Lambrecht, 2019 ; Rosen et al., 2019 ; Qiu et al., 2021 ), ARCore ( Zhang et al., 2019 ; Chacko et al., 2020 ; Mallik and Kapila, 2020 ), and ARKit ( Feigl et al., 2020 ; McHenry et al., 2021 ). ARCore and ARKit are tools that have enhanced the AR experience for motion tracking, environmental understanding, and light estimation, among other features. RosSharp provides open-source software for communication between ROS and Unity, which has greatly facilitated the use of AR for robot applications and provides useful, easy-access functionalities, such as publishing and subscribing to topics and transferring URDF files.
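
On the Unity side these tools are C# libraries, but the ROS side of such a bridge is plain ROS code. As a rough, hypothetical sketch of what a robot might publish for an AR client (for example, one connected through ROS#/rosbridge) to consume, the following minimal rospy node publishes a pose topic; the topic name, rate, and pose values are assumptions for illustration only, not part of any reviewed system.

```python
# Minimal rospy sketch of the ROS side of a ROS-Unity bridge: publish a pose
# topic that a Unity/AR client (e.g. via ROS# and rosbridge) could subscribe to.
# Topic name, rate, and pose values are hypothetical; the Unity side is not shown.
import rospy
from geometry_msgs.msg import PoseStamped

def publish_robot_pose():
    rospy.init_node("ar_pose_publisher")
    pub = rospy.Publisher("/robot_pose_for_ar", PoseStamped, queue_size=10)
    rate = rospy.Rate(30)                      # 30 Hz, a typical rate for smooth AR overlays
    while not rospy.is_shutdown():
        msg = PoseStamped()
        msg.header.stamp = rospy.Time.now()
        msg.header.frame_id = "map"
        msg.pose.position.x = 1.0              # placeholder pose; would come from localization
        msg.pose.orientation.w = 1.0
        pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    publish_robot_pose()
```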

Discussion

In this section, the 29 papers are analyzed in the two parallel categorizations explained in Results : a theme-based analysis capturing the relation between AR and AI in a robotic context (AR supports AI, AI supports AR) and an application-based analysis focusing on how the robotic application itself was improved. We have also compiled a qualitative table ( Table 2 ) highlighting several important aspects of each paper. The highlighted aspects include the type of robot used, the nature of the experiment and number of human subjects, the human-robot interaction aspect, and the advantages, disadvantages, and limitations of integrating AR and AI.


TABLE 2 . Qualitative information and analysis of each paper.

Theme-Based Analysis

The two themes highlighted here depend on the nature of the AR-AI alliance. Consequently, 18 papers in which an augmented reality technology is facilitating the integration of AI to robotics are reviewed under the “AR supports AI” theme, and 11 papers in which AI has been integrated to enhance the AR experience for a certain robotics application are reviewed under the “AI supports AR” theme.

AR Supports AI

In this cluster, augmented reality is used as an interface to facilitate AI, such as visualizing the output of AI algorithms in real-time. Papers are grouped depending on the type of robotic platform used: mobile robots, robotic arms, or aerial robots. Some papers contain both and are categorized based on the more relevant type.

Mobile Robots

An AR interface was developed in El Hafi et al. (2020) for an intelligent robotic system to improve the interaction of service robots with non-technical employees and customers in a retail store. The robot performs unsupervised learning to autonomously form multimodal place categorization from a user’s language command inputs and associates them to spatial concepts. The interface provided by an HMD enables the employee to monitor the robot’s training in real-time and confirm its AI status.

After investigating possible interfaces that allow user-friendly interactive teaching of a robot’s virtual borders ( Sprute et al., 2019a ), the authors in Sprute et al. (2019b) used a Google-Tango tablet to develop an AR application which prompts the user to specify virtual points on a live video of the environment from the tablet’s camera. The used system incorporates a Learning and Support Module which learns from previous user-interactions and supports users through recommending new virtual borders. The borders will be augmented on the live stream and the user can directly select and integrate them to the Occupancy Grid Map (OGM).

An augmented reality framework was proposed in Muvva et al. (2017) to provide a cost-effective medium for training a robot an optimal policy using Q-learning. The authors used ODG-R7 glasses to augment virtual objects at locations specified by fiducial markers. A CMU pixy sensor was used to detect both physical and virtual objects.
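
For reference, the tabular Q-learning at the core of such training setups is compact. The sketch below is a generic illustration with a placeholder grid-world environment; the states, actions, and reward are hypothetical and not taken from the cited system.

```python
import numpy as np

# Minimal tabular Q-learning sketch (illustrative; states, actions and the
# reward function are placeholders, not the setup used in the cited paper).
n_states, n_actions = 25, 4          # e.g. a 5x5 grid world, 4 move directions
alpha, gamma, epsilon = 0.1, 0.95, 0.1
Q = np.zeros((n_states, n_actions))

def step(state, action):
    """Hypothetical environment: returns (next_state, reward, done)."""
    next_state = (state + action) % n_states       # placeholder transition
    reward = 1.0 if next_state == n_states - 1 else -0.01
    return next_state, reward, next_state == n_states - 1

for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = np.random.randint(n_actions) if np.random.rand() < epsilon else int(np.argmax(Q[s]))
        s_next, r, done = step(s, a)
        # Q-learning update: move Q(s, a) toward the bootstrapped target
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next
```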

An AR mobile application was developed in Tay et al. (n.d.) that can inform the user of specific motion abnormalities of a Turtlebot, predict their causes, and indicate future failure. This information will be augmented on the live video of a mobile phone and sent to the user via email. The system uses the robot’s IMU data to train a gradient boosting algorithm which classifies the state of the motor into fault conditions indicating the level of balancing of the robot (tilting). This system decreases the downtime of the robot and the time spent on troubleshooting.
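
A gradient-boosted fault classifier of this kind can be prototyped in a few lines. The sketch below uses scikit-learn on synthetic IMU-style features; the feature definitions and labels are stand-ins, not the data of the cited work.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Illustrative sketch: classify motor/tilt fault conditions from IMU features.
# Features and labels are synthetic placeholders, not the cited dataset.
rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 6))                                    # e.g. summary stats of roll/pitch/yaw rates
y = (np.abs(X[:, 0]) + np.abs(X[:, 1]) > 1.5).astype(int)      # 1 = "tilting" fault, 0 = normal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier(n_estimators=200, max_depth=3).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```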

The authors in Corotan and Irgen-Gioro (2019) investigated the capabilities of augmented reality (ARCore) as an all-in-one solution for localization, indoor routing, and obstacle detection. The application runs on a Google Pixel smartphone, which acts as both the controller (through a three-view user interface) and the sensor. Using its on-board localization features, an optimal path is planned from a starting position to an end position based on a Q-learning algorithm.

Omidshafiei et al. ( Measurable Augmented Reality for Prototyping Cyberphysical Systems, 2016 ) implemented an AR environment that provides visual feedback of hidden information to assist users in hardware prototyping and testing of learning and planning algorithms. In this framework, a ceiling-mounted projection system augments the physical environment in the laboratory with specific mission-related features, such as visualizing the state observation probabilities. In this system, the tracking of mobile and aerial robots is based on motion-capture cameras. Similarly, Hastie et al. (2018) presented the MIRIAM interface developed by the ORCA Hub: a user-centered interface that supports on-demand explainable AI through natural language processing and AR visualizations.

Robotic Arms

An Android mobile AR application was developed in Dias et al. (2020) as a training interface for a multi-robot system to perform a task variant. The tablet acts as a data collection interface based on the captured input demonstrations of several users. The application visualizes detected robots (using AR markers) and enables each user to construct a toy building of their choice through sequential tasks. Deep Q-learning ( Hester et al., 2017 ) has been employed to learn from the sequence of user demonstrations, predict valid variants for the given complex task, and achieve this task through a team of robots. The accuracy achieved in task prediction was around 80%.

The authors in Warrier and Devasia (2018) implemented a Complex Gaussian Process Regression model to learn the intent of a novice user during his/her teaching of the End Effector (EE) position trajectory. A Kinect camera captures the user’s motion, and an AR HMD visualizes the desired trajectory versus the demonstrated trajectory, which allows the operator to estimate the error (i.e., difference between the two trajectories) and correct accordingly. This approach was tested by a single operator and showed a 20% decrease in the tracking errors of demonstrations compared to manual tracking.
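
As a simplified illustration of the idea, standard Gaussian process regression (not the complex-valued formulation of the cited paper) can smooth a noisy demonstrated trajectory into an estimate of the intended one, whose deviation from the raw demonstration is the error shown to the operator. The data below are synthetic.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Sketch: smooth a noisy demonstrated end-effector trajectory with GP regression.
t = np.linspace(0, 1, 50)[:, None]                                   # time stamps of the demonstration
demo = np.sin(2 * np.pi * t).ravel() + 0.05 * np.random.randn(50)    # noisy 1-D EE coordinate

gp = GaussianProcessRegressor(kernel=RBF(0.1) + WhiteKernel(1e-3), normalize_y=True)
gp.fit(t, demo)
estimated_intent, std = gp.predict(t, return_std=True)   # inferred intended trajectory
tracking_error = np.abs(estimated_intent - demo)          # gap that could be shown via AR
```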

AR solutions were investigated in Ong et al. (2010) to correct for model mismatches in programming by demonstration (PBD), where they used an HMD as a feedback and data collection interface for robot path planning in an unknown environment. The user moves a virtual robot (a probe with AR markers) along a desired 3D curve with a consistent orientation, while evaluating the drawn curve using AR. The collected data points are then fed to a three-stage curve learning method, which increased the accuracy of the desired curve. The system was further enhanced in Fang et al. (2013) through considering robot dynamics, basically the end effector (EE) orientation. Once the output curve is generated, a collision-free volume (CFV) is displayed and augmented on a desktop screen to the user who can select control points for EE orientation. Some limitations in the proposed interface were found, such as the difficulty in aligning the virtual robot with the interactive tool, occluding markers or moving them out of the camera’s view, and selecting inclination angles that are not within range, causing the EE to disappear from the display. Consequently, the used AR visual cues were further developed for a robust HRI in Fang et al. (2014) , such as the use of virtual cones to define the orientation range of the EE, colors to distinguish dataset points, control points, and points outside the range of the CFV, and an augmented path rendered by a set of selected control points.

A HoloLens HMD was also used in Liu et al. (2018) as an AR interface in the process of teaching interpretable knowledge to a 7-DoF Baxter robot. The full tree of robot coordinate frames (TF) and the latent force data were augmented on the physical robot. The display also allows the user to turn on a view of the robot's learned knowledge, represented by a "Temporal And-Or graph," which presents live feedback of the current knowledge and the future states of the robot.

A semi-automatic object labeling method was developed in De Gregorio et al. (2020) based on an AR pen and a 2D tracking camera system mounted on the arm. In this method, a user first outlines objects with virtual boxes using an AR pen (covered with markers) and a robot acquires different camera poses through scanning the environment. These images are used to augment bounding boxes on a GUI which enables the user to refine them.

The authors in Gadre (2018) implemented a training interface facilitated by Microsoft HoloLens for learning from demonstration. The user can control the EE position by clicking commands on a transparent sphere augmented on the EE and use voice commands to start and end the recording of the demonstration. By clicking on the sphere at a specific EE position, the user makes the system store it as a critical point (CP) and augment a transparent hologram of the robot at that position as a visual reminder of all saved CPs. The saved CPs are then used to learn a Dynamic Movement Primitive (DMP).

A spatial programming by demonstration (PBD) called GhostAR was developed in Cao et al. (2019) , which captures the real-time motion of the human, feeds it to a dynamic time warping (DTW) algorithm which maps it to an authored human motion, and outputs corresponding robot actions in a human-lead robot-assist scenario. The captured human motions and the corresponding robot actions are saved and visualized to the user who can observe the complete demonstration with saved AR ghosts of both the human and robot and interactively perform edits on robot actions to clarify user intent.
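
Dynamic time warping itself is a standard alignment algorithm; a minimal reference implementation is sketched below on hypothetical pose sequences (illustrative only, not the GhostAR code).

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Classic dynamic time warping between two motion sequences.

    seq_a, seq_b: arrays of shape (length, dim), e.g. captured vs. authored
    human poses. Returns the cumulative alignment cost.
    """
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            # extend the cheapest of the three admissible alignment moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Example: align a live capture against an authored reference motion
live = np.random.rand(40, 3)          # hypothetical captured wrist positions
reference = np.random.rand(60, 3)     # hypothetical authored motion
print(dtw_distance(live, reference))
```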

The authors in Zhang et al. (2020) created the Dex-Net deep grasp planner, a distributed open-source pipeline that can predict 100 potential grasps from the object’s depth image based on a pre-trained Grasp Quality CNN. The grasp with the highest Quality value will be overlaid on the object’s depth map and visualized on the object through an AR application interface provided by ARKit. The system was able to produce optimal grasps in cases where the top-down approach doesn’t detect the object’s complex geometry.
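
The underlying pattern, sampling many grasp candidates and keeping the one a learned quality model scores highest, can be sketched generically as follows. Here quality_model is a hypothetical stand-in for a trained grasp-quality network; this is not the Dex-Net API.

```python
import numpy as np

# Illustrative "sample candidates, score, keep the best" pattern behind deep
# grasp planners. quality_model is a placeholder for a trained network.
def quality_model(depth_image, grasp):
    """Hypothetical scorer: returns a grasp-quality value in [0, 1]."""
    return float(np.clip(1.0 - abs(grasp["angle"]) / np.pi, 0.0, 1.0))

def plan_best_grasp(depth_image, n_candidates=100):
    candidates = [
        {"u": np.random.randint(depth_image.shape[1]),   # pixel column
         "v": np.random.randint(depth_image.shape[0]),   # pixel row
         "angle": np.random.uniform(-np.pi, np.pi)}      # gripper rotation
        for _ in range(n_candidates)
    ]
    scores = [quality_model(depth_image, g) for g in candidates]
    best = candidates[int(np.argmax(scores))]
    return best, max(scores)        # pose to overlay in the AR view, plus its quality

grasp, q = plan_best_grasp(np.random.rand(480, 640))
```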

An AR assistive-grasping system was implemented in Weisz et al. (2017) that can be used by impaired individuals in cluttered scenes. The system is facilitated by a surface electromyography (sEMG) input device (a facial muscle signal) and can be evaluated using an augmented reality desktop-based display of the grasping process. The interface allows a visualization of the planned grasp. The probabilistic road map planner ( Kavraki et al., 1996 ) was used to verify the reachability of an object and a K-nearest neighbor (KNN) classifier for classifying objects into reachable and unreachable.
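
The reachability classification step can be illustrated with scikit-learn's K-nearest-neighbour classifier; the positions and labels below are synthetic placeholders, whereas in the cited system the labels come from the road map planner.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Sketch: label object positions as reachable (1) or unreachable (0) with KNN.
# Training data here are synthetic stand-ins for planner-verified examples.
positions = np.random.uniform(-1.0, 1.0, size=(200, 3))           # object positions (m)
reachable = (np.linalg.norm(positions, axis=1) < 0.7).astype(int)  # toy ground truth

knn = KNeighborsClassifier(n_neighbors=5).fit(positions, reachable)
print(knn.predict([[0.2, 0.1, 0.3]]))   # 1 -> could be highlighted as graspable in the AR view
```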

The authors in Chakraborti et al. (2017) proposed combining AR technology with electroencephalographic (EEG) signals to enhance Human-robot collaboration specifically in shared workspaces. Two AR interaction modalities were implemented via an HMD. The first facilitates the human-in-the-loop task planning while the other enhances situational awareness. Through observing the emotions from EEG signals, the robot can be trained through reinforcement learning to understand the user’s preferences and learn the process of human-aware task planning.

Aerial Robots

A teleoperation system was developed in Zein et al. (2020) that recognizes specific desired motions from the user's joystick input and accordingly offers to auto-complete the predicted motion through an augmented user interface. The proposed system was tested in Gazebo using a simulated Parrot AR.Drone 2.0 and performed better than manual steering by 14.8, 16.4, and 7.7% for the average distance, time, and Hausdorff metric, respectively.

The authors in Bentz et al. (2019) implemented a system in which an aerial collaborative robot feeds the head-motion data of a human performing a multitasking job to an Expectation-Maximization algorithm that learns which environment views have the highest visual interest to the user. Consequently, the co-robot is directed to capture these relevant views through its camera, and an AR HMD supplements the human's field of view with these views when needed.
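
One common way to realize such an EM step is to fit a Gaussian mixture over the sampled view directions and treat the dominant component as the region of highest interest. The sketch below is illustrative; the two-angle parameterization and the data are assumptions, not the cited model.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Sketch: use EM (via a Gaussian mixture) to find regions of high visual interest
# from head-orientation samples. Synthetic data, illustrative only.
yaw_pitch = np.vstack([
    np.random.normal([0.5, 0.1], 0.05, size=(300, 2)),    # frequently viewed region
    np.random.normal([-1.0, 0.3], 0.3, size=(100, 2)),    # occasional glances
])
gmm = GaussianMixture(n_components=2, random_state=0).fit(yaw_pitch)
most_viewed = gmm.means_[int(np.argmax(gmm.weights_))]
print("direct the co-robot camera toward yaw/pitch:", most_viewed)
```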

Overall, the advantages of augmented reality in facilitating the integration of AI into robotics applications are manifold. AR technologies can provide a user-friendly and intuitive medium to visualize the learning process and provide the live learned state of the robot. They also provide a medium for the robot to share its present and future intent, such as the robot's perceived knowledge and its planned actions based on its AI algorithms. Although AR HMDs - such as those provided by Microsoft HoloLens and Oculus Rift - are the most commonly used for an intuitive HRI, they still have their limitations, such as a narrow field of view (FOV) and impractical weight. Other AR interfaces used included mobile phones, tablets, and desktop displays. The latter is more practical in simulations; otherwise, the user needs to split attention between the actual robot and the augmented display. Tablets and mobile phones are generally more intuitive but impractical in situations where the user has to use both hands. Spatial AR, also known as projection-based AR, is less used due to its mobility restrictions.

AI Supports AR

In this cluster, AI contributes to an accurate and more reliable augmented reality application, or interface, such as applying deep learning for detecting obstacles in the robot’s path. Papers are also grouped depending on the type of robotic platform used.

The authors in Ghiringhelli et al. (2014) implemented an AR overlay on the camera view of a multi-robot system. The system supports three types of information: textual, symbolic, and spatially situated. While the first two reveal insights about the internal state of each robot without considering its orientation or camera perspective, spatially situated information depends on how the robot perceives its surrounding environment and are augmented on each robot using its frame of reference. Properly augmenting information depends on a visual tracking algorithm that identifies robots from the blinking code of an onboard RGB LED.

In Wang et al. (2018) , the authors used deep learning to obtain the location of a target in the robot's view. The robot first runs simultaneous localization and mapping (SLAM) to localize itself and map the place in an urban search and rescue scenario. Once the robot detects a target in the area, an AR marker is placed at its global coordinate and displayed to the user on the augmented remote screen. Even when the detected target is not within the display, the location of the marker changes according to its position relative to the robot.
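
Anchoring the marker at a global coordinate amounts to transforming the detection from the robot (or camera) frame into the map frame using the SLAM pose estimate, for example with a homogeneous transform as sketched below (values are illustrative).

```python
import numpy as np

# Sketch: compute the map-frame coordinate of a detected target, given the
# robot pose estimated by SLAM. All numeric values are placeholders.
def pose_to_matrix(x, y, yaw):
    """Planar robot pose -> 4x4 homogeneous transform (map <- robot)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0, x],
                     [s,  c, 0, y],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

robot_in_map = pose_to_matrix(2.0, 1.5, np.pi / 4)       # from SLAM
target_in_robot = np.array([1.2, -0.3, 0.0, 1.0])        # from the detector, robot frame
target_in_map = robot_in_map @ target_in_robot           # global coordinate for the AR marker
print(target_in_map[:3])
```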

The authors in Kastner et al. (2020) developed a markerless calibration method between a HoloLens HMD and a mobile robot. The point cloud data acquired from the 3D depth sensor of the AR device are fed into a modified neural network based on VoteNet. Although the approach was feasible in terms of accurate localization and augmentation of the robot with a 3D bounding box, the intensive live processing of point cloud data was very slow: the user needed to stay still for two seconds while the neural network processed the incoming data, which can be impractical and lead to a bad user experience.

Alternatively, Kästner et al. (2020) investigated using the 2D RGB data provided by the HoloLens instead, which is relatively faster to process than 3D data and can be applied to any AR device. SSPE neural networks were deployed in order to localize the six DOF pose of a robot. Meanwhile, the resulting bounding boxes are augmented to the user, who can evaluate the live training process. This method is around 3% less accurate than the first one but almost 97% faster.

The authors in Puljiz et al. (2019) reviewed the referencing and object detection methods used in the robotics field in general and the referencing methods currently used between a robot and the HMD in particular. Based on this, authors proposed three referencing algorithms that can serve this particular domain: Semi-Automatic One Shot, Automatic One Shot, and Automatic Continuous. While the trials for the proposed automatic methods (based on neural networks) are still in their infancy, a detailed implementation of Semi-Automatic referencing (ICP and Super4PCS algorithms) was tested on a KUKA KR-5 robot. With a minimal user input - positioning a cube (a seed hologram) on the base of the robot and rotating its z-axis towards its front - the referenced robot will be augmented on the actual one via the Microsoft HoloLens display.

An AR teleoperation interface for a KUKA lightweight robot was implemented in Gradmann et al. (2018) using a Google Tango tablet. The interface allows the user to change the robot joint configuration, move the tool center point, and grasp and place objects. The application provides a preview of the future location of the robot by augmenting a corresponding virtual robot according to the new joint configuration. Object detection uses Tango's built-in depth camera and RGB camera and is based on the DBSCAN algorithm.
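
Density-based clustering of the depth data is one plausible way such object detection can work; the sketch below runs scikit-learn's DBSCAN on a synthetic point cloud and reports object centroids (parameters and data are placeholders, not those of the cited system).

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Illustrative sketch: segment a depth-camera point cloud into object candidates.
rng = np.random.default_rng(1)
points = np.vstack([
    rng.normal(loc=c, scale=0.02, size=(300, 3))            # three table-top "objects"
    for c in ([0.2, 0.0, 0.5], [0.5, 0.1, 0.4], [0.8, -0.1, 0.6])
])
labels = DBSCAN(eps=0.05, min_samples=20).fit_predict(points)

for obj_id in sorted(set(labels) - {-1}):                   # -1 marks noise points
    centroid = points[labels == obj_id].mean(axis=0)
    # each centroid could be previewed as a pick target in the AR interface
    print(f"object {obj_id}: centroid at {np.round(centroid, 3)}")
```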

The authors in Chu et al. (2008) used a Tongue Drive System as input for an assistive grasping system facilitated through an AR interface. The system uses the YOLO neural network ( Redmon et al., 2016 ) for object detection and a deep grasp algorithm ( Chu and Vela, 2018 ) for detecting the graspable locations on each object. This information (bounding boxes and grasp lines) is then augmented on the objects within the user’s FOV. Furthermore, a virtual menu presents the user with the robot affordances that can be performed.

A teleoperation surveillance system composed of an unmanned ground vehicle (UGV) and an unmanned aerial vehicle (UAV) was proposed in Sawarkar et al. (2016) for hostile environments. The IMU measurements of a VR goggle are used to control the rotation of a camera mounted on each vehicle. The live video stream is processed by a CNN to detect individuals and estimate the probability that each is a terrorist. This information is then augmented onto the user’s view through the goggle.

As the literature implies, artificial intelligence techniques are a powerful means of achieving robust visualization and an improved user experience. Traditional approaches to augmenting information on objects or targets rely mainly on fiducial AR markers, which are impractical in unfamiliar environments such as urban search and rescue (USAR) scenarios. On one hand, deep learning can improve the robot’s perception of its environment so that objects are detected and the related information is properly augmented on each of them. On the other hand, it can be used to localize the robot itself and reveal information about its live performance. A key consideration for these systems is the processing requirement relative to the capabilities of current hardware.

Application-Based Analysis

This section focuses on the areas in which AR and AI were applied. In other words, we explain how the challenges of a given robotics application - such as learning from demonstration or robot localization - were addressed by leveraging resources from augmented reality and artificial intelligence. We divide this analysis into three main headings: Learning (12 papers), Planning (8 papers), and Perception (9 papers). Tables 3 , 4 , and 5 summarize the advantages as well as the disadvantages and limitations of each method in the three subheadings, respectively.

TABLE 3 . The advantages as well as the disadvantages and limitations of each method in the Learning sub-heading.

TABLE 4 . The advantages as well as the disadvantages and limitations of each method in the Planning sub-heading.

TABLE 5 . The advantages as well as the disadvantages and limitations of each method in the Perception sub-heading.

In general terms, a robot is said to learn from its environment or from a human if it can develop novel skills from past experience and adapt to the situation at hand. Based on the collected literature, we divide the scope of learning into two basic paradigms: learning from demonstration and learning to augment human performance.

Learning From Demonstration

Robot learning from demonstration (LFD) is described as the ability of a robot to learn a policy - a mapping between the robot’s world state and the required actions - from a dataset of user-demonstrated behavior ( Argall et al., 2009 ). This dataset is called the training dataset and is formally composed of pairs of observations and actions. Training channels are therefore a bottleneck in such applications, and this is where augmented reality comes in handy. AR interfaces can serve as a means of demonstrating the required behavior and, more importantly, improve the overall process by clarifying user intent. The user can then intuitively understand the “robot intent” (i.e., how the robot is interpreting his/her demonstration). Conversely, AI can be used by the robot to learn the “user intent” (i.e., to understand what the user wants the robot to perform and adapt accordingly) and to visualize this intent through AR. The following analysis clarifies this within the context of LFD.
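
A minimal sketch of this formalism, under the assumption that the policy can be fitted as a plain supervised regressor over (observation, action) pairs, is shown below; the k-nearest-neighbour regressor and the toy data are illustrative choices only.

# Minimal sketch of LFD as supervised policy fitting: a policy maps observed world
# states to actions and is trained on demonstration pairs. The regressor is illustrative.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Demonstration dataset: pairs (observation, action). Here observations are 2D
# end-effector positions and actions are 2D velocity commands (toy data).
observations = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.1], [0.3, 0.2]])
actions      = np.array([[0.1, 0.0], [0.1, 0.1], [0.1, 0.1], [0.0, 0.1]])

policy = KNeighborsRegressor(n_neighbors=2).fit(observations, actions)

# The learned policy generalizes to a nearby, previously unseen state.
print(policy.predict([[0.15, 0.05]]))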

In Ong et al. (2010) and Fang et al. (2013 , 2014) , data points of the demonstrated trajectory (of a virtual robot) are collected, edited, and visualized through an HMD/GUI, allowing the user to intuitively convey the intended trajectory. These demonstrations are first parameterized using a Piecewise Linear Parameterization (PLP) algorithm, then fed to a Bayesian neural network (BNN), and finally reparameterized. The authors compared error metrics and showed that the proposed three-stage curve learning method (PLP, BNN, and reparameterization) improved the accuracy of the output curve much faster than the basic approach. Similarly, the authors in Gadre (2018) used the Microsoft HoloLens as an interface for data collection when demonstrating a desired curve for a real Baxter robot. The interface allows the user to interactively control a teleoperation sphere augmented on the robot EE. The environment is modeled as a Markov Decision Process, and the agent (robot) learns a Dynamic Movement Primitive based on the user-defined critical points. Data from demonstrations were processed through a least-squares fit. Although this methodology supports an intuitive interface for collecting training data, it was prone to errors because the real robot and the hologram did not always line up, causing locations to be represented inaccurately. Furthermore, the system was tested by only a single expert demonstrator.

In Warrier and Devasia (2018) , the authors trained a kernel-based regression model to predict the desired trajectory of the EE from a database of human-motor dynamics. By observing the human-motor actions collected through a Microsoft Kinect camera, the model can infer the user’s intended trajectory. A single trial allows the robot to infer a new desired trajectory, which is then visualized to the user through the HoloLens alongside the actually demonstrated trajectory. This allows the user to spatially correct the error by moving their hand (tracked using the Skeleton Tracking routine) to minimize the distance between the demonstrated and desired trajectories. Alternatively, the authors in Liu et al. (2018) captured demonstrations by tracking hand-object interactions with a LeapMotion sensor. The captured data are manually segmented into groups of atomic actions (such as pinch, twist, and pull) and then used to train a modified version of the unsupervised learning algorithm ADIOS (Automatic Distillation of Structure). This induces a Temporal And-Or Graph (AOG), a stochastic structural model that provides a hierarchical representation of entities. The AR interface then allows the user to interactively guide the robot without any physical interaction, for example by dragging the hologram of the virtual robot to a new pose.
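
The following sketch illustrates the general idea of kernel-based trajectory regression, assuming an RBF kernel ridge regressor and synthetic demonstration data; it is not the specific human-motor model of Warrier and Devasia (2018).

# Illustrative sketch: kernel (ridge) regression mapping time to a desired
# end-effector coordinate, fitted on a noisy demonstration. Kernel and
# hyperparameters are assumptions made for this example only.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

t = np.linspace(0, 1, 50).reshape(-1, 1)          # time stamps of the demonstration
demo = np.sin(2 * np.pi * t).ravel() + 0.05 * np.random.default_rng(1).normal(size=50)

model = KernelRidge(kernel="rbf", alpha=0.1, gamma=10.0).fit(t, demo)
inferred = model.predict(t)                        # smoothed, "intended" trajectory

# The inferred curve could then be rendered in the HMD against the raw demonstration
# so the user can correct the residual error by hand.
print(float(np.abs(inferred - demo).mean()))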

In Cao et al. (2019) , human motion is captured through the AR elements (an Oculus Rift and two Oculus Touch controllers) and saved as ghost holograms. Dynamic Time Warping is used to infer the human motion in real time from a previously compiled library of authorized human motions. The workflow of the proposed system consists of five modes: the Human Authoring Mode, in which the demonstrations are recorded; the Robot Authoring Mode, in which the user interactively authors the collaborative robot task; the Action Mode, in which the user performs the new collaborative task; and the Observation and Preview Modes, for visualizing saved holograms and an animation of the whole demonstration.
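
A minimal dynamic time warping sketch is given below to illustrate how an observed motion can be matched to the closest entry in a library of authorized motions; the one-dimensional signals and the nearest-library-entry rule are simplifying assumptions.

# Minimal dynamic time warping (DTW) sketch: an incoming motion (1D samples here,
# for brevity) is matched to the closest motion in a small library.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

library = {"wave": np.sin(np.linspace(0, 4, 30)),
           "reach": np.linspace(0, 1, 25)}
observed = np.sin(np.linspace(0, 4, 40)) + 0.05    # a resampled, slightly offset "wave"

best = min(library, key=lambda k: dtw_distance(observed, library[k]))
print(best)  # expected: "wave"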

A tablet was used in Dias et al. (2020) for data collection: the user is prompted to construct a toy building by controlling a multi-robot system consisting of two mobile robots that carry blocks of different types and one robot arm for pick and place, with the workspace represented as a grid of state cells. Given that the user can select among 135 possible actions to construct the toy, the application stores this data for training the DNN model. The model computes the posterior probability of the uncertain action (how the user is building the structure), predicting the move with the highest probability given the current state of the occupancy grid. Although the model performed successful task variants in 80% of the trials, the authors indicated that further work is needed to improve the prediction of sequential actions and to investigate more complex tasks.

Learning to Augment Human Performance

Machine learning opens a great avenue for improving the quality and efficiency of tasks performed by humans, such as maintenance and troubleshooting, multitasking work, or teleoperation. Here, AI is used to interpret data and provide suggestions that augment (improve) human performance on the task at hand. The following analysis examines the literature from this perspective, focusing on how each application was improved.

Multitasking is improved in Bentz et al. (2019) , where data from an HMD are fit to a model that identifies views of interest to the human, directs an aerial co-robot to capture these views, and augments them in the user’s display. The input data is the head pose collected through a VICON motion capture system. A function, modeled as a mixture of Gaussians, receives this data and estimates the human’s visual interest via expectation maximization (EM). Although the average time to complete the primary task increased by around 10–16 s, the head motions recorded throughout the experiment were reduced by around 0.47 s per subject.
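
As a hedged illustration of the interest-estimation step, the sketch below fits a Gaussian mixture (optimized by EM) to synthetic head-pose samples with scikit-learn; the number of components and the data are assumptions, not the setup of Bentz et al. (2019).

# Minimal sketch: fitting a Gaussian mixture by expectation maximization to head-pose
# samples to estimate regions of visual interest. Data and component count are toy choices.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Toy head-orientation samples (yaw, pitch in radians) clustered around two targets.
gaze = np.vstack([rng.normal([0.6, -0.1], 0.05, (300, 2)),
                  rng.normal([-0.4, 0.2], 0.05, (150, 2))])

gmm = GaussianMixture(n_components=2, covariance_type="full").fit(gaze)  # EM under the hood
print(gmm.means_)     # estimated centres of visual interest
print(gmm.weights_)   # relative attention given to each region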

In Tay et al. (n.d.) , the authors investigated two machine learning models trained on IMU sensor data from a Turtlebot to predict possible motor failures. SAS Visual Data Mining and Machine Learning (VDMML) was used to test whether a random forest model or gradient boosting would better track the balance (tilting) of the robot. Gradient boosting was chosen because it showed a lower average squared error in its predictions, using 315 decision trees and a maximum leaf size of 426.
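
The model comparison can be illustrated with open-source tools as follows; scikit-learn is used here as a stand-in for SAS VDMML, and the synthetic IMU features, target, and hyperparameters are assumptions.

# Illustrative comparison of a random forest and a gradient boosting regressor on
# IMU-style data, mirroring the model selection described for Tay et al. (n.d.).
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 6))                       # six IMU channels (accel + gyro)
y = X[:, 0] * 0.5 + np.sin(X[:, 3]) + rng.normal(scale=0.1, size=1000)  # tilt proxy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (RandomForestRegressor(n_estimators=200, random_state=0),
              GradientBoostingRegressor(n_estimators=300, random_state=0)):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(type(model).__name__, mean_squared_error(y_te, pred))  # pick the lower error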

An “Autocomplete” framework was proposed in Zein et al. (2020) to support novice users in teleoperating complex systems such as drones. The system takes the human input from a joystick, predicts the actual desired teleoperation command, and then shares it with the user through an augmented reality interface. The model used is an SVM trained on 794 motion examples to classify the input motion as one of a library of motion primitives, which currently comprises lines, arcs, 3D helices, and sine motions.
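
A minimal sketch of classifying an input motion into primitives with an SVM is shown below; the hand-crafted shape features and synthetic trajectories are illustrative assumptions rather than the features used by Zein et al. (2020).

# Minimal sketch: an SVM classifies a 2D joystick trajectory as one of a small
# library of motion primitives (only "line" and "arc" are shown here).
import numpy as np
from sklearn.svm import SVC

def features(traj):
    """Crude shape descriptor for a 2D trajectory of shape (T, 2)."""
    d = np.diff(traj, axis=0)
    headings = np.arctan2(d[:, 1], d[:, 0])
    return [np.std(headings), np.ptp(traj[:, 0]), np.ptp(traj[:, 1])]

rng = np.random.default_rng(4)
lines = [np.column_stack([np.linspace(0, 1, 50), np.zeros(50)])
         + rng.normal(0, 0.01, (50, 2)) for _ in range(20)]
arcs = [np.column_stack([np.cos(np.linspace(0, np.pi, 50)), np.sin(np.linspace(0, np.pi, 50))])
        + rng.normal(0, 0.01, (50, 2)) for _ in range(20)]

X = [features(t) for t in lines + arcs]
y = ["line"] * 20 + ["arc"] * 20
clf = SVC(kernel="rbf", probability=True).fit(X, y)
print(clf.predict([features(arcs[0])]))  # expected: ['arc']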

In this section, two learning paradigms were discussed: robot learning from demonstration (LFD) and robot learning to augment human performance. The presented literature affirms that AR and AI will be extensively integrated into these two robotics applications in the near future. In the former, AR serves as a user-friendly training interface and holds great potential for mobile swarm robotics, as multiple users can more easily train a multi-robot system. In the context of manipulators and robotic arms, visualizing demonstrations in real time allows the user to understand trajectories, correct errors, and introduce new constraints to the system. In the latter, there is a growing opportunity to employ AI in robotic applications that understand user instructions for the task at hand and employ AR to visualize what the robot has understood and to interactively ask the user for feedback. This has great potential in complex applications where multiple factors concurrently affect the process, such as teleoperating unmanned aerial vehicles (UAVs) or controlling mobile robots in dynamic environments, as in USAR.

This cluster, Planning, groups papers in which AI is integrated to improve task planning, path planning, and grasping.

Task Planning

In Chakraborti et al. (2017) , a system for human-aware task planning was proposed featuring an “Augmented Workspace,” which allows the robot to visualize its intent (such as its current planning state), and a “Consciousness Cloud,” which learns the intent of the human collaborator from EEG signals while the task is executed. The cloud is two-fold: an SVM model classifies the input EEG signals into specific robot commands, and a Q-learning model learns the human’s preferences from task-coupled emotions (mainly stress and excitement levels) in order to plan accordingly. Although results were promising with novice users, the authors noted that the significance of the system might drop considerably when tested on experienced individuals and proposed this as future work.

Path Planning

Optimal path planning through reinforcement learning was performed in Muvva et al. (2017) in a working environment combining both physical and AR (virtual) obstacles. The environment is represented as a Markov Decision Process, and depth-first search (DFS) was used to obtain a sub-optimal solution. The robot is then trained to find the optimal path in a grid world using Q-learning, which returns the path as the learned optimal policy. Similarly, in Corotan and Irgen-Gioro (2019) , the robot learns the shortest path to its destination using Q-learning while relying solely on ARCore’s localization and obstacle-avoidance capabilities. However, the authors concluded that the robot’s dependence on a single ARCore-supported input (essentially the camera of a smartphone mounted on the robot) is inefficient: whenever anything obstructs the sensor, the robot loses its localization and routing performance.
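
The tabular Q-learning setup common to both works can be sketched as follows on a toy grid world with one virtual (AR) obstacle; the grid size, rewards, and hyperparameters are illustrative assumptions.

# Tabular Q-learning sketch on a small grid world with one blocked ("virtual
# obstacle") cell. Grid size, rewards, and hyperparameters are toy choices.
import numpy as np

SIZE, GOAL, OBSTACLE = 4, (3, 3), (1, 1)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]          # up, down, left, right
Q = np.zeros((SIZE, SIZE, len(ACTIONS)))
alpha, gamma, eps = 0.5, 0.9, 0.2
rng = np.random.default_rng(5)

def step(state, a):
    nxt = (min(max(state[0] + ACTIONS[a][0], 0), SIZE - 1),
           min(max(state[1] + ACTIONS[a][1], 0), SIZE - 1))
    if nxt == OBSTACLE:
        nxt = state                                    # bump into the AR obstacle
    reward = 1.0 if nxt == GOAL else -0.01
    return nxt, reward, nxt == GOAL

for _ in range(500):                                   # training episodes
    s, done = (0, 0), False
    while not done:
        a = rng.integers(4) if rng.random() < eps else int(np.argmax(Q[s]))
        nxt, r, done = step(s, a)
        Q[s][a] += alpha * (r + gamma * np.max(Q[nxt]) - Q[s][a])
        s = nxt

print(np.argmax(Q[(0, 0)]))  # greedy first action of the learned policy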

A deep AR grasp planning system was proposed in Zhang et al. (2020) that utilizes the ARKit platform to collect point cloud data of the object to grasp and to visualize the planned grasp vector overlaid on the object’s depth map. The pipeline has five stages: recording RGB images of the object to grasp, extracting the point cloud using Structure from Motion (SFM), cleaning the data using RANSAC and KNN, transforming the data into an artificial depth map, and finally feeding this map to a pre-trained GQ-CNN. Although this methodology was efficient in detecting optimal grasps in cases where the traditional top-down approach fails, its downside is the long data-collection time (about 2 min per object).
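
The RANSAC cleaning stage can be illustrated with a simple plane-removal sketch; the thresholds, iteration count, and synthetic cloud below are assumptions and not the parameters of Zhang et al. (2020).

# Illustrative sketch of point-cloud cleaning: a simple RANSAC plane fit removes the
# dominant (table) plane before the remaining object points are turned into a depth map.
import numpy as np

def ransac_plane(points, n_iter=200, threshold=0.01, rng=np.random.default_rng(0)):
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                                   # degenerate (collinear) sample
        normal /= norm
        inliers = np.abs((points - p1) @ normal) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Tiny synthetic check: a flat plane plus a small box of points above it.
rng = np.random.default_rng(1)
plane = np.column_stack([rng.uniform(0, 1, 500), rng.uniform(0, 1, 500), np.zeros(500)])
box = rng.uniform(0.4, 0.5, (100, 3)) + [0, 0, 0.1]
cloud = np.vstack([plane, box])
mask = ransac_plane(cloud)
object_points = cloud[~mask]                            # keep only the off-plane object
print(mask.sum())                                       # roughly 500 plane inliers expected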

The authors in Chu et al. (2018) also investigated AR and AI solutions for grasping, specifically grasps controlled by a Tongue Drive System (TDS). The input is RGB-D images from the META AR glasses, and the output is a set of potential grasp predictions, each represented by a 5D grasp rectangle augmented on the target object. Before the deep grasp algorithm ( Chu et al., 2018 ) is applied, YOLO ( Redmon et al., 2016 ) is first run on the RGB-D data to generate 2D bounding boxes, which are then extended into 3D bounding boxes for localization. The system achieved results competitive with state-of-the-art TDS manipulation tasks.

By using grasp quality measurements that take into account the uncertainty of grasp acquisition and the object’s local geometry in a cluttered scene, the system in Weisz et al. (2017) can robustly perform grasps that match the user’s intent. The presented human-in-the-loop system was tested on both healthy and impaired individuals, and subjects successfully grasped 82% of the objects. However, subjects experienced some difficulty in the grasp-refinement phase, mainly because they lacked knowledge of the gripper’s friction properties.

Based on the literature presented, we foresee several opportunities for utilizing AR and AI in future planning and manipulation tasks. This can result in a paradigm shift in collaborative human-in-the-loop frameworks, where AI adds the needed system complexity and AR bridges the gap for the user to understand that complexity. For example, the challenges of assistive robotic manipulators for people with disabilities ( Graf et al., 2004 ; Chen et al., 2013 ) can be mitigated, and the integration of new input modalities into grasp planning can be facilitated. At the same time, in all planning frameworks, attention should be given to the added mental load of AR visualizations, which might obstruct the user in some cases or even hinder efficient performance.

This cluster, Perception, groups papers in which AI is integrated for robot and environment perception through object detection or localization.

Object Detection

In Sawarkar et al. (2016) , the data received from the IP camera mounted on the UGV is first de-noised using a Gaussian filter and then processed with two algorithms for detecting individuals: an SVM trained with HOG features and a Haar cascade classifier. These algorithms detect the human figure and select it as the region of interest (ROI), which is then fed to a CNN trained to recognize individuals holding several types of guns. Once the data is processed, the detected human is augmented with a colored bounding box and a percentage representing his/her probability of being a terrorist.
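
The first detection stage can be illustrated with OpenCV’s stock HOG plus linear SVM pedestrian detector, as sketched below; the Gaussian kernel size, the window stride, and the placeholder file names are assumptions, and the follow-up CNN classification stage is not shown.

# Illustrative sketch of the classical HOG + linear SVM people detector used as a
# first stage; "frame.jpg" stands in for a frame grabbed from the UGV camera.
import cv2

frame = cv2.imread("frame.jpg")                        # placeholder input frame
frame = cv2.GaussianBlur(frame, (5, 5), 0)             # de-noising step

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))

for (x, y, w, h) in boxes:                             # candidate ROIs for the CNN stage
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("frame_annotated.jpg", frame)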

In Wang et al. (2018) , an automatic target detection mode was developed for the AR system based on semi-supervised object segmentation with a convolutional neural network. The segmentation algorithm used is One-Shot Video Object Segmentation (OSVOS). The methodology is limited because the chosen algorithm was prone to errors, especially when there is no target in the view. Furthermore, post-processing of the results was needed unless the user manually specified whether a target was in view.

In De Gregorio et al. (2020) , the authors compared the results of two object-detecting CNNs, YOLO and SSD, on a dataset they generated using ARS, an AR semi-automatic object self-annotating method. The proposed method enabled the annotation of nine sequences of around 35,000 frames in 1 h, whereas manual annotation usually takes around 10 h for 1,000 frames, thus improving the data annotation process. Furthermore, both recall and precision were increased by around 15% compared with manual labeling. In El Hafi et al. (2020) , the authors developed a method to form spatial concepts from multimodal inputs: image features obtained by an AlexNet-based CNN ( Krizhevsky et al., 2012 ), self-location information from a Monte Carlo localizer, and word information obtained from a speech recognition system.

To reduce the time spent restricting the workspace of mobile co-robots, the authors in Sprute et al. (2019b) developed a learning and support system that learns from previously user-defined virtual borders and recommends similar ones that can be directly selected through an AR application. The system uses a perception module based on RGB cameras and applies a deep learning algorithm (ResNet101) to semantically segment images of previous user interactions. The main limitations are occlusion by furniture and camera setups that do not cover the whole area.

The DBSCAN algorithm was used in Gradmann et al. (2018) to detect objects for a pick-and-place task. Objects are clustered according to their depth and color information provided by the depth camera of the Google Tango tablet. AR provides a live visual interface showing the detected objects and a preview of the robot’s intent (future position). 82% of pick-and-place tasks with different object positions were performed successfully, although the algorithm’s runtime can be impractical for some applications.

Robot Localization

In order to localize each robot and properly augment information on it in a multi-robot system, the authors in Ghiringhelli et al. (2014) used an active marker (one blinking RGB LED per robot) imaged by a fixed camera overlooking the robots’ environment. The blinking of each LED follows a predefined pattern alternating two colors (blue and green). Bright objects are first detected through a fast frame-based beacon-detection algorithm. These detections are then filtered, first by evaluating a Track Quality Index and then by a linear binary model that classifies the tracked points as either blue or green, based on logistic regression over the blue and green color features learned during calibration.
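
The calibration-time color classifier can be sketched as a logistic regression over RGB values, as below; the toy training colors are assumptions rather than the features used by Ghiringhelli et al. (2014).

# Illustrative sketch: a linear binary model (logistic regression) separating "blue"
# from "green" LED detections by their RGB values, trained at calibration time.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
blue  = np.clip(rng.normal([40, 60, 220], 20, (200, 3)), 0, 255)
green = np.clip(rng.normal([40, 220, 60], 20, (200, 3)), 0, 255)

X = np.vstack([blue, green])
y = np.array(["blue"] * 200 + ["green"] * 200)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# At run time, each tracked bright point is classified to decode the blinking pattern.
print(clf.predict([[35, 70, 210], [50, 210, 55]]))     # expected: ['blue' 'green']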

The authors in Puljiz et al. (2019) presented a review of different approaches that can potentially be used for referencing between a robot and an AR HMD, such as training a neural network to estimate the joint positions of a robot manipulator from RGB data ( Heindl et al., 2019 ). This was actually done in Kästner et al. (2020) to localize the six-DOF pose of a mobile robot, while evaluating the training process through the AR interface. The authors compared two state-of-the-art neural networks - SSPE and BetaPose - previously trained on real and artificial datasets. The artificial dataset is based on a 3D robot model generated in Unreal Engine and annotated using the NDDS plugin tool. Both networks, upon receiving a live video stream from the HoloLens, predicted an accurate 3D pose of the robot, with SSPE being 66% faster. Estimating the pose from depth sensor data was investigated in Kastner et al. (2020) . The authors also developed an open-source 6D annotation tool for 2D RGB images.

In this section, almost all the literature integrates AI to improve the AR experience, whether by devising robust calibration methods or by improving the tracking and object detection capabilities of AR systems. This provides insight into what has been done and what can still be done to achieve a smooth integration of augmented reality applications. These methods are still limited in their robustness to ambient conditions such as lighting, and their increased computational time remains impractical for some applications. However, this can be mitigated in the future as hardware power constantly improves and cloud computing becomes ubiquitous.

The Ethical Perspective of Robotics and AI

As robots become ubiquitous, ethical considerations arise that range from liability to privacy. The notion of a robot’s ability to make ethical decisions was first framed in Wallach and Allen (2009) , yet the need to set rules for robot morality was foreseen much earlier in Asimov’s fiction. Several organizations are trying to set guidelines and standards for such systems; one example is the IEEE 7010-2020 standard on ethically aligned design. The ethical challenges arising from complex intelligent systems span civilian and military use, and several areas of concern have emerged, ranging from discrimination and bias to privacy and surveillance. Service robots, which are designed to accompany humans at home or at work, present some of the greatest concerns because they operate in private and proprietary environments. Currently, the AI capabilities possessed by robots are still relatively limited, with robots only capable of simple navigation tasks or simple decisions. However, as the research field evolves, robots will be able to perform much more complex tasks with a greater level of intelligence. There is therefore a moral obligation for ethical consideration to evolve with the technology.

Concluding Remarks

This paper provided a systematic review of the literature on robotics employing artificial intelligence (AI) algorithms and augmented reality (AR) technology. A total of 29 papers were selected and analyzed from two perspectives: a theme-based analysis featuring the relation between AR and AI, and an application-based analysis focusing on how this relation has affected the robotics application. In each group, the 29 papers were further clustered based on the type of robotics platform and the type of robotics application, respectively. The major insights that can be drawn from this review are summarized below.

Augmented reality is a promising tool for facilitating the integration of AI into numerous robotics applications. To counter the increased complexity of understanding AI systems, AR offers an intuitive way of visualizing the robot’s internal state and its live training process. This is done by augmenting live information for the user via an HMD, a desktop-based GUI, a mobile phone, or a spatial projection system, and it has been shown to improve several applications, such as learning from demonstration, grasping, and planning. Learning from demonstration for robot manipulators is a field that has greatly benefited from the integration of AR and AI as an intuitive and user-friendly method of teaching, as done in Fang et al. (2014) and Liu et al. (2018) . AR has also served as a user-friendly interface for asking the user to accept or reject AI output, such as recommending an “Autocomplete” of a predicted trajectory or suggesting a faster mapping of new virtual borders. We suspect the use of AR could contribute to the acceptability of, and public trust in, AI-enabled robots, as it can explicitly reveal the robot’s decision-making process and intentions. This has the potential to increase not only the efficiency of robotic systems but also their safety.

To improve the AR experience, accurate and reliable calibration and object localization methods are needed, and the literature shows that artificial intelligence is a viable enabler of this for robotics applications. AR markers are widely used but are limited in dynamic environments and in cases of occlusion. Deep neural networks for object detection and robot localization seem the most promising option for unstructured robotic environments ( De Gregorio et al., 2020 ; El Hafi et al., 2020 ), although some methods are still computationally demanding. However, progress in hardware and cloud computing is making AI more viable in such scenarios. We suspect that AI will increasingly be used for context and situational awareness in addition to the detection of objects and events, capabilities that would further enrich AR-displayed content.

The potential of integrating these two elements in robotics applications is manifold and provides a means of deciphering the traditional human-robot model mismatch. Specifically, in the context of human-robot collaboration, AI can be used to infer the real user intent from the tasks the robot traditionally performs, as in the work of Zein et al. (2020) . At the same time, AR can visualize the robot’s understanding of the user’s intent, as in the work of Ghiringhelli et al. (2014) , providing a closed feedback loop around the model mismatch. The combination of these technologies will empower the next phase of human-robot interfacing and interaction. This is an area that highlights the importance of AI working side by side with humans instead of being perceived as a substitute for them.

This study confirms the many benefits of integrating AR and AI in robotics and reveals that the field is fertile and can expect a striking surge in scholarly work. This result aligns with the current trend of incorporating more AI into robotics ( Dimitropoulos et al., 2021 ). Since the outbreak of COVID-19, the demand to replace humans with smart robots has become critical in some fields ( Feizi et al., 2021 ), reinforcing this trend. Similarly, AR technology is on the rise, with broad applications spanning education ( Samad et al., 2021 ), medicine ( Mantovani et al., 2020 ), and even sports ( da Silva et al., 2021 ). As AR- and AI-related technologies evolve, their integration will bring numerous advantages to every application in robotics as well as to other technological fields.

Despite these well-developed resources, some limitations need to be addressed for a powerful implementation of AR and AI in robotics. For example, AR devices are still hardware-limited, and some do not support advanced graphical processing, which makes it challenging to run computationally intensive AI algorithms on AR devices in real time. Current methods rely on external remote servers for heavy computation, which can be impractical in some cases. Furthermore, vision-based approaches that track objects using AR markers are prone to errors, and their performance drops sharply under occlusion or challenging lighting conditions. Further improvements in AR hardware are needed in processing power, battery life, and weight, all of which are required for AR use over extended periods of time.

Future work can apply new out-of-the-box AI techniques to improve the AR experience with tracking methods that remain robust in dynamic situations. Additional work is needed in AI to better understand human preferences regarding “how,” “when,” and “what” AR visual displays are shown to the user while debugging or performing a collaborative task with a robot. This can be framed as the robot fully understanding the “user intent” and showing the user only relevant information through an intuitive AR interface. Similarly, AR holds potential for integrating AI into complex robotics applications, such as grasping in highly cluttered environments, detecting targets and localizing robots in dynamic environments and urban search and rescue, and teleoperating UAVs with intelligent navigation and path planning. In the future, AI and AR in robotics will be as ubiquitous and robust as networking, a given in any robotic system.

The major limitation of this systematic review is the potential under-representation of some papers combining AR, AI, and robotics. Given the choice of search terms identified in Methods , research papers that do not contain a specified keyword but instead use a synonym or express the concept only implicitly may not have been captured.

Data Availability Statement

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

Author Contributions

ZB performed the literature search and data analysis and wrote the draft. IE conceived the idea for this article, advised on the review and analysis, and critically revised the manuscript.

Funding

The research was funded by the University Research Board (URB) at the American University of Beirut.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Abbreviations

AR, Augmented Reality; MR, Mixed Reality; AI, Artificial Intelligence; HMD, Head Mounted Display; GUI, Graphical User Interface; PBD, Programming by Demonstration; OGM, Occupancy Grid Map; CNN, Convolutional Neural Network; FOV, Field of View; KNN, K-Nearest Neighbor; SVM, Support Vector Machine; EEG, Electroencephalographic; SFM, Structure from Motion; RANSAC, Random Sample Consensus; YOLO, You Only Look Once; SSD, Single Shot Detector; ADIOS, Automatic Distillation of Structure; RGB, Red Green Blue; SVD, Singular Value Decomposition; MDP, Markov Decision Process; DTW, Dynamic Time Warping; EE, End Effector; DMP, Dynamic Movement Primitive; CP, Critical Point; ROS, Robot Operating System; GQ-CNN, Grasp Quality CNN; UGV, Unmanned Ground Vehicle; DBSCAN, Density-Based Spatial Clustering of Applications with Noise.

Andras, I., Mazzone, E., van Leeuwen, F. W. B., De Naeyer, G., van Oosterom, M. N., Beato, S., et al. (2020). Artificial Intelligence and Robotics: a Combination that Is Changing the Operating Room. World J. Urol. 38, 2359–2366. doi:10.1007/s00345-019-03037-6


Argall, B. D., Chernova, S., Veloso, M., and Browning, B. (2009). A Survey of Robot Learning from Demonstration. Robotics Autonomous Syst. 57, 469–483. doi:10.1016/j.robot.2008.10.024

Azhar, H., Waseem, T., and Ashraf, H. (2020). Artificial Intelligence in Surgical Education and Training: a Systematic Literature Review. Arch. Surg. Res. 1, 39–46.


Benbihi, A., Geist, M., and Pradalier, C. (2019). “Learning Sensor Placement from Demonstration for UAV Networks,” in 2019 IEEE Symposium on Computers and Communications (ISCC). Presented at the 2019 IEEE Symposium on Computers and Communications (Barcelona: ISCC ), 1–6. doi:10.1109/ISCC47284.2019.8969582

Bentz, W., Dhanjal, S., and Panagou, D. (2019). “Unsupervised Learning of Assistive Camera Views by an Aerial Co-robot in Augmented Reality Multitasking Environments,” in 2019 International Conference on Robotics and Automation (ICRA). Presented at the 2019 International Conference on Robotics and Automation (ICRA) , Montreal, QC, Canada ( IEEE ), 3003–3009. doi:10.1109/ICRA.2019.8793587

Bhandari, M., Zeffiro, T., and Reddiboina, M. (2020). Artificial Intelligence and Robotic Surgery: Current Perspective and Future Directions. Curr. Opin. Urol. 30, 48–54. doi:10.1097/MOU.0000000000000692


Billard, A., Calinon, S., Dillmann, R., and Schaal, S. (2008). “Robot Programming by Demonstration,” in Springer Handbook of Robotics . Editors B. Siciliano, and O. Khatib (Berlin, Heidelberg: Springer Berlin Heidelberg ), 1371–1394. doi:10.1007/978-3-540-30301-5_60

Bonin-Font, F., Ortiz, A., and Oliver, G. (2008). Visual Navigation for Mobile Robots: A Survey. J. Intell. Robot. Syst. 53, 263–296. doi:10.1007/s10846-008-9235-4

Bouaziz, J., Mashiach, R., Cohen, S., Kedem, A., Baron, A., Zajicek, M., et al. (2018). How Artificial Intelligence Can Improve Our Understanding of the Genes Associated with Endometriosis: Natural Language Processing of the PubMed Database. Biomed. Res. Int. 2018, 1–7. doi:10.1155/2018/6217812

Busch, B., Grizou, J., Lopes, M., and Stulp, F. (2017). Learning Legible Motion from Human-Robot Interactions. Int. J. Soc. Robotics 9, 765–779. doi:10.1007/s12369-017-0400-4

Čaić, M., Avelino, J., Mahr, D., Odekerken-Schröder, G., and Bernardino, A. (2020). Robotic versus Human Coaches for Active Aging: An Automated Social Presence Perspective. Int. J. Soc. Robotics 12, 867–882. doi:10.1007/s12369-018-0507-2

Cao, Y., Wang, T., Qian, X., Rao, P. S., Wadhawan, M., Huo, K., and Ramani, K. (2019). “GhostAR: A Time-Space Editor for Embodied Authoring of Human-Robot Collaborative Task with Augmented Reality,” in Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology. Presented at the UIST ’19: The 32nd Annual ACM Symposium on User Interface Software and Technology , New Orleans LA USA (New York: ACM ), 521–534. doi:10.1145/3332165.3347902

Chacko, S. M., Granado, A., and Kapila, V. (2020). An Augmented Reality Framework for Robotic Tool-Path Teaching. Proced. CIRP 93, 1218–1223. doi:10.1016/j.procir.2020.03.143

Chakraborti, T., Sreedharan, S., Kulkarni, A., and Kambhampati, S., 2017. Alternative Modes of Interaction in Proximal Human-In-The-Loop Operation of Robots. ArXiv170308930 Cs.

Chen, L., Chen, P., and Lin, Z. (2020a). Artificial Intelligence in Education: A Review. IEEE Access 8, 75264–75278. doi:10.1109/ACCESS.2020.2988510

Chen, L., Su, W., Wu, M., Pedrycz, W., and Hirota, K. (2020b). A Fuzzy Deep Neural Network with Sparse Autoencoder for Emotional Intention Understanding in Human-Robot Interaction. IEEE Trans. Fuzzy Syst. 28, 1. doi:10.1109/TFUZZ.2020.2966167

Chen, T. L., Ciocarlie, M., Cousins, S., Grice, P. M., Hawkins, K., Kaijen Hsiao, K., et al. (2013). Robots for Humanity: Using Assistive Robotics to Empower People with Disabilities. IEEE Robot. Automat. Mag. 20, 30–39. doi:10.1109/MRA.2012.2229950

Chu, F.-J., and Vela, P. (2018). Deep Grasp: Detection and Localization of Grasps with Deep Neural Networks .

Chu, F.-J., Xu, R., and Vela, P. A., 2018. Real-world Multi-Object, Multi-Grasp Detection. ArXiv180200520 Cs. doi:10.1109/lra.2018.2852777

Chu, F.-J., Xu, R., Zhang, Z., Vela, P. A., and Ghovanloo, M. (2008). The Helping Hand: An Assistive Manipulation Framework Using Augmented Reality and Tongue-Drive Interfaces. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. 4, 2158–2161. doi:10.1109/EMBC.2018.8512668

Comes, R., Neamtu, C., and Buna, Z. L. (2021). “Work-in-Progress-Augmented Reality Enriched Project Guide for Mechanical Engineering Students,” in 2021 7th International Conference of the Immersive Learning Research Network (ILRN). Presented at the 2021 7th International Conference of the Immersive Learning Research Network (Eureka: iLRN ), 1–3. doi:10.23919/iLRN52045.2021.9459247

Corotan, A., and Irgen-Gioro, J. J. Z. (2019). “An Indoor Navigation Robot Using Augmented Reality,” in 2019 5th International Conference on Control, Automation and Robotics (ICCAR). Presented at the 2019 5th International Conference on Control, Automation and Robotics (ICCAR) , Beijing, China ( IEEE ), 111–116. doi:10.1109/ICCAR.2019.8813348

da Silva, A. M., Albuquerque, G. S. G., and de Medeiros, F. P. A. (2021). “A Review on Augmented Reality Applied to Sports,” in 2021 16th Iberian Conference on Information Systems and Technologies (CISTI). Presented at the 2021 16th Iberian Conference on Information Systems and Technologies (CISTI) , 1–6. doi:10.23919/CISTI52073.2021.9476570

De Gregorio, D., Tonioni, A., Palli, G., and Di Stefano, L. (2020). Semiautomatic Labeling for Deep Learning in Robotics. IEEE Trans. Automat. Sci. Eng. 17, 611–620. doi:10.1109/TASE.2019.2938316

De Pace, F., Manuri, F., Sanna, A., and Fornaro, C. (2020). A Systematic Review of Augmented Reality Interfaces for Collaborative Industrial Robots. Comput. Ind. Eng. 149, 106806. doi:10.1016/j.cie.2020.106806

De Tommaso, D., Calinon, S., and Caldwell, D. G. (2012). A Tangible Interface for Transferring Skills. Int. J. Soc. Robotics 4, 397–408. doi:10.1007/s12369-012-0154-y

Dias, A., Wellaboda, H., Rasanka, Y., Munasinghe, M., Rodrigo, R., and Jayasekara, P. (2020). “Deep Learning of Augmented Reality Based Human Interactions for Automating a Robot Team,” in 2020 6th International Conference on Control, Automation and Robotics (ICCAR). Presented at the 2020 6th International Conference on Control, Automation and Robotics (ICCAR) , Singapore, Singapore ( IEEE ), 175–182. doi:10.1109/ICCAR49639.2020.9108004

Dias, T., Miraldo, P., Gonçalves, N., and Lima, P. U. (2015). “Augmented Reality on Robot Navigation Using Non-central Catadioptric Cameras,” in 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Presented at the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems IROS , 4999–5004. doi:10.1109/IROS.2015.7354080

Dimitropoulos, K., Daras, P., Manitsaris, S., Fol Leymarie, F., and Calinon, S. (2021). Editorial: Artificial Intelligence and Human Movement in Industries and Creation. Front. Robot. AI 8, 712521. doi:10.3389/frobt.2021.712521

El Hafi, L., Isobe, S., Tabuchi, Y., Katsumata, Y., Nakamura, H., Fukui, T., et al. (2020). System for Augmented Human-Robot Interaction through Mixed Reality and Robot Training by Non-experts in Customer Service Environments. Adv. Robotics 34, 157–172. doi:10.1080/01691864.2019.1694068

Fang, H. C., Ong, S. K., and Nee, A. Y. C. (2014). Novel AR-based Interface for Human-Robot Interaction and Visualization. Adv. Manuf. 2, 275–288. doi:10.1007/s40436-014-0087-9

Fang, H. C., Ong, S. K., and Nee, A. Y. C. (2013). Orientation Planning of Robot End-Effector Using Augmented Reality. Int. J. Adv. Manuf. Technol. 67, 2033–2049. doi:10.1007/s00170-012-4629-7

Feigl, T., Porada, A., Steiner, S., Löffler, C., Mutschler, C., and Philippsen, M., 2020. Localization Limitations of ARCore, ARKit, and Hololens in Dynamic Large-Scale Industry Environments. Presented at the VISIGRAPP (1: GRAPP), pp. 307–318. doi:10.5220/0008989903070318

Feizi, N., Tavakoli, M., Patel, R. V., and Atashzar, S. F. (2021). Robotics and Ai for Teleoperation, Tele-Assessment, and Tele-Training for Surgery in the Era of Covid-19: Existing Challenges, and Future Vision. Front. Robot. AI 8, 610677. doi:10.3389/frobt.2021.610677

Gadre, S. Y. (2018). Teaching Robots Using Mixed Reality. Brown Univ. Dep. Comput. Sci.

Ghiringhelli, F., Guzzi, J., Di Caro, G. A., Caglioti, V., Gambardella, L. M., and Giusti, A., 2014. Interactive Augmented Reality for Understanding and Analyzing Multi-Robot Systems, in: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems. Presented at the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014) , IEEE , Chicago, IL, USA , pp. 1195–1201. doi:10.1109/IROS.2014.6942709

Gong, L., Gong, C., Ma, Z., Zhao, L., Wang, Z., Li, X., Jing, X., Yang, H., and Liu, C. (2017). “Real-time Human-In-The-Loop Remote Control for a Life-Size Traffic Police Robot with Multiple Augmented Reality Aided Display Terminals,” in 2017 2nd International Conference on Advanced Robotics and Mechatronics (ICARM). Presented at the 2017 2nd International Conference on Advanced Robotics and Mechatronics (China: ICARM ), 420–425. doi:10.1109/ICARM.2017.8273199

Gonzalez-Billandon, J., Aroyo, A. M., Tonelli, A., Pasquali, D., Sciutti, A., Gori, M., et al. (2019). Can a Robot Catch You Lying? A Machine Learning System to Detect Lies during Interactions. Front. Robot. AI 6, 64. doi:10.3389/frobt.2019.00064

Govers, F. X. (2018). Artificial Intelligence for Robotics: Build Intelligent Robots that Perform Human Tasks Using AI Techniques . Packt Publishing Limited .

Mylonas, G. P., Giataganas, P., Chaudery, M., Vitiello, V., Darzi, A., and Guang-Zhong Yang, G., 2013. Autonomous eFAST Ultrasound Scanning by a Robotic Manipulator Using Learning from Demonstrations, in: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems. Presented at the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems , pp. 3251–3256. doi:10.1109/IROS.2013.6696818

Gradmann, M., Orendt, E. M., Schmidt, E., Schweizer, S., and Henrich, D. (2018). Augmented Reality Robot Operation Interface with Google Tango.

Graf, B., Hans, M., and Schraft, R. D. (2004). Care-O-bot II-Development of a Next Generation Robotic Home Assistant. Autonomous Robots 16, 193–205. doi:10.1023/B:AURO.0000016865.35796.e9

Green, S. A., Billinghurst, M., Chen, X., and Chase, J. G. (2008). Human-Robot Collaboration: A Literature Review and Augmented Reality Approach in Design. Int. J. Adv. Robotic Syst. 5, 1. doi:10.5772/5664

Gurevich, P., Lanir, J., and Cohen, B. (2015). Design and Implementation of TeleAdvisor: a Projection-Based Augmented Reality System for Remote Collaboration. Comput. Supported Coop. Work 24, 527–562. doi:10.1007/s10606-015-9232-7

Hakky, T., Dickey, R., Srikishen, N., Lipshultz, L., Spiess, P., and Carrion, R. (2016). Augmented Reality Assisted Surgery: a Urologic Training Tool. Asian J. Androl. 18, 732. doi:10.4103/1008-682X.166436

Hastie, H., Lohan, K., Chantler, M., Robb, D. A., Ramamoorthy, S., Petrick, R., et al. 2018. The ORCA Hub: Explainable Offshore Robotics through Intelligent Interfaces. ArXiv180302100 Cs.

Heindl, C., Zambal, S., Ponitz, T., Pichler, A., and Scharinger, J., 2019. 3D Robot Pose Estimation from 2D Images. ArXiv190204987 Cs.

Hester, T., Vecerik, M., Pietquin, O., Lanctot, M., Schaul, T., Piot, B., et al. 2017. Deep Q-Learning from Demonstrations. ArXiv Prepr. ArXiv170403732.

Kästner, L., Dimitrov, D., and Lambrecht, J., 2020. A Markerless Deep Learning-Based 6 Degrees of Freedom Pose Estimation for Mobile Robots Using RGB Data. ArXiv200105703 Cs.

Kahuttanaseth, W., Dressler, A., and Netramai, C. (2018). “Commanding mobile Robot Movement Based on Natural Language Processing with RNN Encoder-decoder,” in 2018 5th International Conference on Business and Industrial Research (ICBIR). Presented at the 2018 5th International Conference on Business and Industrial Research (Bangkok: ICBIR ), 161–166. doi:10.1109/ICBIR.2018.8391185

Kastner, L., Frasineanu, V. C., and Lambrecht, J. (2020). “A 3D-Deep-Learning-Based Augmented Reality Calibration Method for Robotic Environments Using Depth Sensor Data,” in 2020 IEEE International Conference on Robotics and Automation (ICRA). Presented at the 2020 IEEE International Conference on Robotics and Automation (ICRA) , Paris, France ( IEEE ), 1135–1141. doi:10.1109/ICRA40945.2020.9197155

Kästner, L., and Lambrecht, J. (2019). “Augmented-Reality-Based Visualization of Navigation Data of Mobile Robots on the Microsoft Hololens - Possibilities and Limitations,” in 2019 IEEE International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics (RAM). Presented at the 2019 IEEE International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics (Bangkok: Automation and Mechatronics RAM ), 344–349. doi:10.1109/CIS-RAM47153.2019.9095836

Kavraki, L. E., Svestka, P., Latombe, J.-C., and Overmars, M. H. (1996). Probabilistic Roadmaps for Path Planning in High-Dimensional Configuration Spaces. IEEE Trans. Robot. Automat. 12, 566–580. doi:10.1109/70.508439

Kim, B., and Pineau, J. (2016). Socially Adaptive Path Planning in Human Environments Using Inverse Reinforcement Learning. Int. J. Soc. Robotics 8, 51–66. doi:10.1007/s12369-015-0310-2

Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). “ImageNet Classification with Deep Convolutional Neural Networks,”. Advances in Neural Information Processing Systems . Editors F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger (Red Hook, NY: Curran Associates, Inc. ), 25, 1097–1105.

Le, T. D., Huynh, D. T., and Pham, H. V. (2018). “Efficient Human-Robot Interaction Using Deep Learning with Mask R-CNN: Detection, Recognition, Tracking and Segmentation,” in 2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV). Presented at the 2018 15th International Conference on Control, Automation, Robotics and Vision (Singapore: ICARCV ), 162–167. doi:10.1109/ICARCV.2018.8581081

Liu, H., Zhang, Y., Si, W., Xie, X., Zhu, Y., and Zhu, S.-C. (2018). “Interactive Robot Knowledge Patching Using Augmented Reality,” in IEEE International Conference on Robotics and Automation (ICRA). Presented at the 2018 IEEE International Conference on Robotics and Automation (ICRA) , Brisbane, QLD ( IEEE ), 1947–1954. doi:10.1109/ICRA.2018.8462837

Livio, J., and Hodhod, R. (2018). AI Cupper: A Fuzzy Expert System for Sensorial Evaluation of Coffee Bean Attributes to Derive Quality Scoring. IEEE Trans. Fuzzy Syst. 26, 3418–3427. doi:10.1109/TFUZZ.2018.2832611

Loh, E. (2018). Medicine and the Rise of the Robots: a Qualitative Review of Recent Advances of Artificial Intelligence in Health. leader 2, 59–63. doi:10.1136/leader-2018-000071

Makhataeva, Z., and Varol, H. (2020). Augmented Reality for Robotics: A Review. Robotics 9, 21. doi:10.3390/robotics9020021

Makhataeva, Z., Zhakatayev, A., and Varol, H. A. (2019). “Safety Aura Visualization for Variable Impedance Actuated Robots,” in IEEE/SICE International Symposium on System Integration (SII). Presented at the 2019 IEEE/SICE International Symposium on System Integration (SII) , 805–810. doi:10.1109/SII.2019.8700332

Makita, S., Sasaki, T., and Urakawa, T. (2021). Offline Direct Teaching for a Robotic Manipulator in the Computational Space. Ijat 15, 197–205. doi:10.20965/ijat.2021.p0197

Mallik, A., and Kapila, V. (2020). “Interactive Learning of Mobile Robots Kinematics Using ARCore,” in 2020 5th International Conference on Robotics and Automation Engineering (ICRAE). Presented at the 2020 5th International Conference on Robotics and Automation Engineering (Singapore: ICRAE ), 1–6. doi:10.1109/ICRAE50850.2020.9310865

Mantovani, E., Zucchella, C., Bottiroli, S., Federico, A., Giugno, R., Sandrini, G., et al. (2020). Telemedicine and Virtual Reality for Cognitive Rehabilitation: a Roadmap for the COVID-19 Pandemic. Front. Neurol. 11, 926. doi:10.3389/fneur.2020.00926

Mathews, S. M. (2019). “Explainable Artificial Intelligence Applications in NLP, Biomedical, and Malware Classification: A Literature Review,” in Intelligent Computing, Advances in Intelligent Systems and Computing . Editors K. Arai, R. Bhatia, and S. Kapoor (Cham: Springer International Publishing ), 1269–1292. doi:10.1007/978-3-030-22868-2_90

McHenry, N., Spencer, J., Zhong, P., Cox, J., Amiscaray, M., Wong, K., and Chamitoff, G. (2021). “Predictive XR Telepresence for Robotic Operations in Space,” in Presented at the 2021 IEEE Aerospace Conference (50100) ( IEEE ), 1–10.

Measurable Augmented Reality for Prototyping Cyberphysical Systems (2016). A Robotics Platform to Aid the Hardware Prototyping and Performance Testing of Algorithms. IEEE Control. Syst. 36, 65–87. doi:10.1109/MCS.2016.2602090

Microsoft HoloLens (2020). Mixed Reality Technology for Business. Available at: https://www.microsoft.com/en-us/hololens (accessed 1 11, 20).

Milgram, P., and Kishino, F. (1994). A Taxonomy of Mixed Reality Visual Displays. IEICE Trans. Inf. Syst. E77-d 12, 1321–1329.

Moher, D., Liberati, A., Tetzlaff, J., and Altman, D. G. (2009). The PRISMA GroupPreferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. Plos Med. 6, e1000097. doi:10.1371/journal.pmed.1000097

Muvva, V. V. R. M. K. R., Adhikari, N., and Ghimire, A. D. (2017). “Towards Training an Agent in Augmented Reality World with Reinforcement Learning,” in 2017 17th International Conference on Control, Automation and Systems (ICCAS). Presented at the 2017 17th International Conference on Control (Jeju: Automation and Systems (ICCAS) ), 1884–1888. doi:10.23919/ICCAS.2017.8204283

Nicolotti, L., Mall, V., and Schieberle, P. (2019). Characterization of Key Aroma Compounds in a Commercial Rum and an Australian Red Wine by Means of a New Sensomics-Based Expert System (SEBES)-An Approach to Use Artificial Intelligence in Determining Food Odor Codes. J. Agric. Food Chem. 67, 4011–4022. doi:10.1021/acs.jafc.9b00708

Nilsson, N. J. (1998). Artificial Intelligence: A New Synthesis . Elsevier .

Nilsson, N. J. (2009). The Quest for Artificial Intelligence: A History of Ideas and Achievements . Cambridge: Cambridge University Press . doi:10.1017/CBO9780511819346


Norouzi, N., Bruder, G., Belna, B., Mutter, S., Turgut, D., and Welch, G. (2019). “A Systematic Review of the Convergence of Augmented Reality, Intelligent Virtual Agents, and the Internet of Things,” in Artificial Intelligence in IoT . Editor F. Al-Turjman (Cham: Springer International Publishing ), 1–24. doi:10.1007/978-3-030-04110-6_1

Oculus | VR Headsets & Equipment, 2021. Available at: https://www.oculus.com/ (accessed 11.1.20).

Ong, S. K., Chong, J. W. S., and Nee, A. Y. C. (2010). A Novel AR-based Robot Programming and Path Planning Methodology. Robotics and Computer-Integrated Manufacturing 26, 240–249. doi:10.1016/j.rcim.2009.11.003

Papachristos, C., and Alexis, K. (2016). “Augmented Reality-Enhanced Structural Inspection Using Aerial Robots,” in IEEE International Symposium on Intelligent Control (ISIC). Presented at the 2016 IEEE International Symposium on Intelligent Control (Buenos Aires: ISIC ), 1–6. doi:10.1109/ISIC.2016.7579983

Patel, J., Xu, Y., and Pinciroli, C. (2019). “Mixed-Granularity Human-Swarm Interaction,” in 2019 International Conference on Robotics and Automation (ICRA). Presented at the 2019 International Conference on Robotics and Automation (Montreal: ICRA ), 1059–1065. doi:10.1109/ICRA.2019.8793261

Pessaux, P., Diana, M., Soler, L., Piardi, T., Mutter, D., and Marescaux, J. (2015). Towards Cybernetic Surgery: Robotic and Augmented Reality-Assisted Liver Segmentectomy. Langenbecks Arch. Surg. 400, 381–385. doi:10.1007/s00423-014-1256-9

Pickering, C., and Byrne, J. (2014). The Benefits of Publishing Systematic Quantitative Literature Reviews for PhD Candidates and Other Early-Career Researchers. Higher Education Res. Development 33, 534–548. doi:10.1080/07294360.2013.841651

Puljiz, D., Riesterer, K. S., Hein, B., and Kröger, T., 2019. Referencing between a Head-Mounted Device and Robotic Manipulators. ArXiv190402480 Cs.

Qian, L., Wu, J. Y., DiMaio, S. P., Navab, N., and Kazanzides, P. (2020). A Review of Augmented Reality in Robotic-Assisted Surgery. IEEE Trans. Med. Robot. Bionics 2, 1–16. doi:10.1109/TMRB.2019.2957061

Qiu, S., Liu, H., Zhang, Z., Zhu, Y., and Zhu, S.-C. (2020). “Human-Robot Interaction in a Shared Augmented Reality Workspace,” in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Presented at the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) , 11413–11418. doi:10.1109/IROS45743.2020.9340781

Redmon, J., Divvala, S., Girshick, R., and Farhadi, A., 2016. You Only Look once: Unified, Real-Time Object Detection. ArXiv150602640 Cs.

Rosen, E., Whitney, D., Phillips, E., Chien, G., Tompkin, J., Konidaris, G., et al. (2019). Communicating and Controlling Robot Arm Motion Intent through Mixed-Reality Head-Mounted Displays. Int. J. Robotics Res. 38, 1513–1526. doi:10.1177/0278364919842925

Samad, S., Nilashi, M., Abumalloh, R. A., Ghabban, F., Supriyanto, E., and Ibrahim, O. (2021). Associated Advantages and Challenges of Augmented Reality in Educational Settings: A Systematic Review. J. Soft Comput. Decis. Support. Syst. 8, 12–17.

Sawarkar, A., Chaudhari, V., Chavan, R., Zope, V., Budale, A., and Kazi, F., 2016. HMD Vision-Based Teleoperating UGV and UAV for Hostile Environment Using Deep Learning. ArXiv160904147 Cs.

Sidaoui, A., Zein, M. K., Elhajj, I. H., and Asmar, D. (2019). “A-SLAM: Human In-The-Loop Augmented SLAM,” in 2019 International Conference on Robotics and Automation (ICRA). Presented at the 2019 International Conference on Robotics and Automation (Montreal: ICRA ), 5245–5251. doi:10.1109/ICRA.2019.8793539

Simões, M. A. C., da Silva, R. M., and Nogueira, T. (2020). A Dataset Schema for Cooperative Learning from Demonstration in Multi-Robot Systems. J. Intell. Robot. Syst. 99, 589–608. doi:10.1007/s10846-019-01123-w

Singh, N. H., and Thongam, K. (2019). Neural Network-Based Approaches for mobile Robot Navigation in Static and Moving Obstacles Environments. Intel Serv. Robotics 12, 55–67. doi:10.1007/s11370-018-0260-2

Sprute, D., Tönnies, K., and König, M. (2019a). A Study on Different User Interfaces for Teaching Virtual Borders to Mobile Robots. Int. J. Soc. Robotics 11, 373–388. doi:10.1007/s12369-018-0506-3

Sprute, D., Viertel, P., Tonnies, K., and Konig, M. (2019b). “Learning Virtual Borders through Semantic Scene Understanding and Augmented Reality,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Presented at the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) , Macau, China ( IEEE ), 4607–4614. doi:10.1109/IROS40897.2019.8967576

Tay, Y. Y., Goh, K. W., Dares, M., Koh, Y. S., and Yeong, C. F. (n.d.). Augmented Reality (AR) Predictive Maintenance System with Artificial Intelligence (AI) for Industrial Mobile Robot.

Turing, A. M. (1950). I.-Computing Machinery and Intelligence. Mind New Ser. LIX, 433–460. doi:10.1093/mind/lix.236.433

Tussyadiah, I. (2020). A Review of Research into Automation in Tourism: Launching the Annals of Tourism Research Curated Collection on Artificial Intelligence and Robotics in Tourism. Ann. Tourism Res. 81, 102883. doi:10.1016/j.annals.2020.102883

Tzafestas, C. S. (2006). “Virtual and Mixed Reality in Telerobotics: A Survey,” in Industrial Robotics: Programming, Simulation and Applications (London: IntechOpen ). doi:10.5772/4911

Van Krevelen, D. W. F., and Poelman, R. (2010). A Survey of Augmented Reality Technologies, Applications and Limitations. Ijvr 9, 1–20. ISSN 1081-1451 9. doi:10.20870/IJVR.2010.9.2.2767

Walker, M., Hedayati, H., Lee, J., and Szafir, D. (2018). “Communicating Robot Motion Intent with Augmented Reality,” in Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction. Presented at the HRI ’18: ACM/IEEE International Conference on Human-Robot Interaction , Chicago IL USA (Chicago: ACM ), 316–324. doi:10.1145/3171221.3171253

Wallach, W., and Allen, C. (2009). Moral Machines: Teaching Robots Right from Wrong . New York: Oxford University Press . doi:10.1093/acprof:oso/9780195374049.001.0001

Wang, B., and Rau, P.-L. P. (2019). Influence of Embodiment and Substrate of Social Robots on Users' Decision-Making and Attitude. Int. J. Soc. Robotics 11, 411–421. doi:10.1007/s12369-018-0510-7

Wang, R., Lu, H., Xiao, J., Li, Y., and Qiu, Q. (2018). “The Design of an Augmented Reality System for Urban Search and Rescue,” in IEEE International Conference on Intelligence and Safety for Robotics (ISR). Presented at the 2018 IEEE International Conference on Intelligence and Safety for Robotics (ISR) , Shenyang ( IEEE ), 267–272. doi:10.1109/IISR.2018.8535823

Warrier, R. B., and Devasia, S. (2018). “Kernel-Based Human-Dynamics Inversion for Precision Robot Motion-Primitives,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Presented at the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) , Madrid ( IEEE ), 6037–6042. doi:10.1109/IROS.2018.8594164

Weisz, J., Allen, P. K., Barszap, A. G., and Joshi, S. S. (2017). Assistive Grasping with an Augmented Reality User Interface. Int. J. Robotics Res. 36, 543–562. doi:10.1177/0278364917707024

Williams, T., Szafir, D., Chakraborti, T., and Ben Amor, H. (2018). “Virtual, Augmented, and Mixed Reality for Human-Robot Interaction,” in Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction. Presented at the HRI ’18: ACM/IEEE International Conference on Human-Robot Interaction , Chicago IL USA (Daegu: ACM ), 403–404. doi:10.1145/3173386.3173561

Yew, A. W. W., Ong, S. K., and Nee, A. Y. C. (2017). Immersive Augmented Reality Environment for the Teleoperation of Maintenance Robots. Proced. CIRP 61, 305–310. doi:10.1016/j.procir.2016.11.183

Zein, M. K., Sidaoui, A., Asmar, D., and Elhajj, I. H. (2020). “Enhanced Teleoperation Using Autocomplete,” in 2020 IEEE International Conference on Robotics and Automation (ICRA). Presented at the 2020 IEEE International Conference on Robotics and Automation (ICRA) , Paris, France ( IEEE ), 9178–9184. doi:10.1109/ICRA40945.2020.9197140

Zhang, H., Ichnowski, J., Avigal, Y., Gonzales, J., Stoica, I., and Goldberg, K. (2020). “Dex-Net AR: Distributed Deep Grasp Planning Using a Commodity Cellphone and Augmented Reality App,” in 2020 IEEE International Conference on Robotics and Automation (ICRA). Presented at the 2020 IEEE International Conference on Robotics and Automation (ICRA) , Paris, France ( IEEE ), 552–558. doi:10.1109/ICRA40945.2020.9197247

Zhang, X., Yao, X., Zhu, Y., and Hu, F. (2019). An ARCore Based User Centric Assistive Navigation System for Visually Impaired People. Appl. Sci. 9, 989. doi:10.3390/app9050989

Zhu, Z., and Hu, H. (2018). Robot Learning from Demonstration in Robotic Assembly: A Survey. Robotics 7, 17. doi:10.3390/robotics7020017

Keywords: robotics, augmented reality, artificial intelligence, planning, learning, perception

Citation: Bassyouni Z and Elhajj IH (2021) Augmented Reality Meets Artificial Intelligence in Robotics: A Systematic Review. Front. Robot. AI 8:724798. doi: 10.3389/frobt.2021.724798

Received: 14 June 2021; Accepted: 30 August 2021; Published: 22 September 2021.

Copyright © 2021 Bassyouni and Elhajj. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Zahraa Bassyouni, [email protected]

Call for Papers

It’s a pleasure to invite you to submit your best research to the 2021 Robotics: Science and Systems Conference, a virtual meeting connecting researchers working on all aspects of robotics, including scientific foundations, mechanisms, algorithms, applications, and analysis of robotic systems. The paper submission deadline is March 1st.

This 17th edition of RSS will be the first planned as a virtual meeting from the outset. The conference will preserve the single-track format, and the final program will be the result of a thorough review process, giving attendees an opportunity to see the most exciting research in all areas of robotics. Submissions will be evaluated in terms of their novelty, technical quality, significance, potential impact, and clarity. The program will include invited talks as well as oral and poster presentations of accepted papers. We will also continue the RSS tradition of vibrant, engaging, and trailblazing workshops across a diverse range of topics. (The workshop submission deadline is a fortnight after the paper deadline: March 15th.)

Highlights of RSS 2021:

Rather than weathering virtual meetings as a mere stopgap, the intention is to embrace the new opportunities they afford. The virtual format has the advantage of reaching a wider audience than ever before, including many who may have heard of RSS but never participated previously, or for whom travel is prohibitive. We will focus on making resources openly available and preserving them (such as recorded presentations and self-contained, unabridged text) so that the event remains of value to our community far longer than a single week in July.

A strong and wide-spanning set of Area Chairs has agreed to serve in 2021. The paper-handling process will retain the traditional RSS features that emphasize care and quality, including a rigorous double-blind review process; we will continue the rebuttal format successfully introduced last time, with author rebuttals directed to the ACs.

RSS will have a Best Paper Award, a Best Student Paper Award, and a Best Systems Paper Award. The latter is specifically dedicated to papers that pioneer novel robotic systems (e.g., hardware or mechatronics) or demonstrate novel behaviors on a robotic system beyond what has been seen previously. Further, RSS 2021 will continue the Test of Time Award, along with the associated panel discussion.

The technical program will include single-track oral and poster presentations of all accepted papers, as well as invited talks. Opportunities for virtual mingling and networking are planned (but you’ll have to supply your own real coffee).

Important Dates

  • March 1st, 2021: Paper Submission Deadline
  • March 15th, 2021: Workshop Submission Deadline
  • April 5th, 2021: Workshop Acceptance Notification
  • May 10th, 2021: Paper Acceptance Notification
  • July 12th–16th, 2021: Virtual RSS 2021

Submission information is available on the conference website.

Subject Areas

We’re seeking high-quality research papers that introduce new ideas and stimulate future trends in robotics across a diverse range of areas. We invite submissions in all areas of robotics, including: mechanisms and design, robot learning, control and dynamics, planning, manipulation, field robotics, human-robot interaction, robot perception, formal methods, multi-robot systems, healthcare and medical robotics, biological robotics, mobile robotics. Also, we specifically encourage creative submissions that strike out in new directions and define new sub-areas. Excellence of ideas will be a central criterion for acceptance.

Special Journal Issues

RSS maintains a strong, ongoing relationship with top robotics journals. In 2021, about half of the selected papers will be invited for submission to a special issue of the International Journal of Robotics Research (IJRR) or a special issue of Autonomous Robots (AuRo). As of late 2020, the most recently published special issues featuring RSS papers appeared in IJRR and AuRo.

Smart Agriculture and Agricultural Robotics: Review and Perspective

  • Conference paper
  • First Online: 18 October 2023

  • Avital Bechar 4
  • Shimon Y. Nof 5

Part of the book series: Automation, Collaboration, & E-Services (ACES, volume 14)

Included in the following conference series:

  • International Conference on Production Research

The purpose of this chapter is to review the contribution of agricultural robotics to smart agriculture through the perspective of three contributing technology pillars: agricultural robotics; precision agriculture; and artificial intelligence. In this context, we describe contributions of recent research projects in agricultural robotics, their impacts on and prospects for smart agriculture and the next era in agriculture.

Author information

Authors and Affiliations

Institute of Agriculture Engineering (IAE), Agriculture Research Organization (ARO), Bet Dagan, Israel

Avital Bechar

PRISM Center, and School of IE, Purdue University, West Lafayette, IN, USA

Shimon Y. Nof

Corresponding author

Correspondence to Shimon Y. Nof.

Editor information

Editors and Affiliations

Industrial Engineering and Enterprise Information, Tunghai University, Taichung, Taiwan

Chin-Yin Huang

Department of Systems Science and Industrial Engineering, State University of New York at Binghamton, New York, NY, USA

Sang Won Yoon

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Bechar, A., Nof, S.Y. (2023). Smart Agriculture and Agricultural Robotics: Review and Perspective. In: Huang, CY., Yoon, S.W. (eds) Systems Collaboration and Integration. ICPR1 2021. Automation, Collaboration, & E-Services, vol 14. Springer, Cham. https://doi.org/10.1007/978-3-031-44373-2_26

DOI: https://doi.org/10.1007/978-3-031-44373-2_26

Published: 18 October 2023

Publisher Name: Springer, Cham

Print ISBN: 978-3-031-44372-5

Online ISBN: 978-3-031-44373-2

eBook Packages: Intelligent Technologies and Robotics (R0)

Robotic surgery in emergency setting: 2021 WSES position paper

Nicola de’Angelis

1 Unit of Digestive, Hepatobiliary, and Pancreatic Surgery, CARE Department, Henri Mondor University Hospital (AP-HP), Créteil, France

2 Faculty of Medicine, University of Paris Est, UPEC, Créteil, France

3 Department of Colorectal Surgery, Queen Alexandra Hospital, University of Portsmouth, Southwick Hill Road, Cosham, Portsmouth, UK

Francesco Marchegiani

4 First Surgical Clinic, Department of Surgical Oncological and Gastroenterological Sciences, University of Padua, Padua, Italy

Giorgio Bianchi

Filippo Aisoni, Daniele Alberti

5 Department of Pediatric Surgery, Spedali Civili Children’s Hospital of Brescia, Brescia, BS Italy

Luca Ansaloni

6 General Surgery, San Matteo University Hospital, Pavia, Italy

Walter Biffl

7 Division of Trauma and Acute Care Surgery, Scripps Memorial Hospital La Jolla, La Jolla, CA USA

Osvaldo Chiara

8 General Surgery and Trauma Team, ASST Niguarda Milano, University of Milano, Milan, Italy

Graziano Ceccarelli

9 General Surgery, San Giovanni Battista Hospital, USL Umbria 2, Foligno, Italy

Federico Coccolini

10 General, Emergency and Trauma Department, Pisa University Hospital, Pisa, Italy

Enrico Cicuttin

Mathieu D’Hondt

11 Department of Digestive and Hepatobiliary/Pancreatic Surgery, Groeninge Hospital, Kortrijk, Belgium

Salomone Di Saverio

12 Department of Surgery, Cambridge University Hospital, NHS Foundation Trust, Cambridge, UK

Michele Diana

13 Digestive and Endocrine Surgery, Nouvel Hôpital Civil, University of Strasbourg, Strasbourg, France

14 IRCAD, Research Institute Against Digestive Cancer, Strasbourg, France

Belinda De Simone

15 Department of General and Metabolic Surgery, Poissy and Saint-Germain-en-Laye Hospitals, Poissy, France

Eloy Espin-Basany

16 Department of General Surgery, Hospital Valle de Hebron, Universitat Autonoma de Barcelona, Barcelona, Spain

Stefan Fichtner-Feigl

17 Department of General and Visceral Surgery, Medical Center University of Freiburg, Freiburg, Germany

Jeffry Kashuk

18 Department of Surgery, Tel Aviv University, Sackler School of Medicine, Tel Aviv, Israel

Ewout Kouwenhoven

19 Department of Surgery, Hospital Group Twente ZGT, Almelo, Netherlands

Ari Leppaniemi

20 Department of Gastrointestinal Surgery, University of Helsinki and Helsinki University Hospital, Helsinki, Finland

Nassiba Beghdadi

Riccardo Memeo

21 Unit of Hepato-Pancreato-Biliary Surgery, General Regional Hospital “F. Miulli”, Acquaviva delle Fonti, Bari, Italy

Marco Milone

22 Department of Clinical Medicine and Surgery, “Federico II” University of Naples, Naples, Italy

Ernest Moore

23 Ernest E Moore Shock Trauma Center at Denver Health, University of Colorado, Denver, CO USA

Andrew Peitzmann

24 University of Pittsburgh School of Medicine, Pittsburgh, PA USA

Patrick Pessaux

25 Visceral and Digestive Surgery, Nouvel Hôpital Civil, University of Strasbourg, Strasbourg, France

26 Institute for Image-Guided Surgery, IHU Strasbourg, Strasbourg, France

27 Institute of Viral and Liver Disease, INSERM U1110, Strasbourg, France

Manos Pikoulis

28 3rd Department of Surgery, Attikon General Hospital, National and Kapodistrian University of Athens (NKUA), Athens, Greece

Michele Pisano

29 1st General Surgery Unit, Department of Emergency, ASST Papa Giovanni Hospital Bergamo, Bergamo, Italy

Frederic Ris

30 Division of Digestive Surgery, University Hospitals of Geneva and Medical School, Geneva, Switzerland

Massimo Sartelli

31 Department of Surgery, Macerata Hospital, Macerata, Italy

Giuseppe Spinoglio

32 IRCAD Faculty Member Robotic and Colorectal Surgery-IRCAD, Strasbourg, France

Michael Sugrue

33 Department of Surgery, Letterkenny University Hospital, Donegal, Ireland

34 Department of Surgery, Trauma Surgery, Radboud University Medical Center, Nijmegen, Netherlands

Paschalis Gavriilidis

35 Department of HBP Surgery, University Hospitals Coventry and Warwickshire NHS Trust, Clifford Bridge Road, Coventry, CV2 2DX UK

Dieter Weber

36 Department of Trauma Surgery, Royal Perth Hospital, Perth, Australia

Yoram Kluger

37 Department of General Surgery, Rambam Healthcare Campus, Haifa, Israel

Fausto Catena

38 Department of General and Emergency Surgery, Bufalini Hospital-Level 1 Trauma Center, Cesena, Italy

Associated Data

There are no data from individual authors that reach the criteria for availability.

Robotics represents the most technologically advanced approach in minimally invasive surgery (MIS). Its application in general surgery has increased progressively, with some early experience reported in emergency settings. The present position paper, supported by the World Society of Emergency Surgery (WSES), aims to provide a systematic review of the literature to develop consensus statements about the potential use of robotics in emergency general surgery.

This position paper was conducted according to the WSES methodology. A steering committee was constituted to draft the position paper based on the literature review. An international expert panel then critically revised the manuscript. Each statement was voted on through a web survey to reach a consensus.

Ten studies (3 case reports, 3 case series, and 4 retrospective comparative cohort studies) have been published regarding the applications of robotics for emergency general surgery procedures. Due to the paucity and overall low quality of evidence, 6 statements are proposed as expert opinions. In general, the experts call for strict patient selection when approaching emergent general surgery procedures with robotics, considering it for hemodynamically stable patients only. An emergency setting should not be seen as an absolute contraindication to robotic surgery, provided the operating surgical team is adequately trained. Under such conditions, robotic surgery can be considered safe, feasible, and associated with the surgical outcomes expected of an MIS approach. However, some concerns remain regarding the adoption of robotic surgery for emergency procedures, related to the following: (i) the availability and accessibility of the robotic platform for emergency units and during night shifts, (ii) expected longer operative times, and (iii) increased costs. Further research is necessary to investigate the role of robotic surgery in emergency settings and to explore the possibility of performing telementoring and telesurgery, which are particularly valuable in emergency situations.

Conclusions

Many hospitals are currently equipped with a robotic surgical platform, which needs to be used efficiently. The role of robotic surgery in emergency procedures remains under investigation; however, its use is expanding, accompanied by a careful assessment of costs and of the timeliness of operations. The proposed statements should be seen as a preliminary guide for the surgical community, stressing the need for reevaluation and updating as evidence in the relevant literature expands.

Robotics represents the most technologically advanced approach in minimally invasive surgery (MIS). Its application has progressively gained acceptance in several surgical fields, being routinely used for elective urology, gynecology, digestive, and hepato-bilio-pancreatic surgery [1–8]. Conversely, robotic surgery in the emergency setting remains little explored, although some early experience has been reported in the literature [9–12]. Consequently, the question of the role and potential applications of robotics for emergency procedures remains open and deserves to be continuously monitored and updated as evidence emerges.

Project rationale and design

The present position paper is supported by the World Society of Emergency Surgery (WSES) and aims to provide a systematic review of the literature investigating the use of robotics in emergency general surgery to develop consensus statements based on the currently available evidence and practice. The present document should be seen as a preliminary guide for the surgical community stressing the need for reevaluation and update processes as evidence expands in the relevant literature.

For the purpose of this WSES position paper, the organizing committee (composed of Fausto Catena, Nicola de’Angelis, and Jim Khan) constituted a steering committee (made up of 16 experts), who had the task of drafting the present position paper, and an international expert panel composed of 21 experts who were asked to critically revise the manuscript and position statements. The position paper was conducted according to the WSES methodology [ 13 ]. We shall present the systematic review of the literature and provide the derived statements upon which a consensus was reached, specifying the quality of the supporting evidence and suggesting future research directions.

Systematic review

Review question, selection criteria, and search strategy

The systematic review of the literature was performed following the Cochrane Collaboration specific protocol [ 14 ] and was reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [ 15 ].

The focus question was the following: what are the applications and outcomes of robotics for general surgery in emergency settings?

Studies reporting the use of a robotic surgical platform to manage general surgery emergencies and urgencies were searched in the following databases on June 30, 2021: MEDLINE (through PubMed), Embase, and the Cochrane Library. A specific search query was formulated for each database, using the following keywords and MeSH terms: emergency, emergency surgery, emergency setting, urgent, robotic surgery, robotic, robotics, robot-assisted, minimally invasive surgery, and minimally invasive surgery procedures.
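
As a rough illustration only (not the authors' actual code), the PubMed arm of such a search could be scripted with Biopython's Entrez utilities; the boolean structure, field tags, and e-mail address below are assumptions, since the paper lists the keywords and MeSH terms but not the exact query string.

```python
# Illustrative sketch: combines the keywords listed above into one PubMed query.
# Requires Biopython (pip install biopython) and network access to NCBI.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # hypothetical address, requested by NCBI etiquette

# Setting-related and technology-related terms (boolean structure assumed).
setting = '("Emergencies"[MeSH Terms] OR "emergency surgery" OR "emergency setting" OR urgent)'
technology = '("Robotic Surgical Procedures"[MeSH Terms] OR robotic OR robotics OR "robot-assisted")'
query = f"{setting} AND {technology}"

# Run the search against MEDLINE via PubMed and collect matching PMIDs.
handle = Entrez.esearch(db="pubmed", term=query, retmax=5000)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records found")
print(record["IdList"][:10])  # first few PubMed IDs, to be exported for screening
```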

According to the PICOS format, the following items were used as selection criteria for articles emerging from the literature search:

  • P, population: adult patients requiring surgery in emergent/urgent settings.
  • I, intervention: robotic or robot-assisted general surgery intervention.
  • C, comparisons: laparoscopy or open surgery or no comparison.
  • O, outcome(s): operative and postoperative surgical outcomes.
  • S, study design: due to the expected paucity of studies on the topic, all types of comparative studies, as well as case series and case reports, were considered, aiming to provide the most exhaustive picture of the current evidence and practice in robotic emergency general surgery.

The research was limited to studies published in English.

The literature search and selection were performed by two independent reviewers (GB and FM), who also screened the reference lists of the selected articles to identify additional studies. First, all records from the merged searches were reviewed for relevance based on title and abstract. Records were removed when both reviewers excluded them; otherwise, the disagreement was resolved via discussion or with the intervention of a tiebreaker (NdeA). Both reviewers then performed an independent full-text analysis, leading to the final decision to include or exclude each preselected article.
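
A minimal sketch of this screening rule follows, purely for illustration; the function name and the record identifiers are hypothetical and not part of the paper's methods.

```python
# Illustrative sketch of the two-reviewer screening rule described above.
def screen_record(decision_gb, decision_fm, tiebreaker=None):
    """Return 'include', 'exclude', or 'discuss' for one record.

    A record is removed only when both reviewers exclude it; disagreements go
    to discussion or, if still unresolved, to the tiebreaker (NdeA).
    """
    if decision_gb == decision_fm:
        return "include" if decision_gb else "exclude"
    if tiebreaker is None:
        return "discuss"
    return "include" if tiebreaker else "exclude"

# Hypothetical decisions: True = keep for full-text review, False = exclude.
records = {"record-0001": (True, True), "record-0002": (False, False), "record-0003": (True, False)}
for rid, (gb, fm) in records.items():
    print(rid, screen_record(gb, fm))
```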

Data extraction and synthesis

Data extraction was performed by filling in an electronic spreadsheet, which included the following items: first author’s name, year of publication, scientific journal, type of study, number of patients, pathological state requiring surgical intervention, type of surgical intervention, surgical approach, operative surgical outcomes, and postoperative surgical outcomes. The risk of bias in the selected studies was assessed by using validated systems according to the type of study design [ 16 – 18 ].
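
Purely as an illustration, the extraction sheet described above could be modelled as a simple record type; the field names mirror the listed items, while the class, helper function, and file name are hypothetical.

```python
# Illustrative sketch of the data-extraction record; values would be filled in
# per included study during full-text review.
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class ExtractionRecord:
    first_author: str
    year_of_publication: int
    scientific_journal: str
    type_of_study: str
    number_of_patients: int
    pathological_state: str
    type_of_intervention: str
    surgical_approach: str
    operative_outcomes: str
    postoperative_outcomes: str

def write_extraction_sheet(records, path="extraction_sheet.csv"):
    """Write one CSV row per included study, mirroring the electronic spreadsheet."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(ExtractionRecord)])
        writer.writeheader()
        for rec in records:
            writer.writerow(asdict(rec))
```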

Literature search and selection

The initial search yielded 3767 results; after removing duplicates, 3662 articles were screened for eligibility based on title and abstract, and 31 articles were retrieved for a full-text evaluation. A total of 10 studies fulfilled the selection criteria and were finally included in the review (Fig. 1).

Fig. 1. Flowchart of the literature search and selection

Study characteristics

The selected 10 studies were published between 2012 and 2021. They consisted of 5 cohort studies and 5 case reports conducted in Europe (n = 3) and North America (n = 7). The characteristics of the examined studies are summarized in Table 1. Overall, they considered 279 patients.

Table 1. Studies reporting on urgent/emergent general surgery interventions performed with a robotic approach

Three studies reported interventions of colorectal surgery [ 9 , 10 , 19 ], two studies reported on hiatal hernia surgery [ 20 , 21 ], two studies reported on gallbladder surgery [ 22 , 23 ], two studies reported on bariatric surgery [ 12 , 24 ], and one study reported on abdominal wall surgery [ 25 ]. Only one case was a cancer-related emergency [ 10 ].

Qualitative synthesis of the literature

  • 1. Robotics in emergency colorectal surgery

An early preliminary report of an emergent robotic repair of an iatrogenic colonic perforation was published by Pedraza et al. in 2012 [19]. The authors showed that such a procedure was feasible and successful. In 2014, Felli et al. [10] described the case of an 86-year-old woman who underwent a robotic right colectomy for a bleeding ascending colon neoplasia. The surgery was uneventful and the reported postoperative outcomes were excellent. More recently, Anderson et al. [9] published a matched case–control study focusing on the use of robotics for urgent subtotal colectomies in patients presenting with ulcerative colitis. The results showed similar short-term outcomes for the robotic and laparoscopic approaches.

  • 2. Robotics in emergency hiatal hernia surgery

In recent years, two groups have published their early experience with robotic surgery for emergency hiatal hernia repair. In a case series of 3 patients undergoing robotic surgery for complicated giant hiatal hernia, Ceccarelli et al. [21] showed good postoperative outcomes. The authors suggested that the potential advantages of robotics over a conventional laparoscopic approach were mainly related to the surgeon's comfort and precision during the intervention. Hosein et al. [20] performed a cohort-based analysis using data from the 2015–2017 Vizient clinical database, which included inpatient data from over 300 hospitals in the USA. Trend analysis demonstrated that laparoscopy was the most common approach in emergency hiatal hernia repair, representing 64.09% of cases, followed by the open (30.38%) and robotic (5.53%) approaches. Concerning operative and postoperative outcomes, a trend toward better outcomes was also observed for MIS (laparoscopic or robotic) hiatal hernia repair compared to open surgery.

  • 3. Robotics in emergency gallbladder surgery

In 2016, Kubat et al. [ 22 ] published a retrospective case series of 76 elective and 74 urgent robotic single-site cholecystectomies. The authors reported good perioperative outcomes, concluding that this approach was safe and efficient. In 2019, Milone et al. [ 23 ] described a case series of 3 patients who underwent robotic cholecystectomy for acute cholecystitis. The reported perioperative outcomes were excellent and the authors recommended the introduction of robotics in emergency settings in order to validate their preliminary results.

  • 4. Robotics in emergency bariatric surgery

The first report of robotic emergency surgery after a complicated robotic biliopancreatic diversion with duodenal switch was published by Sudan et al. in 2012 [24]. The robotic approach was preferred over open surgery in the management of postoperative complications in order to preserve the benefits of the previous MIS approach. The authors highlighted how the robotic platform helped to identify and repair the damage. More recently, Robinson et al. [12] published a retrospective cohort study comparing emergent laparoscopic and robotic gastrojejunal ulcer repair. The authors showed that in-room-to-surgery-start time was significantly reduced in the robotic group. Additionally, perioperative outcomes favored the robotic approach, although the differences were not significant. However, robotic surgery was significantly more expensive than laparoscopy.

  • 5. Robotics in emergency abdominal wall surgery

In 2020, Kudsi et al. [25] published an article on the perioperative and mid-term outcomes of 34 patients who underwent emergency robotic ventral hernia repair with different techniques between 2013 and 2019. With a 20.5% rate of minor postoperative complications (Clavien-Dindo grades I–II), an 11.7% rate of major postoperative complications (Clavien-Dindo grades III–IV), and only one (2.9%) patient experiencing hernia recurrence, the authors concluded that robotic ventral hernia repair was associated with promising results and overall feasibility in emergency settings, to be confirmed in further long-term follow-up studies.

Evaluation of the quality of evidence

Five out of 10 selected studies were retrospective cohort studies and were evaluated according to the NOS [18]. Two studies received a score of 8/9 [9, 12], one study was graded 7/9 [20], and two studies had a score of 6/9 [22, 25] (Table 2). The remaining studies were evaluated according to the tool described by Murad et al. [16]. All studies received a score of 6/8 [10, 19, 21, 23, 24] (Table 3).

Table 2. Quality assessment for the selected retrospective cohort studies according to the Newcastle–Ottawa Scale (NOS)

Table 3. Quality assessment for the selected case series/case reports according to Murad et al. [16]

Position statements

Following a comprehensive literature review and the summary of current scientific evidence on the applications of robotics for emergency general surgery procedures, the following position statements (PS) were put forward. For each statement, the supporting literature, the level of evidence, and the strength of the consensus are indicated. The level of evidence is classified according to the GRADE system ( https://training.cochrane.org/introduction-grade ). For each statement, the consensus was assessed through a web survey (by means of a Google Form) open to all members of the steering committee and panel of experts and to the members of the Board of Governors of the WSES. If a statement reached < 70% of agreement, it was rediscussed via email or videoconference, modified, and resubmitted to the experts’ vote until a consensus was reached.
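
A minimal sketch of this voting rule is shown below, assuming hypothetical ballots: the number of voters and the vote splits are invented for illustration, and only the 70% threshold comes from the text.

```python
# Illustrative sketch of the consensus rule: statements with < 70% agreement
# are flagged for rediscussion and re-voting.
def agreement_rate(votes):
    """Return the percentage of 'agree' votes among all votes cast."""
    return 100.0 * sum(votes) / len(votes)

def triage_statements(ballots, threshold=70.0):
    """Split statements into those reaching consensus and those to rediscuss."""
    reached, rediscuss = {}, {}
    for statement, votes in ballots.items():
        rate = agreement_rate(votes)
        (reached if rate >= threshold else rediscuss)[statement] = round(rate, 1)
    return reached, rediscuss

# Hypothetical ballots: True = agree, False = disagree.
ballots = {
    "PS-1": [True] * 37,
    "PS-2": [True] * 35 + [False] * 2,
    "PS-3": [True] * 31 + [False] * 6,
}
reached, rediscuss = triage_statements(ballots)
print(reached)     # e.g. {'PS-1': 100.0, 'PS-2': 94.6, 'PS-3': 83.8}
print(rediscuss)   # statements below the 70% threshold, to be revised and re-voted
```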

The experts involved were also asked to describe their current practice. The great majority (82.6%) worked in a hospital equipped with a robotic surgical platform. However, access to the robotic surgical system for emergency procedures appeared to be limited: availability was reported as difficult (39.1%), restricted to daytime hours (13%), or absent altogether (43.5%).

PS-1. Robotic surgery in emergency settings is highly dependent on the surgeon’s experience and should only be performed in an appropriately equipped operating room with trained nursing staff.

Supporting literature

Robotic surgery requires a high level of technical expertise when compared to open or even laparoscopic surgery. Complete, specialized training is required to proficiently perform standardized surgical interventions with acceptable operative and postoperative outcomes [26]. In a recent article, Thomas et al. [27] analyzed the robotic colorectal surgery activity of a tertiary colorectal unit and concluded that success relies on a structured training curriculum, a dedicated surgical team, the institution's support, and many other variables beyond training at the robotic console itself. The adoption of the robot in the emergency setting does not change the rules of the game; rather, it reinforces the need for a safe and efficient strategy, from the standardization of robotic platform setup and docking to the execution of the surgical procedure. In order to successfully perform emergency cases with a robotic system, the on-call surgical team must be adequately trained in robotic technology. As reported by Robinson et al. [12] in a case series of 24 robotic emergency bariatric surgeries compared to 20 laparoscopic procedures, the surgeon who adopted the robotic approach was the same in all cases. This shows that the specific aptitude of the operator is fundamental, but it also highlights the need for a “can do” attitude from the entire surgical team [28]. The importance of this shared viewpoint is reinforced by Sudan et al. [24], who described the adoption of the robotic platform during nights and weekends so that the staff could become comfortable with the technology. In addition, proper teamwork and communication in such a challenging workspace are required [29], as is the completion of the learning curve by the entire surgical team [30]. The ideal operating room team in an emergency setting should consist of a first operating surgeon with extensive expertise in robotic surgery, an assisting surgeon familiar with the robotic technology, and a scrub nurse dedicated to the robotic program. All team members should work in a simulation environment before starting a robotic emergency surgery program.

Limitations linked to the adoption of robotic surgery in emergency settings relate to the time required for robotic setup and docking and to the accessibility of the robotic platform for emergency surgical units. Concerning the time issue, Robinson et al. [12] reported that, when the entire team is appropriately trained and prepared, the in-room-to-surgery-start time is reduced and has no significant impact on the overall duration of the scheduled emergency procedure. However, in this study, the authors highlighted that the majority of the staff were familiar with the robotic technology and that there were no limitations to its accessibility. This may not be the case for all emergency care units, and trained nursing staff may not always be available during night shifts. Good coordination between the hospital administration, the surgeons, and the staff is the key to organizing efficient and broad use of robotic technology, including in emergency surgery scenarios.

  • Level of evidence: case reports and case series → expert opinion
  • Strength of consensus (based on the survey evaluation): 100%

PS-2. Robotic surgery in emergency settings may be considered in highly selected clinically stable patients only.

Due to the very limited evidence in the literature and the consensus that robotic surgery requires a high level of expertise from the operating surgeon and the entire surgical team, particularly when performed in emergency settings, it should be considered for clinically stable patients only.

A recent review [31] on the anesthetic aspects of robotic surgery suggested that, as the surgical team gains confidence, even more complex operations or patients with comorbidities can be considered candidates for the robotic approach. A precise preoperative assessment based on a case-by-case evaluation and multidisciplinary decision-making are crucial to guarantee the choice of the most appropriate surgical strategy. Even if a comprehensive preoperative assessment is not always possible in emergency situations, careful patient selection is advised in order not to expose frail or unstable patients to longer emergency procedures or unnecessary complications related to the surgical technique.

Indeed, in unstable patients or patients with cardiopulmonary comorbidities, the adoption of MIS, with its need for carbon dioxide insufflation, may result in higher intra-abdominal pressure and hypercarbia, with metabolic and respiratory changes that may be deleterious [32]. Osagiede et al. [11] showed that the presence of metastatic disease and a higher number of comorbidities negatively influenced the adoption of MIS in emergency colorectal cancer surgery. Likewise, Arnold et al. [33] demonstrated that the adoption of MIS is confined to physiologically stable patients, while those with gross abdominal contamination or severe infectious processes are more likely to undergo open surgery. Despite this selection bias, when the results are corrected for preoperative risk factors, the adoption of laparoscopy is associated with a reduced wound infection rate, risk of death, and length of hospital stay.

Recently, emergency laparoscopy was evaluated as a valid approach to the treatment of perforated diverticulitis with generalized peritonitis [ 34 ], iatrogenic colonoscopy perforations [ 35 ], and perforated peptic ulcers [ 36 ]. In addition, in simple cases of adhesive small bowel obstruction, a laparoscopic approach may be beneficial despite the considerable risk of conversion to open surgery and the higher probability of bowel injuries [ 37 ]. In all of the abovementioned pathological states, the prerequisite for a safe minimally invasive treatment is the selection of a stable patient.

In terms of anesthetic management in emergency settings, the robotic approach can be considered an alternative to laparoscopy because it does not change the risk exposure, although it may be associated with longer operative times if the surgical team is not properly trained. Additional costs must also be considered. Further studies are necessary in order to clarify the future role of low-pressure pneumoperitoneum in emergency robotic surgery [38].

  • Strength of consensus: 94.6%

PS-3. Robotic surgery may be considered in challenging situations that would otherwise be foreseen as a reason for conversion to open surgery when operating laparoscopically.

The available literature suggests that the main potential advantages of robotic surgery over laparoscopy are related to suturing and dissection. In emergency robotic surgery, the published studies described the following procedural steps: hiatoplasties [20, 21], ventral suturing or mesh fixations [25], colonic suturing [19], duodenal stump suturing [24], strictureplasty [24], and dissection of an inflamed gallbladder [22, 23] or colon [9]. All of these tasks are particularly challenging in laparoscopic surgery and often lead to conversion to open surgery, which can itself be a source of postoperative complications [39, 40]. The technological advances of the robotic surgical platform, such as deep magnification, 3D stereoscopic vision, a stable field with elimination of physiological tremor, motion scaling, and improved ergonomics compared to laparoscopy, may help facilitate some difficult procedural steps and reduce the risk of conversion. However, this remains to be proven, especially for surgical interventions performed in emergency settings.

  • Level of evidence: case reports and case series → expert opinion.
  • Strength of consensus: 83.8% (based on the survey evaluation)

PS-4. In the near future, robotic surgery may offer the advantage of telementoring and telesurgery, which could help promote a safe and standardized application of robotics, including in low-volume centers or specific environments.

One of the limitations of laparoscopic surgery is the absence of telementoring during a difficult procedure. Even if communication systems dedicated to telementoring are available, laparoscopy offers no opportunity for direct control of the instruments by the mentor. In robotic surgery, in-person mentoring can be performed if a second robotic console is present in the hospital (e.g., telestration or tele-assisted surgery). In the near future, telementoring during elective and emergency robotic procedures can be expected. After the first transatlantic robot-assisted operation performed by Jacques Marescaux in 2001 [ 41 ], the surgical community awaited the routine use of telesurgery, which, however, was not feasible due to technical limitations. Today, thanks to the evolution of telecommunications, namely fifth-generation (5G) networks, there is a growing opportunity for a surgeon with proven expertise in the field to remotely operate on a distant patient [ 42 , 43 ]. A digital connection with a reference center that can evaluate the case, suggest a solution, and, if need be, manage the surgical situation represents a powerful tool, especially in emergency settings. Indeed, in emergency surgery, where greater experience improves outcomes, it would be beneficial to have a mentor observing and remotely participating in the intervention. Additionally, this technology could be applied to provide surgical care to rural areas, to establish surgical collaborations, and to mitigate the shortage of surgeons. This is also applicable to specific environments, such as a space station, where an emergency medical condition has to be managed by a trained member of the crew; close to a battlefield, where the surgeon may operate at a safe distance; or at the bottom of the ocean [ 44 ]. Telesurgery could well be an option in such situations.

However, these applications face limitations in terms of global network development, legal and ethical issues, costs, and cybersecurity. These issues are under examination. Despite the current skepticism, it is unquestionable that robotic surgery can play a pivotal role in the development of telemedicine and telesurgery [ 45 , 46 ].

  • Strength of consensus: 89.2% (based on the survey evaluation)

PS-5. The use of robotic surgery for unscheduled and urgent operations needs to be implemented without creating scheduling conflicts in the occupation of the operating room. Moreover, the increased costs need to be justified in the context of an efficient implementation of robotic surgery. Currently, the availability and accessibility of robotic platforms for emergency care surgical units are very limited.

A growing number of hospitals, mainly tertiary care and university-based hospitals, are acquiring a robotic surgical platform in order to satisfy daily requests and to showcase the most advanced technology. The robotic platform is often shared between different specialties and is consequently limited in availability for any single surgical field and not adaptable to changing schedules. In this perspective, several reports suggested that the use of the robotic surgical platform by experienced teams could be extended to night hours and even to weekends. This approach was called “after hours” by Sudan et al. [ 24 ], whose report aimed to highlight the potential of a robotic system that is available 24 hours a day, 7 days a week. The availability of the platform during the night shift could favor a more cost-effective use of the robotic system. However, this practice remains very limited and, as previously highlighted, a proper attitude and excellent training of the entire team are key to guaranteeing surgical proficiency and efficiently implementing robotic surgery for emergency procedures.

Concerns about the adoption of robotics for emergency surgery also persist in relation to the increased costs that a robotic surgical procedure implies, which need to be justified in the context of an efficient implementation of robotic surgery.

PS-6. The development of new modular robotic platforms may help broaden the applications of robotic surgery in emergency settings.

The surgical marketplace has recently been enriched with several different robotic platforms, either approved for human use, such as the CMR Versius (Cambridge Medical Robotics, Cambridge, UK) and the Distalmotion Dexter (Distalmotion, Epalinges, Switzerland), or undergoing approval, such as the Medtronic Hugo (Medtronic Inc., Minneapolis, USA). Most of them allow switching from a conventional laparoscopic setting to a robot-assisted one. This key point, which may be less relevant in elective surgery, should be carefully considered when approaching emergency surgery. In fact, when no specific port placement is required, the surgeon can simply choose a different approach depending on the procedural step and on his or her own ability. In addition, these robotic platforms offer improved vision with advanced near-infrared imaging, not routinely available in laparoscopic surgery. The objective evaluation of tissue anatomy or perfusion could limit surgical bias in emergency settings by reducing reliance on subjective judgment [ 47 , 48 ].

In the future, advances in surgical technologies will offer multiple new opportunities, which are currently under development, like hyperspectral imaging [ 49 ] and robotic single-port surgery [ 50 ]. Their potential applications and outcomes in emergency surgery need to be evaluated and updated once evidence is available.

  • Strength of consensus: 94.6% (based on the survey evaluation)

Research agenda

The experts recognized that there is a substantial lack of evidence to support the use of robotic surgery for emergency general surgery procedures. For this reason, a research agenda has been proposed.

  • Observational (cohort, case–control) and interventional studies are needed to investigate the applications and outcomes of robotic surgery in emergency settings and to compare them with those obtained with laparoscopy and open surgery.
  • Future studies should evaluate patient preferences considering patient-related outcome measures (PROMs), including pain evaluation and mid-/long-term quality of life.
  • Future studies should evaluate the cost-effectiveness of robotic surgery implementation in emergency settings at hospital level (e.g., scheduling conflict alleviation) and at the level of the healthcare system (e.g., length of hospital stay, productivity losses, reimbursement systems).
  • Future studies should evaluate the applicability of the robotic surgical platforms to perform telementoring and telesurgery, which are theoretically promising technologies to expand the applications of robotic surgery.

With the aim of enriching the available evidence and filling knowledge gaps, the WSES plans to launch an open registry on emergency robotic general surgery. The WSES calls for international participation, which is essential to gather sufficient data and obtain generalizable results.

The establishment of a dedicated registry is also essential to enable an in-depth analysis of this technique, in order to define the following: characteristics of the patients who are candidates for emergency robotic procedures, operative and postoperative outcomes, PROMs, minimum requirements in terms of personnel and equipment, cost-effectiveness, and ethical issues.

Hospitals that are currently equipped with a robotic surgical platform need to implement it efficiently. The role of robotic surgery for emergency procedures remains under investigation. However, its use is expanding despite the lack of evidence-based guidelines. In this scenario, the WSES wished to provide this position paper to the surgical community. This position paper summarizes the current evidence and practice and proposes consensus statements to be reevaluated and updated as the evidence in the supporting literature emerges. For now, the experts recommend a strict patient selection while approaching emergent general surgery procedures with robotics. However, an emergency setting should not be seen as a contraindication for robotic surgery if adequate training of the operating surgical team is available. When such prerequisites are met, robotic surgery can be considered safe and feasible, and surgical outcomes related to an MIS approach are expected. Finally, the application of the robotic surgical platform may grow with improvements in telementoring and telesurgery, which are particularly valuable in emergency settings.

Acknowledgements

The authors are grateful to Guy Temporal and Christopher Burel, professionals in medical English proofreading, for their valuable help.


Authors' contributions

GB, FM, and NdeA conducted the systematic review of the literature and wrote the first draft of the manuscript. All authors were involved in the statement evaluation and consensus process. All authors critically reviewed the manuscript and approved the final version. All authors read and approved the final manuscript.

Funding

No funding or resources were received for the preparation of this article. The authors received a WSES institutional waiver for this publication.

Availability of data and materials

Not applicable.

Declarations

P Pessaux declared that he received consulting fees from 3M and Integra and holds stock options in Virtualisurg. E Kouwenhoven is a proctor for Intuitive Surgical. M Sugrue received consulting fees from 3M, Smith and Nephew, and Novus Scientific. G Spinoglio received honoraria as a proctor for Intuitive Surgical. F Ris reports research funding from Quantgene and personal fees from Arthrex, Stryker, Hollister, Fresenius Kabi, and Distal Motion, outside the submitted work. E Espin-Basany received honoraria as a proctor for Intuitive Surgical. JS Khan is a proctor for Intuitive Surgical. All other authors have no conflicts of interest to declare in relation to the matter of this publication.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Nicola de’Angelis, Jim Khan, Francesco Marchegiani, Giorgio Bianchi, Filippo Aisoni, Daniele Alberti, Luca Ansaloni, Walter Biffl, Osvaldo Chiara, Graziano Ceccarelli, Federico Coccolini, Enrico Cicuttin, Mathieu D’Hondt, Salomone Di Saverio, Michele Diana, Belinda De Simone, Eloy Espin-Basany, Stefan Fichtner-Feigl, Jeffry Kashuk, Ewout Kouwenhoven, Ari Leppaniemi, Nassiba Beghdadi, Riccardo Memeo, Marco Milone, Ernest Moore, Andrew Peitzmann, Patrick Pessaux, Manos Pikoulis, Michele Pisano, Frederic Ris, Massimo Sartelli, Giuseppe Spinoglio, Michael Sugrue, Edward Tan, Paschalis Gavriilidis, Dieter Weber, Yoram Kluger, Fausto Catena.


Computer Science > Robotics

Title: Research on Robot Path Planning Based on Reinforcement Learning

Abstract: This project conducted research on robot path planning based on Visual SLAM. The main work is as follows: (1) Construction of a Visual SLAM system. Research was conducted on the basic architecture of Visual SLAM, and a Visual SLAM system capable of dense point cloud mapping was developed based on the ORB-SLAM3 system. (2) A map suitable for two-dimensional path planning is obtained through map conversion. This part converts the dense point cloud map obtained by the Visual SLAM system into an octomap and then performs a projection transformation to a grid map. The map conversion turns the dense point cloud map, which contains a large amount of redundant map information, into an extremely lightweight grid map suitable for path planning. (3) Research on path planning algorithms based on reinforcement learning. The project experimentally compared the Q-learning, DQN, and SARSA algorithms and found that DQN converges fastest and performs best in high-dimensional complex environments. The Visual SLAM system was experimentally verified in a simulation environment; the results obtained on an open-source dataset and a self-made dataset demonstrate the feasibility and effectiveness of the designed Visual SLAM system. The three reinforcement learning algorithms were also compared under the same experimental conditions to identify the optimal algorithm for those conditions.
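
To make the abstract's comparison concrete, here is a minimal sketch of tabular Q-learning on a toy occupancy grid of the kind obtained by projecting an octomap onto the ground plane. This is an illustration under assumed values, not code from the paper: the grid layout, rewards, and hyperparameters are invented for readability, and the same loop with a SARSA bootstrapping target, or with a small neural network replacing the Q table (DQN), yields the other two algorithms the abstract compares.

```python
# Minimal sketch (not the paper's code): tabular Q-learning for path planning
# on a 2-D occupancy grid of the kind produced by projecting an octomap.
# Grid values: 0 = free, 1 = occupied. Start/goal cells are illustrative.
import numpy as np

GRID = np.array([
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
])
START, GOAL = (0, 0), (4, 0)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Apply one move; bumping into walls or obstacles keeps the agent in place."""
    r, c = state[0] + action[0], state[1] + action[1]
    if not (0 <= r < GRID.shape[0] and 0 <= c < GRID.shape[1]) or GRID[r, c] == 1:
        return state, -1.0, False          # penalize invalid moves
    if (r, c) == GOAL:
        return (r, c), 10.0, True          # reward for reaching the goal
    return (r, c), -0.1, False             # small step cost encourages short paths

def q_learning(episodes=500, alpha=0.1, gamma=0.95, eps=0.2, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros(GRID.shape + (len(ACTIONS),))
    for _ in range(episodes):
        s, done = START, False
        while not done:
            a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[s]))
            s2, reward, done = step(s, ACTIONS[a])
            # Q-learning bootstraps on the greedy value of the next state (off-policy);
            # SARSA would instead use the value of the action actually taken next.
            Q[s][a] += alpha * (reward + gamma * np.max(Q[s2]) - Q[s][a])
            s = s2
    return Q

def greedy_path(Q, max_steps=50):
    """Roll out the greedy policy from START for a bounded number of steps."""
    path, s = [START], START
    for _ in range(max_steps):
        s, _, done = step(s, ACTIONS[int(np.argmax(Q[s]))])
        path.append(s)
        if done:
            break
    return path

if __name__ == "__main__":
    print(greedy_path(q_learning()))
```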


Sam Burden, Assoc Prof in UW ECE

Why can animals outrun robots?

It is obvious that animals outperform robots at running – really, any legged locomotion task involving significant momentum. But what causes this performance gap? Could it be better actuators? sensors? “compute”-ers? The answer to this question is important for determining the most fruitful lines of research for roboticists interested in closing the performance gap. This observation motivated my co-authors and me to write a review paper that definitively answers the question.

Spoiler: it’s not the parts that give animals the advantage – it must be something about how the parts assemble into the whole.

For historical accuracy, I should point out that the observation initially motivated two of the co-authors to take up the challenge: Tom Libby and Max Donelan. Max was on sabbatical in Berkeley in 2014, so he had time to think big thoughts. Tom was a PhD student and Director of the CiBER center at the time, so he had the motivation to go after weighty problems. These two intellectual heavyweights set themselves the monumental task of systematically comparing biological and engineering technologies not only at the level of every individual component but also at the subsystem and whole-system levels, across scales spanning ants and cockroaches to cheetahs and elephants. The original vision was to create a “datasheet” containing a comprehensive comparison of every known metric – plus definitions of new, better metrics and corresponding experiments to characterize them in biomechanical and electromechanical locomotors.

Aaaand .. it went about as well as could be expected.

Which is to say: it went sloooowly. And dauntingly. Overwhelmingly humblingly challengingly. Ego-crushing panic-inducing existentially dreadfully.

That may be overstating the sitch a lil bit. I’m definitely projecting more than a lil, as those were the feels I personally felt once I weaseled my way into the project years later. But you get the idea: it was a big project that required a tremendous amount from its workers.

But back to my weaseling. I was lucky enough to recruit Tom as a postdoc through a (now sadly defunct) Institute for Neuroengineering (called “UWIN” :) in 2017. The “AvM” project (“animals v. machines”) was still in the mix, but so were a half dozen other wonderfully fascinating projects – both old and new – that were on his plate. In the intervening years, the AvM project scope had been dialed back to “merely” a mega-review, rather than a mega-review-plus-half-dozen-PhD-theses as originally conceived. But it was still too much ground for a pair of researchers to cover.

To put a finer point on it, by this time the scope had been dialed down to focus on 5 subsystems deemed critical for running: power, frame, sensing, actuation, and control. So all that was needed was expertise in 5 different departments: energy systems, material science, sensory neuroscience, kinesiology / biomechanics, and control theory. I genuinely believe Tom has such astonishing breadth that he could have covered all this ground himself. But doing so to the degree of rigor sought by the team would require review of hundreds of papers to convince oneself that you weren't missing some critical detail that would invalidate the paper's whole premise.

Observing Tom grapple with all this from the perspective of a postdoc (co-)advisor, I made the very sage and selfless observation that what they were missing was … me! In particular, I felt I could offer two key benefits: I could handle the control subsystem, and I could lower their standards, er, help enforce a reasonable project scope and timeline.

Given that this was in 2019 and you, dear reader, are being regaled with this delightful tale in or after the year 2024, I clearly delivered on no more than half of my promises.

I think my real contribution to the project occurred two years of frustrated false starts later, when I declared that what we were really missing was … more experts! I had actually made this suggestion many times before. In fact, I'd suggested it to Tom before I joined the project, which is in all likelihood the only reason I have the privilege of writing this today. But – to my recollection – Max resisted bringing even more people in for the longest time. (Probably because he regretted the mistake he and Tom already made with bringing in someone new …)

But after Max became the BPK Chair, he had to acknowledge that something needed to change if we were ever to release this monster into the world. So after a little deliberation we agreed that what was missing was … our friends! We had decided that assigning one expert to each subsystem would be most effective. Between the three of us, Max had power covered, Tom could handily handle actuation, and I could muddle through control. So we were missing frame and sensing. Fortunately for us, our numero uno choices for each subsystem readily agreed to join the project, so we now had Kaushik Jayaram on frame and Simon Sponberg on sensing. An interesting historical note is that we all had a strong connection to the biomechanics group at Berkeley, and in particular to Bob Full, a towering figure in the integrative study of movement: Bob was Max's sabbatical host, the founder of the center Tom directed, the PhD advisor to Kaushik and Simon, and a cherished mentor and collaborator to me. This project is built on Bob's shoulders.

With this fresh injection of energy and renewed purpose, we made rapid progress … until we didn’t. Although we’d significantly decreased the workload on each of us individually, the mammoth scope of the endeavor continued to conflict with our many other obligations. It was just too hard to squeeze in thinking such big thoughts and making such sweeping claims among teaching, advising, grantwriting, service, and life.

It’s at this point where the story gets a lot less interesting and therefore quickly wraps up. The project had lain dormant for many months when I received an email notice about a Special Issue on Legged Locomotion in Science Robotics with a deadline 6 weeks out. We’d been targeting SciRob since getting positive feedback on a pre-submission inquiry 5 years prior (lol). And putting these ideas into a Special Issue that the community would be more likely to see was an opportunity we couldn’t afford to miss. I happened to have the good fortune of being on sabbatical at that moment, so I had the time in addition to the motivation to close. So we made it happen.

It’s amazing what a time constraint can do :)

It’s also amazing what a space constraint can do: the original conception was a 10,000-word, 200-cite monolith, but Science Robotics advises a svelte 5,000 words and 75 cites. Not wanting to antagonize the editor or reviewers, we brought our S-tier pithiness to the problem. I regard brevity as my gift, so it was a delightful challenge to boil the ideas down to their bones and serve up only the delicious marrow from with- .. this analogy is getting a little thin and macabre, so let’s move on …

I want to talk a bit informally about the ideas in the paper and give context for some of the decisions and considerations that went into the final product. I’ll work through the sections in order.

When considering System Performance, the original conception was to commit to a specific set of metrics and quantify the performance of a suite of robot and animal runners – to create a “datasheet” of sorts that the community could continue to build out over the years. However, there were two major problems with this idea: one scientific, and one sociological.

The scientific challenge is that the metrics we have for concepts like range, agility, and robustness are inadequate to capture what seems intuitively clear. One grand idea we tossed around was the conjecture that any metric for these concepts could be computed from the reachable set, that is, the set of states that can be achieved by a control system through an admissible input signal in a given distribution of environments and a given parameterization of designs. We ultimately abandoned mentioning this idea because, although potentially interesting, there's no currently practical way to compute this set (and Bellman tells us there can't be in general).
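
As a concrete aside (a toy example of mine, not anything from the paper), the snippet below brute-forces the reachable set of a discretized double integrator under a three-element input alphabet. The dynamics, input set, and horizon are arbitrary assumptions for illustration; the point is that even for this two-state linear system the candidate set grows roughly like |U|^k before deduplication, which hints at why no practical general-purpose computation exists for the far richer setting described above.

```python
# Toy illustration (not from the review): brute-force enumeration of the reachable
# set for a discretized double integrator x_{k+1} = A x_k + B u_k with a small,
# quantized input alphabet U. States are rounded so duplicates can be merged.
import itertools
import numpy as np

A = np.array([[1.0, 0.1],      # position += 0.1 * velocity
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])           # velocity += 0.1 * u
U = [-1.0, 0.0, 1.0]            # admissible (quantized) inputs

def reachable_set(x0, horizon, decimals=3):
    """States reachable from x0 in exactly `horizon` steps, deduplicated by rounding."""
    states = {tuple(np.round(x0, decimals))}
    for _ in range(horizon):
        nxt = set()
        for s, u in itertools.product(states, U):
            x = A @ np.array(s) + (B * u).ravel()
            nxt.add(tuple(np.round(x, decimals)))
        states = nxt
    return states

if __name__ == "__main__":
    x0 = np.array([0.0, 0.0])
    for k in (1, 3, 5, 8):
        print(f"horizon {k}: {len(reachable_set(x0, k))} distinct states")
```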

The sociological challenge is that we did not want to dunk on our colleagues, or get into endless debates about why we chose the specific metric we did and why their robot did so poorly with respect to it. We figured that no reasonable person would challenge the assertion that animals outperform robots in their range, agility, and robustness (however you define these terms) – what would surely be controversial is how existing robots stack up relative to one another. So we opted for the qualitative / coarsely-quantized comparison in the first Figure.

Regarding the central conclusion, that the difference in performance of parts does not explain the difference in performance of wholes, there are some caveats.

If you were building a cyborg to run as far as possible while completely power autonomous, using metabolism would give an order-of-magnitude advantage in range over gas power (nearly two orders of magnitude with respect to batteries). So along that solitary dimension, defined in that specific way, the difference in the part does explain the difference in the whole. But as soon as you allow that there may be gas stations or electrical outlets along the way, this advantage disappears.

The biological distribution of sensors throughout a body is quite compelling from an agility and robustness perspective: richly sensing terrain or other interactions with the environment could be a real boon for those dimensions of performance. But the “simulated cyborg” thought experiment from the Discussion convinces us that, even in the presence of perfect state information about the locomotor and environment, we still lack the tools to integrate that information to make a high-performing runner.

Finally, there are a couple of points to make about biological and engineered controllers. To make the most apples-to-apples comparison, we looked at natural and artificial spiking neural networks. Of course robot controllers can be implemented using conventional von Neumann architectures. But there are no proof-of-concept high-performing controllers in that paradigm to compare to those in animals, and the comparison is difficult to make at a component level: although we can pack upwards of hundreds of billions of transistors into a chip (comparable to the number of neurons in the human brain), it seems clear that a single transistor has less computational power than an individual neuron, and we are not aware of any rigorous attempt to quantify their relative computational power. Even the comparison between natural and artificial spiking neural networks is probably unfair in the sense that ANN dynamics are vastly simpler (e.g. piecewise-linear) than their biological counterparts (NNN?). But it’s the best comparison we can make at present, and including these factors would only tip the outcome even further in biology’s favor.

HOWever, even allowing that brains can, in principle, implement vastly more complex transformations than chips (at any scale – cockroaches have more neurons and synapses than the biggest neural ICs), it is important to remember that the brain is doing a whole lot more than locomotion. I keep returning to the example we cite in the paper (citation 90) of a parasitic wasp that lyses more than 7000 of its approximately 7400 neurons during pupation. The upshot is that there are autonomous flyers that can identify and infect hosts using fewer than 400 neurons !!! If you gave me 400 neurons, I think I’d struggle to invert a pendulum ..

My takeaway from this example is that we could be doing a lot more (robust and agile behavior) with a lot less (computational power) if only (a) we had the right bodies and (b) we knew what to do with them.

The Discussion covers a lot of ground that doesn’t need to be retrod here. But there is one point I want to dwell on a bit more, because I personally find it very interesting and compelling: the need for better metrics. This problem came up a few paragraphs ago when I discussed the challenge in defining what we mean by “agility” and “robustness”. One way to view the results of our paper is that we are focusing on the wrong metrics when we evaluate performance at the subsystem level, as these are evidently not predictive of system performance. What’s needed are metrics for the integration of multiple components or subsystems – and these metrics must capture something about the whole-system behavior we seek. The reason good metrics could be so powerful is that the endeavor of engineering is driven by “specs”, i.e. performance criteria. Once you tell me how my artifact is going to be evaluated, I can bring the powerful machinery of prototyping, optimization, learning, et al. to bear on squeezing that metric for everything it’s got. In the absence of metrics, engineering becomes art.

As a final note for the history books, I want to acknowledge where this paper fits in my intellectual and academic trajectory. I got my start in research in the summer before my first year of undergrad working with Eric Klavins, who began his career in robotics before switching to synbio. In fact, Eric got his PhD with a luminary in legged robotics, Dan Koditschek, and it was through this connection (certainly not merit) that I had the tremendous good fortune to do an REU at UPenn the summer after my sophomore year. The REU was my first exposure to the interdisciplinary world of legged locomotion, and I was completely enraptured. (Actually, for historical accuracy, I have to acknowledge that my very first exposure to this world was as a high school student when I was part of the inaugural cohort of students at the Summer Institute for Mathematics at the University of Washington, where the inimitable Tom Daniel gave an afternoon lecture on biolocomotion that included a very memorable demo on passive dynamic walking. So I suppose I was primed to become enraptured.)

Biolocomotion was the driver behind my applications to grad school and fellowships, and legged locomotion in particular ended up as the focus of my PhD thesis. My postdoc took me in a completely new direction – human-in-the-loop control – so when I started my faculty position there were two main areas of focus. Over time, legged locomotion has shrunk from the dominant theme at the beginning to now, where I have only one PhD student in this area, and they will graduate in six months. So this review represents the closure of a major chapter in my career – a very satisfying closure to be sure, but bittersweet nonetheless.

With that, I’ll stop – this commentary has already run almost half as many words as you’ll find in the paper :)

Robotics Alliance Project

2024 VEX Worlds Championship

  • Apr 26, 2024


The 2024 VEX Robotics World Championship takes place from April 25 – May 3, 2024 at the Kay Bailey Hutchison Convention Center in Dallas, Texas. Over the course of nine days there will be continual robotics competitions taking place.

April 25 – 27, 2024 – VRC HS
April 28 – 30, 2024 – VRC MS / JROTC / VEX U
May 1 – 3, 2024 – VIQRC ES & MS

  • Watch the whole event live on VEX TV
  • More information about the event
  • More information about the REC Foundation



Got tinnitus? A device that tickles the tongue helps this musician find relief


Allison Aubrey


After using the Lenire device for an hour each day for 12 weeks, Victoria Banks says her tinnitus is "barely noticeable." (Image credit: David Petrelli/Victoria Banks)


Imagine if every moment is filled with a high-pitched buzz or ring that you can't turn off.

More than 25 million adults in the U.S. have a condition called tinnitus, according to the American Tinnitus Association. It can be stressful, even panic-inducing, and difficult to manage. Dozens of factors can contribute to the onset of tinnitus, including hearing loss, exposure to loud noise or a viral illness.

There's no cure, but there are a range of strategies to reduce the symptoms and make it less bothersome, including hearing aids, mindfulness therapy, and one newer option – a device approved by the FDA to treat tinnitus using electrical stimulation of the tongue.

The device has helped Victoria Banks, a singer and songwriter in Nashville, Tenn., who developed tinnitus about three years ago.

"The noise in my head felt like a bunch of cicadas," Banks says. "It was terrifying." The buzz made it difficult for her to sing and listen to music. "It can be absolutely debilitating," she says.


Banks tried taking dietary supplements, but those didn't help. She also stepped up exercise, but that didn't bring relief either. Then she read about a device called Lenire, which was approved by the FDA in March 2023. It includes a plastic mouthpiece with stainless steel electrodes that electrically stimulate the tongue. It is the first device of its kind to be approved for tinnitus.

"This had worked for other people, and I thought I'm willing to try anything at this point," Banks recalls.

She sought out audiologist Brian Fligor, who treats severe cases of tinnitus in the Boston area. Fligor was impressed by the results of a clinical trial that found 84% of participants who tried Lenire experienced a significant reduction in symptoms. He became one of the first providers in the U.S. to use the device with his patients. Fligor also served on an advisory panel assembled by the company that developed it.

"A good candidate for this device is somebody who's had tinnitus for at least three months," Fligor says, emphasizing that people should be evaluated first to make sure there's not an underlying medical issue.

Tinnitus often accompanies hearing loss, but Victoria Banks' hearing was fine and she had no other medical issue, so she was a good candidate.

Banks used the device for an hour each day for 12 weeks. During the hour-long sessions, the electrical stimulation "tickles" the tongue, she says. In addition, the device includes a set of headphones that play a series of tones and ocean-wave sounds.

The device works, in part, by shifting the brain's attention away from the buzz. We're wired to focus on important information coming into our brains, Fligor says. Think of it as a spotlight at a show pointed at the most important thing on the stage. "When you have tinnitus and you're frustrated or angry or scared by it, that spotlight gets really strong and focused on the tinnitus," Fligor says.

"It's the combination of what you're feeling through the nerves in your tongue and what you're hearing through your ears happening in synchrony that causes the spotlight in your brain to not be so stuck on the tinnitus," Fligor explains.


A clinical trial found 84% of people who used the device experienced a significant reduction in symptoms. (Image credit: Brian Fligor)


"It unsticks your spotlight" and helps desensitize people to the perceived noise that their tinnitus creates, he says.

Banks says the ringing in her ears did not completely disappear, but now it's barely noticeable on most days.

"It's kind of like if I lived near a waterfall and the waterfall was constantly going," she says. Over time, the waterfall sound fades out of consciousness.

"My brain is now focusing on other things," and the buzz is no longer so distracting. She's back to listening to music, writing music, and performing music." I'm doing all of those things," she says.

When the buzz comes back into focus, Banks says a refresher session with the device helps.

A clinical trial found that 84% of people who tried Lenire saw significant improvements in their condition. To measure changes, the participants took a questionnaire that asked them to rate how much tinnitus was impacting their sleep, sense of control, feelings of well-being and quality of life. After 12 weeks of using the device, participants improved by an average of 14 points.

"Where this device fits into the big picture, is that it's not a cure-all, but it's quickly become my go-to," for people who do not respond to other ways of managing tinnitus, Fligor says.

One downside is the cost. Banks paid about $4,000 for the Lenire device, and insurance doesn't cover it. She put the expense on her credit card and paid it off gradually.

Fligor hopes that as the evidence of its effectiveness accumulates, insurers will begin to cover it. Despite the cost, more than 80% of participants in the clinical trial said they would recommend the device to a friend with tinnitus.

But it's unclear how long the benefits last. Clinical trials have only evaluated Lenire over a one-year period. "How durable are the effects? We don't really know yet," says audiologist Marc Fagelson, the scientific advisory committee chair of the American Tinnitus Association. He says research is promising but there's still more to learn.

Fagelson says the first step he takes with his patients is an evaluation for hearing loss. Research shows that hearing aids can be an effective treatment for tinnitus among people who have both tinnitus and hearing loss, which is much more common among older adults. An estimated one-third of adults 65 years of age and older who have hearing loss also have tinnitus.

"We do see a lot of patients, even with very mild loss, who benefit from hearing aids," Fagelson says, but in his experience it's about 50-50 in terms of improving tinnitus. Often, he says people with tinnitus need to explore options beyond hearing aids.

Bruce Freeman, a scientist at the University of Pittsburgh Medical Center, says he's benefitted from both hearing aids and Lenire. He was fitted for the device in Ireland, where it was developed, before it was available in the U.S.

Freeman agrees that the ringing never truly disappears, but the device has helped him manage the condition. He describes the sounds that play through the device headphones as very calming and "almost hypnotic"; combined with the tongue vibration, they've helped desensitize him to the ring.

Freeman – who is a research scientist – says he's impressed with the results of the research, including a study published in the Nature Portfolio journal Scientific Reports that points to significant improvements among clinical trial participants with tinnitus.

Freeman experienced a return of his symptoms when he stopped using the device. "Without it the tinnitus got worse," he says. Then, when he resumed use, it improved.

Freeman believes his long-term exposure to noisy instruments in his research laboratory may have played a role in his condition, and also a neck injury from a bicycle accident that fractured his vertebra. "All of those things converged," he says.

Freeman has developed several habits that help keep the high-pitched ring out of his consciousness and maintain good health. "One thing that does wonders is swimming," he says, pointing to the swooshing sound of water in his ears. "That's a form of mindfulness," he explains.

When it comes to the ring of tinnitus, "it comes and goes," Freeman says. For now, it has subsided into the background, he told me with a sense of relief. "The last two years have been great," he says – a combination of the device, hearing aids and the mindfulness that comes from a swim.

This story was edited by Jane Greenhalgh




COMMENTS

  1. The International Journal of Robotics Research: Sage Journals

    International Journal of Robotics Research (IJRR) was the first scholarly publication on robotics research; it continues to supply scientists and students in robotics and related fields - artificial intelligence, applied mathematics, computer science, electrical and mechanical engineering - with timely, multidisciplinary material... This journal is peer-reviewed and is a member of the ...

  2. Swarm Robotics: Past, Present, and Future [Point of View]

    Swarm robotics deals with the design, construction, and deployment of large groups of robots that coordinate and cooperatively solve a problem or perform a task. It takes inspiration from natural self-organizing systems, such as social insects, fish schools, or bird flocks, characterized by emergent collective behavior based on simple local interaction rules [1], [2]. Typically, swarm robotics ...

  3. (PDF) ARTIFICIAL INTELLIGENCE IN ROBOTICS: FROM ...

This research paper explores the integration of artificial intelligence (AI) in robotics, specifically focusing on the transition from automation to autonomous systems. The paper provides an ...

  4. Science Robotics

    ONLINE COVER Special Issue on Legged Robots. Developing legged robots capable of complex motor skills is a major challenge for roboticists. Haarnoja et al. used deep reinforcement learning to train miniature humanoid robots, Robotis OP3, to play a game of one-versus-one soccer. The robots were capable of exhibiting not only agile movements, such as walking, kicking the ball, and rapid recovery ...

  5. Growth in AI and robotics research accelerates

    Five countries — the United States, China, the United Kingdom, Germany and France — had the highest AI and robotics Share in the Nature Index from 2015 to 2021, with the United States leading ...

  6. T-RO

    The IEEE Transactions on Robotics (T-RO) publishes research papers that represent major advances in the state-of-the-art in all areas of robotics. The Transactions welcomes original papers that report on any combination of theory, design, experimental studies, analysis, algorithms, and integration and application case studies involving all aspects of robotics.

  7. Augmented Reality Meets Artificial Intelligence in Robotics: A

    Concurrently, augmented reality (AR) applications are providing solutions to a myriad of robotics applications, such as demystifying robot motion intent and supporting intuitive control and feedback. In this paper, research papers combining the potentials of AI and AR in robotics over the last decade are presented and systematically reviewed.

  8. Table of Contents 2021

Design and Performance Simulation of Computer Control System for Automatic Monitoring of Upper Computer Communication Operation State. Li Yang, Huitao Zhang. 06 Dec 2021. Journal of Robotics, Volume 2021, Article ID 4943316, Research Article.

  9. Robotics: Science and Systems (RSS) 2020

The International Journal of Robotics Research 2021, Vol. 40(12–14), 1329–1330. This special issue features papers presented at the Robotics: Science and Systems (RSS) 2020 conference. This conference was the first RSS held virtually due to the COVID-19 pandemic and was held from 12 to 16 July 2020.

  10. Articles

Space Robotics (Y Gao, Section Editor). Open access, 19 June 2021, Pages 251–263. Part of Topical Collection on Space Robotics. Current Robotics Reports aims to offer expert review articles on the most significant recent developments in the field of robotics. By providing clear, ...

  11. Robotics

    Feature papers represent the most advanced research with significant potential for high impact in the field. A Feature Paper should be a substantial original Article that involves several techniques or approaches, provides an outlook for future research directions and describes possible research applications. ... Robotics 2021, 10(1), 51 ...

  12. Home

    Overview. Current Robotics Reports aims to offer expert review articles on the most significant recent developments in the field of robotics. By providing clear, insightful, balanced contributions, the journal intends to serve all those who use robotic technologies in medicine, defense, service, and agriculture.

  13. A decade retrospective of medical robotics research from 2010 to 2020

    The number of papers on medical robotics has grown exponentially from less than 10 published in 1990 to more than 5200 in 2020. Consequently, the fraction of papers published during the past decade is more than 80% of the total. These publications span the entire range of the research pipeline.

  14. Call for Papers · Robotics: Science and Systems

    It's a pleasure to invite you to submit your best research to the 2021 Robotics: Science and Systems Conference, a virtual meeting connecting researchers working on all aspects of robotics including scientific foundations, mechanisms, algorithms, applications, and analysis of robotic systems. The paper submission deadline is March 1st.

  15. ASV station keeping under wind disturbances using neural network

    The Journal of Field Robotics is an applied robotics journal publishing theoretical and practical papers on robotics used in real-world applications. Abstract Station keeping is an essential maneuver for autonomous surface vehicles (ASVs), mainly when used in confined spaces, to carry out surveys that require the ASV to keep its position or in c...

  16. Robotics

    Feature papers represent the most advanced research with significant potential for high impact in the field. A Feature Paper should be a substantial original Article that involves several techniques or approaches, provides an outlook for future research directions and describes possible research applications. ... Cosmin Copot and Steve ...

  17. Robotics and Autonomous Systems

    About the journal. Robotics and Autonomous Systems will carry articles describing fundamental developments in the field of robotics, with special emphasis on autonomous systems. An important goal of this journal is to extend the state of the art in both symbolic and sensory based robot control and learning in the context of autonomous systems.

  18. Towards next generation digital twin in robotics: Trends, scopes

    Given the importance of this research, the paper seeks to explore the trends of DT incorporated robotics in both high and low research saturated robotic domains in order to discover the gap, rising and dying trends, potential scopes, challenges, and viable solutions. ... Handling and Industrial Robotics 2021. 2022. Web service for point cloud ...

  19. (PDF) Research Paper on Robotics-New Era

All content in this area was uploaded by Sachin Shankar Bhosale on Jun 17, 2021. ... RESEARCH PAPER ON ROBOTICS-NEW ERA. Mrs. Ashwini Sheth, Mr. Sachin Bhosale, Mr. Muabid Burondkar.

  20. Artificial Intelligence and Robotics: Impact & Open issues of

    This paper shows the significant blend of Artificial Intelligence and robotics which transform entire industries, technological improvement of robotics application & utilization. ... in any organizational design give impact on overall economy and infrastructure provide a wider direction for further research on Robotics and IoT are two terms ...

  21. (PDF) The future of Robotics Technology

    Abstract. In the last decade the robotics industry has created millions of additional jobs led by consumer electronics and the electric vehicle industry, and by 2020, robotics will be a $100 ...

  22. Smart Agriculture and Agricultural Robotics: Review and ...

    The analysis of smart agriculture topic with its subtopics shows that the annual number of articles published in the research topics of ARSA and AISA increased, on average, by 150% and 113% respectively, and that about 30% and 20% of the publication in smart agriculture during 2021 were related to robotics or AI.

  23. Robotic surgery in emergency setting: 2021 WSES position paper

    This position paper summarizes the current evidence and practice and proposes consensus statements to be reevaluated and updated as the evidence in the supporting literature emerges. For now, the experts recommend a strict patient selection while approaching emergent general surgery procedures with robotics.

  24. Research on Robot Path Planning Based on Reinforcement Learning

    This project has conducted research on robot path planning based on Visual SLAM. The main work of this project is as follows: (1) Construction of Visual SLAM system. Research has been conducted on the basic architecture of Visual SLAM. A Visual SLAM system is developed based on ORB-SLAM3 system, which can conduct dense point cloud mapping. (2) The map suitable for two-dimensional path planning ...

  25. Why can animals outrun robots?

    The answer to this question is important for determining the most fruitful lines of research for roboticists interested in closing the performance gap. This observation motivated my co-authors and I to write a review paper that definitively answers the question. It is obvious that animals outperform robots at running - really, any legged ...

  26. 2024 VEX Worlds Championship

    The 2024 VEX Robotics World Championship takes place from April 25 - May 3, 2024 at the Kay Bailey Hutchison Convention Center in Dallas, Texas. Over the course of nine days there will be continual robotics competitions taking place. April 25 - 27, 2024 - VRC HS April 28 - 30, 2024 - VRC MS / JROTC/VEX U May 1 - 3, 2024 - VIQRC ES ...

  27. An FDA approved device offers a new treatment for ringing in the ears

    More than 25 million adults in the U.S. have tinnitus, a condition that causes ringing or buzzing in the ears. An FDA approved device that stimulates the tongue, helped 84% of people who tried it.