500 research papers and projects in robotics – Free Download


The recent history of robotics is full of fascinating moments that accelerated rapid technological advances in artificial intelligence, automation, engineering, energy storage, and machine learning. These advances have transformed the capabilities of robots and their ability to take over tasks once carried out by humans in factories, hospitals, farms, and beyond.

These technological advances don't occur overnight; they require years of research and development to solve some of the biggest engineering challenges in navigation, autonomy, AI, and machine learning, so that robots become safer and more efficient in real-world situations. Universities, institutes, and companies across the world are working tirelessly in various research areas to make this a reality.

In this post, we have listed 500+ recent research papers and projects for those who are interested in robotics. These free, downloadable research papers can shed light on some of the complex areas in robotics, such as navigation, motion planning, robotic interactions, obstacle avoidance, actuators, machine learning, computer vision, artificial intelligence, collaborative robotics, nano robotics, social robotics, cloud robotics, swarm robotics, sensors, mobile robotics, humanoid robots, service robots, automation, and autonomous systems. Feel free to download. Share your own research papers with us to be added to this list. You can also ask a professional academic writer from CustomWritings – research paper writing service – to assist you online on any related topic.

Navigation and Motion Planning

  • Robotics Navigation Using MPEG CDVS
  • Design, Manufacturing and Test of a High-Precision MEMS Inclination Sensor for Navigation Systems in Robot-assisted Surgery
  • Motion Control of a Three Active Wheeled Mobile Robot and Collision-Free Human Following Navigation in Outdoor Environment
  • One Point Perspective Vanishing Point Estimation for Mobile Robot Vision Based Navigation System
  • Application of Ant Colony Optimization for finding the Navigational path of Mobile Robot-A Review
  • Robot Navigation Using a Brain-Computer Interface
  • Path Generation for Robot Navigation using a Single Ceiling Mounted Camera
  • Exact Robot Navigation Using Power Diagrams
  • Learning Socially Normative Robot Navigation Behaviors with Bayesian Inverse Reinforcement Learning
  • Pipelined, High Speed, Low Power Neural Network Controller for Autonomous Mobile Robot Navigation Using FPGA
  • Proxemics models for human-aware navigation in robotics: Grounding interaction and personal space models in experimental data from psychology
  • Optimality and limit behavior of the ML estimator for Multi-Robot Localization via GPS and Relative Measurements
  • Aerial Robotics: Compact groups of cooperating micro aerial vehicles in clustered GPS denied environment
  • Disordered and Multiple Destinations Path Planning Methods for Mobile Robot in Dynamic Environment
  • Integrating Modeling and Knowledge Representation for Combined Task, Resource and Path Planning in Robotics
  • Path Planning With Kinematic Constraints For Robot Groups
  • Robot motion planning for pouring liquids
  • Implan: Scalable Incremental Motion Planning for Multi-Robot Systems
  • Equilibrium Motion Planning of Humanoid Climbing Robot under Constraints
  • POMDP-lite for Robust Robot Planning under Uncertainty
  • The RoboCup Logistics League as a Benchmark for Planning in Robotics
  • Planning-aware communication for decentralised multi-robot coordination
  • Combined Force and Position Controller Based on Inverse Dynamics: Application to Cooperative Robotics
  • A Four Degree of Freedom Robot for Positioning Ultrasound Imaging Catheters
  • The Role of Robotics in Ovarian Transposition
  • An Implementation on 3D Positioning Aquatic Robot

Robotic Interactions

  • On Indexicality, Direction of Arrival of Sound Sources and Human-Robot Interaction
  • OpenWoZ: A Runtime-Configurable Wizard-of-Oz Framework for Human-Robot Interaction
  • Privacy in Human-Robot Interaction: Survey and Future Work
  • An Analysis Of Teacher-Student Interaction Patterns In A Robotics Course For Kindergarten Children: A Pilot Study
  • Human Robotics Interaction (HRI) based Analysis–using DMT
  • A Cautionary Note on Personality (Extroversion) Assessments in Child-Robot Interaction Studies
  • Interaction as a bridge between cognition and robotics
  • State Representation Learning in Robotics: Using Prior Knowledge about Physical Interaction
  • Eliciting Conversation in Robot Vehicle Interactions
  • A Comparison of Avatar, Video, and Robot-Mediated Interaction on Users’ Trust in Expertise
  • Exercising with Baxter: Design and Evaluation of Assistive Social-Physical Human-Robot Interaction
  • Using Narrative to Enable Longitudinal Human-Robot Interactions
  • Computational Analysis of Affect, Personality, and Engagement in Human-Robot Interactions
  • Human-robot interactions: A psychological perspective
  • Gait of Quadruped Robot and Interaction Based on Gesture Recognition
  • Graphically representing child-robot interaction proxemics
  • Interactive Demo of the SOPHIA Project: Combining Soft Robotics and Brain-Machine Interfaces for Stroke Rehabilitation
  • Interactive Robotics Workshop
  • Activating Robotics Manipulator using Eye Movements
  • Wireless Controlled Robot Movement System Designed using Microcontroller
  • Gesture Controlled Robot using LabVIEW
  • RoGuE: Robot Gesture Engine

Obstacle Avoidance

  • Low Cost Obstacle Avoidance Robot with Logic Gates and Gate Delay Calculations
  • Advanced Fuzzy Potential Field Method for Mobile Robot Obstacle Avoidance
  • Controlling Obstacle Avoiding And Live Streaming Robot Using Chronos Watch
  • Movement Of The Space Robot Manipulator In Environment With Obstacles
  • Assis-Cicerone Robot With Visual Obstacle Avoidance Using a Stack of Odometric Data.
  • Obstacle detection and avoidance methods for autonomous mobile robot
  • Moving Domestic Robotics Control Method Based on Creating and Sharing Maps with Shortest Path Findings and Obstacle Avoidance
  • Control of the Differentially-driven Mobile Robot in the Environment with a Non-Convex Star-Shape Obstacle: Simulation and Experiments

Machine Learning

  • A survey of typical machine learning based motion planning algorithms for robotics
  • Linear Algebra for Computer Vision, Robotics, and Machine Learning
  • Applying Radical Constructivism to Machine Learning: A Pilot Study in Assistive Robotics
  • Machine Learning for Robotics and Computer Vision: Sampling methods and Variational Inference
  • Rule-Based Supervisor and Checker of Deep Learning Perception Modules in Cognitive Robotics
  • The Limits and Potentials of Deep Learning for Robotics
  • Autonomous Robotics and Deep Learning
  • A Unified Knowledge Representation System for Robot Learning and Dialogue

Computer Vision

  • Computer Vision Based Chess Playing Capabilities for the Baxter Humanoid Robot
  • Non-Euclidean manifolds in robotics and computer vision: why should we care?
  • Topology of singular surfaces, applications to visualization and robotics
  • On the Impact of Learning Hierarchical Representations for Visual Recognition in Robotics
  • Focused Online Visual-Motor Coordination for a Dual-Arm Robot Manipulator
  • Towards Practical Visual Servoing in Robotics
  • Visual Pattern Recognition In Robotics
  • Automated Visual Inspection: Position Identification of Object for Industrial Robot Application based on Color and Shape
  • Automated Creation of Augmented Reality Visualizations for Autonomous Robot Systems
  • Implementation of Efficient Night Vision Robot on Arduino and FPGA Board

Artificial Intelligence

  • On the Relationship between Robotics and Artificial Intelligence
  • Artificial Spatial Cognition for Robotics and Mobile Systems: Brief Survey and Current Open Challenges
  • Artificial Intelligence, Robotics and Its Impact on Society
  • The Effects of Artificial Intelligence and Robotics on Business and Employment: Evidence from a survey on Japanese firms
  • Artificially Intelligent Maze Solver Robot
  • Artificial intelligence, Cognitive Robotics and Human Psychology
  • Minecraft as an Experimental World for AI in Robotics
  • Impact of Robotics, RPA and AI on the insurance industry: challenges and opportunities

Probabilistic Programming

  • On the use of probabilistic relational affordance models for sequential manipulation tasks in robotics
  • Exploration strategies in developmental robotics: a unified probabilistic framework
  • Probabilistic Programming for Robotics

Actuators

  • New design of a soft-robotics wearable elbow exoskeleton based on Shape Memory Alloy wires actuators
  • Design of a Modular Series Elastic Upgrade to a Robotics Actuator
  • Applications of Compliant Actuators to Wearing Robotics for Lower Extremity
  • Review of Development Stages in the Conceptual Design of an Electro-Hydrostatic Actuator for Robotics
  • Fluid electrodes for submersible robotics based on dielectric elastomer actuators
  • Cascaded Control Of Compliant Actuators In Friendly Robotics

Collaborative Robotics

  • Interpretable Models for Fast Activity Recognition and Anomaly Explanation During Collaborative Robotics Tasks
  • Collaborative Work Management Using SWARM Robotics
  • Collaborative Robotics: Assessment of Safety Functions and Feedback from Workers, Users and Integrators in Quebec
  • Accessibility, Making and Tactile Robotics: Facilitating Collaborative Learning and Computational Thinking for Learners with Visual Impairments
  • Trajectory Adaptation of Robot Arms for Head-pose Dependent Assistive Tasks

Mobile Robotics

  • Experimental research of proximity sensors for application in mobile robotics in greenhouse environment.
  • Multispectral Texture Mapping for Telepresence and Autonomous Mobile Robotics
  • A Smart Mobile Robot to Detect Abnormalities in Hazardous Zones
  • Simulation of nonlinear filter based localization for indoor mobile robot
  • Integrating control science in a practical mobile robotics course
  • Experimental Study of the Performance of the Kinect Range Camera for Mobile Robotics
  • Planification of an Optimal Path for a Mobile Robot Using Neural Networks
  • Security of Networking Control System in Mobile Robotics (NCSMR)
  • Vector Maps in Mobile Robotics
  • An Embedded System for a Bluetooth Controlled Mobile Robot Based on the ATmega8535 Microcontroller
  • Experiments of NDT-Based Localization for a Mobile Robot Moving Near Buildings
  • Hardware and Software Co-design for the EKF Applied to the Mobile Robotics Localization Problem
  • Design of a SESLogo Program for Mobile Robot Control
  • An Improved Ekf-Slam Algorithm For Mobile Robot
  • Intelligent Vehicles at the Mobile Robotics Laboratory, University of São Paulo, Brazil [ITS Research Lab]
  • Introduction to Mobile Robotics
  • Miniature Piezoelectric Mobile Robot driven by Standing Wave
  • Mobile Robot Floor Classification using Motor Current and Accelerometer Measurements

Sensors

  • Sensors for Robotics 2015
  • An Automated Sensing System for Steel Bridge Inspection Using GMR Sensor Array and Magnetic Wheels of Climbing Robot
  • Sensors for Next-Generation Robotics
  • Multi-Robot Sensor Relocation To Enhance Connectivity In A WSN
  • Automated Irrigation System Using Robotics and Sensors
  • Design Of Control System For Articulated Robot Using Leap Motion Sensor
  • Automated configuration of vision sensor systems for industrial robotics

Nano robotics

  • Light Robotics: an all-optical nano-and micro-toolbox
  • Light-driven Nano-robotics
  • Light Robotics: a new technology and its applications
  • Light Robotics: Aiming towards all-optical nano-robotics
  • NanoBiophotonics Applications of Light Robotics
  • System Level Analysis for a Locomotive Inspection Robot with Integrated Microsystems
  • High-Dimensional Robotics at the Nanoscale Kino-Geometric Modeling of Proteins and Molecular Mechanisms
  • A Study Of Insect Brain Using Robotics And Neural Networks

Social Robotics

  • Integrative Social Robotics Hands-On
  • ProCRob Architecture for Personalized Social Robotics
  • Definitions and Metrics for Social Robotics, along with some Experience Gained in this Domain
  • Transmedia Choreography: Integrating Multimodal Video Annotation in the Creative Process of a Social Robotics Performance Piece
  • Co-designing with children: An approach to social robot design
  • Toward Social Cognition in Robotics: Extracting and Internalizing Meaning from Perception
  • Human Centered Robotics: Designing Valuable Experiences for Social Robots
  • Preliminary system and hardware design for Quori, a low-cost, modular, socially interactive robot
  • Socially assistive robotics: Human augmentation versus automation
  • Tega: A Social Robot

Humanoid robot

  • Compliance Control and Human-Robot Interaction – International Journal of Humanoid Robotics
  • The Design of Humanoid Robot Using C# Interface on Bluetooth Communication
  • An Integrated System to approach the Programming of Humanoid Robotics
  • Humanoid Robot Slope Gait Planning Based on Zero Moment Point Principle
  • Literature Review Real-Time Vision-Based Learning for Human-Robot Interaction in Social Humanoid Robotics
  • The Roasted Tomato Challenge for a Humanoid Robot
  • Remotely teleoperating a humanoid robot to perform fine motor tasks with virtual reality

Cloud Robotics

  • CR3A: Cloud Robotics Algorithms Allocation Analysis
  • Cloud Computing and Robotics for Disaster Management
  • ABHIKAHA: Aerial Collision Avoidance in Quadcopter using Cloud Robotics
  • The Evolution Of Cloud Robotics: A Survey
  • Sliding Autonomy in Cloud Robotics Services for Smart City Applications
  • CORE: A Cloud-based Object Recognition Engine for Robotics
  • A Software Product Line Approach for Configuring Cloud Robotics Applications
  • Cloud robotics and automation: A survey of related work
  • ROCHAS: Robotics and Cloud-assisted Healthcare System for Empty Nester

Swarm Robotics

  • Evolution of Task Partitioning in Swarm Robotics
  • GESwarm: Grammatical Evolution for the Automatic Synthesis of Collective Behaviors in Swarm Robotics
  • A Concise Chronological Reassess Of Different Swarm Intelligence Methods With Multi Robotics Approach
  • The Swarm/Potential Model: Modeling Robotics Swarms with Measure-valued Recursions Associated to Random Finite Sets
  • The TAM: Abstracting complex tasks in swarm robotics research
  • Task Allocation in Foraging Robot Swarms: The Role of Information Sharing
  • Robotics on the Battlefield Part II
  • Implementation Of Load Sharing Using Swarm Robotics
  • An Investigation of Environmental Influence on the Benefits of Adaptation Mechanisms in Evolutionary Swarm Robotics

Soft Robotics

  • Soft Robotics: The Next Generation of Intelligent Machines
  • Soft Robotics: Transferring Theory to Application, "Soft Components for Soft Robots"
  • Advances in Soft Computing, Intelligent Robotics and Control
  • The BRICS Component Model: A Model-Based Development Paradigm For Complex Robotics Software Systems
  • Soft Mechatronics for Human-Friendly Robotics
  • Seminar Soft-Robotics
  • Special Issue on Open Source Software-Supported Robotics Research.
  • Soft Brain-Machine Interfaces for Assistive Robotics: A Novel Control Approach
  • Towards a Robot Hardware Abstraction Layer (R-HAL) Leveraging the XBot Software Framework

Service Robotics

  • Fundamental Theories and Practice in Service Robotics
  • Natural Language Processing in Domestic Service Robotics
  • Localization and Mapping for Service Robotics Applications
  • Designing of Service Robot for Home Automation-Implementation
  • Benchmarking Speech Understanding in Service Robotics
  • The Cognitive Service Robotics Apartment
  • Planning with Task-oriented Knowledge Acquisition for A Service Robot

Cognitive Robotics
  • Meta-Morphogenesis theory as background to Cognitive Robotics and Developmental Cognitive Science
  • Experience-based Learning for Bayesian Cognitive Robotics
  • Weakly supervised strategies for natural object recognition in robotics
  • Robotics-Derived Requirements for the Internet of Things in the 5G Context
  • A Comparison of Modern Synthetic Character Design and Cognitive Robotics Architecture with the Human Nervous System
  • PREGO: An Action Language for Belief-Based Cognitive Robotics in Continuous Domains
  • The Role of Intention in Cognitive Robotics
  • On Cognitive Learning Methodologies for Cognitive Robotics
  • Relational Enhancement: A Framework for Evaluating and Designing Human-Robot Relationships
  • A Fog Robotics Approach to Deep Robot Learning: Application to Object Recognition and Grasp Planning in Surface Decluttering
  • Spatial Cognition in Robotics
  • IOT Based Gesture Movement Recognize Robot
  • Deliberative Systems for Autonomous Robotics: A Brief Comparison Between Action-oriented and Timelines-based Approaches
  • Formal Modeling and Verification of Dynamic Reconfiguration of Autonomous Robotics Systems
  • Robotics on its feet: Autonomous Climbing Robots
  • Implementation of Autonomous Metal Detection Robot with Image and Message Transmission using Cell Phone
  • Toward autonomous architecture: The convergence of digital design, robotics, and the built environment

Automation

  • Advances in Robotics Automation
  • Data-centered Dependencies and Opportunities for Robotics Process Automation in Banking
  • On the Combination of Gamification and Crowd Computation in Industrial Automation and Robotics Applications
  • Meshworm with Segment-Bending Anchoring for Colonoscopy (IEEE Robotics and Automation Letters, 2(3), pp. 1718–1724)
  • Recent Advances in Robotics and Automation
  • Key Elements Towards Automation and Robotics in Industrialised Building System (IBS)

Educational Robotics

  • Knowledge Building, Innovation Networks, and Robotics in Math Education
  • The potential of a robotics summer course On Engineering Education
  • Robotics as an Educational Tool: Impact of Lego Mindstorms
  • Effective Planning Strategy in Robotics Education: An Embodied Approach
  • An innovative approach to School-Work turnover programme with Educational Robotics
  • The importance of educational robotics as a precursor of Computational Thinking in early childhood education
  • Pedagogical Robotics A way to Experiment and Innovate in Educational Teaching in Morocco
  • Learning by Making and Early School Leaving: an Experience with Educational Robotics
  • Robotics and Coding: Fostering Student Engagement
  • Computational Thinking with Educational Robotics
  • New Trends In Education Of Robotics
  • Educational robotics as an instrument of formation: a public elementary school case study
  • Developmental Situation and Strategy for Engineering Robot Education in China University
  • Towards the Humanoid Robot Butler
  • YAGI-An Easy and Light-Weighted Action-Programming Language for Education and Research in Artificial Intelligence and Robotics
  • Simultaneous Tracking and Reconstruction (STAR) of Objects and its Application in Educational Robotics Laboratories
  • The importance and purpose of simulation in robotics
  • An Educational Tool to Support Introductory Robotics Courses
  • Lollybot: Where Candy, Gaming, and Educational Robotics Collide
  • Assessing the Impact of an Autonomous Robotics Competition for STEM Education
  • Educational robotics for promoting 21st century skills
  • New Era for Educational Robotics: Replacing Teachers with a Robotic System to Teach Alphabet Writing
  • Robotics as a Learning Tool for Educational Transformation
  • The Herd of Educational Robotic Devices (HERD): Promoting Cooperation in Robotics Education
  • Robotics in physics education: fostering graphing abilities in kinematics
  • Enabling Rapid Prototyping in K-12 Engineering Education with BotSpeak, a Universal Robotics Programming Language
  • Innovating in robotics education with Gazebo simulator and JdeRobot framework
  • How to Support Students’ Computational Thinking Skills in Educational Robotics Activities
  • Educational Robotics At Lower Secondary School
  • Evaluating the impact of robotics in education on pupils’ skills and attitudes
  • Imagining, Playing, and Coding with KIBO: Using Robotics to Foster Computational Thinking in Young Children
  • How Does a First LEGO League Robotics Program Provide Opportunities for Teaching Children 21st Century Skills
  • A Software-Based Robotic Vision Simulator For Use In Teaching Introductory Robotics Courses
  • Robotics Practical
  • A project-based strategy for teaching robotics using NI’s embedded-FPGA platform
  • Teaching a Core CS Concept through Robotics
  • Ms. Robot Will Be Teaching You: Robot Lecturers in Four Modes of Automated Remote Instruction
  • Robotic Competitions: Teaching Robotics and Real-Time Programming with LEGO Mindstorms
  • Visegrad Robotics Workshop-different ideas to teach and popularize robotics
  • LEGO® Mindstorms® EV3 Robotics Instructor Guide
  • MOKASIT: Multi Camera System for Robotics Monitoring and Teaching
  • Autonomous Robot Design and Build: Novel Hands-on Experience for Undergraduate Students
  • Semi-Autonomous Inspection Robot
  • Sumo Robot Competition
  • Engagement of students with Robotics-Competitions-like projects in a PBL Bsc Engineering course
  • Robo Camp K12 Inclusive Outreach Program: A three-step model of Effective Introducing Middle School Students to Computer Programming and Robotics
  • The Effectiveness of Robotics Competitions on Students’ Learning of Computer Science
  • Engaging with Mathematics: How mathematical art, robotics and other activities are used to engage students with university mathematics and promote
  • Design Elements of a Mobile Robotics Course Based on Student Feedback
  • Sixth-Grade Students’ Motivation and Development of Proportional Reasoning Skills While Completing Robotics Challenges
  • Student Learning of Computational Thinking in A Robotics Curriculum: Transferrable Skills and Relevant Factors
  • A Robotics-Focused Instructional Framework for Design-Based Research in Middle School Classrooms
  • Transforming a Middle and High School Robotics Curriculum
  • Geometric Algebra for Applications in Cybernetics: Image Processing, Neural Networks, Robotics and Integral Transforms
  • Experimenting and validating didactical activities in the third year of primary school enhanced by robotics technology

Construction

  • Bibliometric analysis on the status quo of robotics in construction
  • AtomMap: A Probabilistic Amorphous 3D Map Representation for Robotics and Surface Reconstruction
  • Robotic Design and Construction Culture: Ethnography in Osaka University’s Miyazaki Robotics Lab
  • Infrastructure Robotics: A Technology Enabler for Lunar In-Situ Resource Utilization, Habitat Construction and Maintenance
  • A Planar Robot Design And Construction With Maple
  • Robotics and Automations in Construction: Advanced Construction and Future Technology

Mining and Agriculture

  • Why robotics in mining
  • Examining Influences on the Evolution of Design Ideas in a First-Year Robotics Project
  • Mining Robotics
  • TIRAMISU: Technical survey, close-in-detection and disposal mine actions in Humanitarian Demining: challenges for Robotics Systems
  • Robotics for Sustainable Agriculture in Aquaponics
  • Design and Fabrication of Crop Analysis Agriculture Robot
  • Enhance Multi-Disciplinary Experience for Agriculture and Engineering Students with Agriculture Robotics Project
  • Work in progress: Robotics mapping of landmine and UXO contaminated areas
  • Robot Based Wireless Monitoring and Safety System for Underground Coal Mines using Zigbee Protocol: A Review
  • Minesweepers uses robotics’ awesomeness to raise awareness about landmines and explosive remnants of war
  • Intelligent Autonomous Farming Robot with Plant Disease Detection using Image Processing
  • Automatic Pick and Place Robot
  • Video Prompting to Teach Robotics and Coding to Students with Autism Spectrum Disorder

Healthcare and Medical Robotics

  • Bilateral Anesthesia Mumps After Robot-Assisted Hysterectomy Under General Anesthesia: Two Case Reports
  • Future Prospects of Artificial Intelligence in Robotics Software, A healthcare Perspective
  • Designing new mechanism in surgical robotics
  • Open-Source Research Platforms and System Integration in Modern Surgical Robotics
  • Soft Tissue Robotics–The Next Generation
  • CORVUS Full-Body Surgical Robotics Research Platform
  • OP: Sense, a rapid prototyping research platform for surgical robotics
  • Preoperative Planning Simulator with Haptic Feedback for Raven-II Surgical Robotics Platform
  • Origins of Surgical Robotics: From Space to the Operating Room
  • Accelerometer Based Wireless Gesture Controlled Robot for Medical Assistance using Arduino Lilypad
  • The preliminary results of a force feedback control for Sensorized Medical Robotics
  • Medical robotics Regulatory, ethical, and legal considerations for increasing levels of autonomy
  • Robotics in General Surgery
  • Evolution of Minimally Invasive Surgery: Conventional Laparoscopy to Robotics
  • Robust trocar detection and localization during robot-assisted endoscopic surgery
  • How can we improve the Training of Laparoscopic Surgery thanks to the Knowledge in Robotics
  • Discussion on robot-assisted laparoscopic cystectomy and Ileal neobladder surgery preoperative care
  • Robotics in Neurosurgery: Evolution, Current Challenges, and Compromises
  • Hybrid Rendering Architecture for Realtime and Photorealistic Simulation of Robot-Assisted Surgery
  • Robotics, Image Guidance, and Computer-Assisted Surgery in Otology/Neurotology
  • Neuro-robotics model of visual delusions
  • Neuro-Robotics
  • Robotics in the Rehabilitation of Neurological Conditions
  • What if a Robot Could Help Me Care for My Parents
  • A Robot to Provide Support in Stigmatizing Patient-Caregiver Relationships
  • A New Skeleton Model and the Motion Rhythm Analysis for Human Shoulder Complex Oriented to Rehabilitation Robotics
  • Towards Rehabilitation Robotics: Off-The-Shelf BCI Control of Anthropomorphic Robotic Arms
  • Rehabilitation Robotics 2013
  • Combined Estimation of Friction and Patient Activity in Rehabilitation Robotics
  • Brain, Mind and Body: Motion Behaviour Planning, Learning and Control in view of Rehabilitation and Robotics
  • Reliable Robotics – Diagnostics
  • Robotics for Successful Ageing
  • Upper Extremity Robotics Exoskeleton: Application, Structure And Actuation

Defence and Military

  • Voice Guided Military Robot for Defence Application
  • Design and Control of Defense Robot Based On Virtual Reality
  • AI, Robotics and Cyber: How Much will They Change Warfare
  • Border Security Robot
  • Brain Controlled Robot for Indian Armed Force
  • Autonomous Military Robotics
  • Wireless Restrained Military Discoursed Robot
  • Bomb Detection And Defusion In Planes By Application Of Robotics
  • Impacts Of The Robotics Age On Naval Force Design, Effectiveness, And Acquisition

Space Robotics

  • Lego robotics teacher professional learning
  • New Planar Air-bearing Microgravity Simulator for Verification of Space Robotics Numerical Simulations and Control Algorithms
  • The Artemis Rover as an Example for Model Based Engineering in Space Robotics
  • Rearrangement planning using object-centric and robot-centric action spaces
  • Model-based Apprenticeship Learning for Robotics in High-dimensional Spaces
  • Emergent Roles, Collaboration and Computational Thinking in the Multi-Dimensional Problem Space of Robotics
  • Reaction Null Space of a multibody system with applications in robotics

Other Industries

  • Robotics in clothes manufacture
  • Recent Trends in Robotics and Computer Integrated Manufacturing: An Overview
  • Application Of Robotics In Dairy And Food Industries: A Review
  • Architecture for theatre robotics
  • Human-multi-robot team collaboration for efficient warehouse operation
  • A Robot-based Application for Physical Exercise Training
  • Application Of Robotics In Oil And Gas Refineries
  • Implementation of Robotics in Transmission Line Monitoring
  • Intelligent Wireless Fire Extinguishing Robot
  • Monitoring and Controlling of Fire Fighting Robot using IOT
  • Robotics An Emerging Technology in Dairy Industry
  • Robotics and Law: A Survey
  • Increasing ECE Student Excitement through an International Marine Robotics Competition
  • Application of Swarm Robotics Systems to Marine Environmental Monitoring

Future of Robotics / Trends

  • The future of Robotics Technology
  • Robotics and Automation Are Killing Jobs: A Roadmap for the Future is Needed
  • The next big thing(s) in robotics
  • Robotics in Indian Industry-Future Trends
  • The Future of Robot Rescue Simulation Workshop
  • Quantum Robotics: A Primer on Current Science and Future Perspectives
  • Emergent Trends in Robotics and Intelligent Systems

  • Open access
  • Published: 05 January 2022

Robot lecture for enhancing presentation in lecture

  • Tatsuya Ishino,
  • Mitsuhiro Goto &
  • Akihiro Kashihara (ORCID: 0000-0002-7665-3900)

Research and Practice in Technology Enhanced Learning, volume 17, Article number: 1 (2022)

In lectures with presentation slides, such as e-learning lectures on video, it is important for lecturers to control their non-verbal behavior involving gaze, gesture, and paralanguage. However, it is not easy even for well-experienced lecturers to use non-verbal behavior properly in their lectures to promote learners’ understanding. This paper proposes the robot lecture, in which a robot substitutes for human lecturers and reconstructs their non-verbal behavior to enhance their lecture. Towards such reconstruction, we have designed a model of non-verbal behavior in lecture. This paper also demonstrates a robot lecture system that appropriately reproduces the non-verbal behavior of human lecturers along with reconstructed behavior. In addition, this paper reports a case study involving 36 participants, whose purpose was to ascertain whether a robot lecture with reconstruction could be more effective for controlling learners’ attention and more beneficial for understanding the lecture contents than a video lecture by a human or a robot lecture with simple reproduction. The results of the case study suggest the effect of promoting learners’ understanding of lecture contents, the necessity of reconstructing non-verbal behavior, and the validity of the non-verbal behavior model.

Introduction

Recently, small communication robots such as Sota (Vstone Co. Ltd., 2010 ), Robohon (Sharp Corporation, 2016 ), NAO (Softbank Robotics Co. Ltd., 2018 ), and PALRO (FUJISOFT Inc., 2010 ) have become widespread in various contexts such as nursing care, education, and guidance services. There has also been increasing interest in utilizing these robots, especially in the field of education. In this paper, we focus on using communication robots for small class lectures and e-learning lectures on video.

In a lecture, it is generally important to present the lecture contents as slides with oral explanation so that learners' understanding can be promoted. This requires lecturers to control learners' attention to the slides and oral explanation by means of gaze, gesture, paralanguage, etc., which are viewed as non-verbal behavior (Collins, 2004 ). If lecturers want to attract learners' attention to an important point in a slide, for example, they should direct their face to it and point to it with a pointing gesture in concurrence with its oral explanation. On the other hand, excessive and unnecessary non-verbal behavior would prevent learners from keeping their attention on understanding the lecture contents. It is accordingly indispensable to use non-verbal behavior properly in lecture presentation (called lecture behavior) (Ishino et al., 2018 ).

However, it is not easy even for well-experienced lecturers to keep making proper use of lecture behavior throughout their presentation. Inexperienced lecturers, in addition, tend to focus on oral explanation prepared in advance without any non-verbal behavior. Learners would accordingly have difficulty keeping their concentration, and would finish the lecture with incomplete understanding.

To address this issue, this paper proposes robot lecture, in which a communication robot substitutes for human lecturers. The main purpose of robot lecture is to reproduce their own lecture behavior as appropriately as possible with their lecture contents, and to reconstruct their improper and insufficient behavior to enhance their lecture presentation. To make this possible, we have also designed a model of how lecturers should conduct lecture behavior to promote learners' interest and understanding (Ishino et al., 2018 ). It is important for lecturers to conduct lecture behavior not at random but according to their intention (Arima, 2014 ). The lecture behavior model accordingly represents the relationships between lecture intentions and the non-verbal behavior to be used for controlling learners' attention and promoting their understanding.

We have also developed a robot lecture system, which deals with face direction and gesture (without paralanguage) as lecture behavior. This system records the presentation made by human lecturers to detect and reconstruct inappropriate or insufficient behavior by following the lecture behavior model (Ishino et al., 2018 ). The robot reproduces the reconstructed presentation, which could appropriately convey the lecture contents, control learners' attention, and promote their understanding. In a previous case study with the system (Ishino et al., 2018 ), we confirmed that it could keep learners' attention more effectively. The results obtained from questionnaires suggest that gaze with face direction and pointing gestures reconstructed by the robot are more acceptable and understandable in terms of keeping and guiding attention than non-verbal behavior in video lecture by a human. Most of the participants also felt that eye contact by the robot promoted their concentration on the lecture contents.

In this paper, we refine the robot lecture system so that it can reconstruct lecture behavior including paralanguage. This paper also reports another case study whose purpose was to confirm whether robot lecture promotes learners' understanding. In this study, we compared three conditions: video lecture conducted by a human, robot lecture simply reproducing the original one, and robot lecture reconstructing the original one. The results suggest that the reconstructed robot lecture promotes learners' understanding of the lecture contents significantly more than the video lecture and the simply reproduced robot lecture.

This paper is organized as follows. Section “ Presentation in lecture ” outlines presentation in lecture. Section “ Robot lecture ” describes robot lecture involving the model of lecture behavior. The robot lecture system is described in detail in Sect. “ Robot lecture system .” Sections “ Case study ” and “ Discussion ” describe the case study with the system. Conclusions and suggestions for future work are presented in Sect. “ Conclusion .”

Presentation in lecture

Non-verbal behavior in lecture

Small class lectures in university or e-learning lectures on video are often conducted with presentation slides, which represent the contents lecturers intend to present. These slides include illustrations, graphs, and keywords. Lecturers explain not only the slides, but also the contents that are not explicitly represented in the slides. This suggests that the lecture contents consist of lecture slides and oral explanation.

In making a presentation, it is important for lecturers to attract learners' attention to either the slides or the oral explanation to promote their understanding, by utilizing non-verbal behavior such as gaze, pointing, and the pitch/volume of paralanguage. For example, lecturers can hold eye contact with learners to attract their attention to the oral explanation. They can also face, point at, and intensively explain an important point in a slide to direct learners' attention to it.

On the other hand, it is not beneficial to confuse learners or disrupt their concentration with excessive and unnecessary non-verbal behavior. It is accordingly important to use lecture behavior properly. Melinger and Levelt ( 2005 ) confirmed that speakers often used hand gestures according to their intentions, arguing that the speakers intended to complement their oral contents. Arima ( 2014 ) found that skillful teachers conducted more intentional gaze behavior in their classes than novices. Goldin-Meadow and Alibali ( 2013 ) found that speakers often utilized gesture in communication to promote their communication partner's understanding. Such related work supports the proper use of lecture behavior, and also suggests the necessity of conducting lecture behavior intentionally.

It is not necessarily easy for inexperienced lecturers to use non-verbal behavior intentionally to control learners' attention in lecture presentation. In addition, it would not be easy even for experienced lecturers to keep conducting lecture behavior properly during their presentation. There are also lecturers who tend to fix their eyes on their PCs without any gaze behavior. In such cases, learners cannot keep their concentration and interest in the lecture, and would finish it with incomplete understanding.

Related work

There is a lot of work on non-verbal behavior in interaction between human users and robots, whose main intentions are to attract users' attention and to promote their understanding (Witt et al., 2004). Huang et al. ( 2014 ), Liles et al. ( 2017 ), and Admoni et al. ( 2016 ) confirmed that robot gestures contributed to understandability and recall performance, suggesting that gestures are effective for understanding and retaining lecture contents. Tanaka et al. ( 2017 ) also suggest that, when driving a car with a robot as a navigator, attention control by robot gestures is more effective than voice alone or an on-screen agent. Sauppé et al. ( 2014 ) confirmed that a robot successfully directed collaborators' attention to an object using a pointing gesture. In addition, Kamide et al. ( 2014 ) confirmed that a humanoid robot could attract an audience's attention to a particular position using non-verbal behavior. These results suggest that the non-verbal behavior of a robot is effective for gathering an audience's attention in presentation. Mutlu et al. ( 2007 ) also proposed a gaze model for a storytelling robot, and evaluated the effectiveness of the model-based behavior of the robot telling a fairy tale to an audience. According to their results, listeners tended to recall all of the tale contents when the number of eye contacts with the robot was moderate, but tended to have difficulty recalling the contents when eye contacts were excessive.

These findings suggest that it is necessary to control non-verbal behavior appropriately to prevent learners' incomplete understanding of the lecture contents (Belpaeme et al., 2018 ). In this paper, we aim to substitute a robot for human lecturers in actual lectures.

Robot lecture

Lecture behavior model

We have introduced robot lecture, whose purpose is to enhance the lecture behavior of human lecturers with a communication robot. In robot lecture, lecturers are required to prepare oral and slide contents to make their lecture presentation, and their lecture behavior is then enhanced by the robot. This requires making clear how lecture behavior should be used. We have accordingly designed a model of lecture behavior with reference to related work on non-verbal behavior (Kamide, 2014 ; Mutlu, 2007 ; McNeill, 1994 ; Goto & Kashihara, 2016 ).

It is useful for lecturers to align their non-verbal behavior with their intentions in lecture, which can be determined from the learning states of learners. In this work, we divide the states into the following four:

Learning states

State 1 : Not listening to lecture presentation,

State 2 : Listening to lecture presentation,

State 3 : Noticing important points of the lecture contents, and

State 4 : Understanding the lecture contents.

Lecturers would intend to change learning states from state 1 to 4. We accordingly define lecture intention as changing learning states, and classify it into three as follows (Ishino et al., 2018 ):

Lecture intentions

Intention 1 (from state 1 to 2 ): Encouraging learners to get interested in lecture presentation,

Intention 2 (from state 2 to 3 ): Encouraging learners to pay attention to and get an understanding of important points in lecture contents, and

Intention 3 (from state 3 to 4 ): Encouraging learners to understand the details of the lecture contents.

Figure  1 shows the relationships between learning states and lecture intentions. Lecturers intend to conduct lecture behavior to change learning states from state 1 to 4. There are two contexts in determining lecture intention. First, lecturers dynamically determine their intention depending on learning states, which could also change during lecture presentation. Second, they assume learning states in advance when they prepare their presentation. In video lecture, lecture intention is usually determined in the second context, which we presume in this work.

figure 1

Relationships between learning states and lecture intentions

Let us explain lecture behavior corresponding to each lecture intention in the following.

Lecture behavior for intention 1

In order to help learners get interested in lecture presentation, it is necessary to give them the impression that the lecturer is talking to them, and to attract their attention to the presentation. For example, making eye contact with learners strengthens this impression. It is also possible to attract learners' attention by means of multimedia such as sound or visual effects, and by lecturers' over-actions.

Lecture behavior for intention 2

According to the findings from related work mentioned in Sect. “ Related work ”, a communication robot can use gesture and gaze to direct learners' attention to an important point in the lecture contents that it wants them to concentrate on and understand. As shown in Fig.  2 , for example, a lecturer could induce learners to pay attention to the slide by gazing at it, and could induce them to focus on his/her oral explanation and gesture by gazing at them.

figure 2

Examples of attention control

Lecture behavior for intention 3

In order to help learners understand the details of lecture contents, lecturers need to explain and convey the important points of the contents. In this case, it is effective to utilize gestures to make these points conspicuous. McNeill ( 1994 ) classified such gestures often used for communication into the following three:

Deictic gestures: expressing important points, such as pointing.

Metaphoric gestures: expressing order or magnitude, such as counting on fingers or moving hands up and down.

Iconic gestures: expressing size and length, such as drawing a shape with both hands.

In this work, lecturers are expected to use these gestures classified by McNeill during their lecture presentation when they want to convey important points of the slide contents.

Referring to these lecture behaviors, we have designed a model of lecture behavior for reconstructing inappropriate or insufficient lecture behavior conducted by human lecturers, as shown in Fig.  3 . The model is composed of three layers: lecture intention, behavior category, and basic components of lecture behavior. It derives lecture behavior appropriate to each lecture intention from the relationships among them.

figure 3

When lecturers have intention 2, for example, the model suggests the necessity of non-verbal behavior for keeping attention, controlling attention, or promoting understanding of important points as the behavior category. If they select controlling attention, the model induces them to select and combine the corresponding basic components to conduct behavior such as facing the slide with a deictic pointing gesture.
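As a concrete illustration, the model's three layers can be sketched as a nested mapping from lecture intentions through behavior categories to basic components. This is a minimal sketch under our own naming, not the system's actual data format; all identifiers below are invented for illustration.

```python
# Hypothetical encoding of the three-layer lecture behavior model:
# intention -> behavior category -> basic components.
LECTURE_BEHAVIOR_MODEL = {
    "intention_1": {  # get learners interested in the presentation
        "talking_to_learners": ["eye_contact"],
        "attracting_attention": ["sound_effect", "visual_effect", "over_action"],
    },
    "intention_2": {  # direct attention to important points
        "keeping_attention": ["face_to_learners"],
        "controlling_attention": ["face_to_slide", "deictic_pointing"],
        "promoting_understanding": ["paralanguage_emphasis"],
    },
    "intention_3": {  # convey the details of the contents
        "conveying_important_points": [
            "deictic_gesture", "metaphoric_gesture", "iconic_gesture",
        ],
    },
}

def behaviors_for(intention, category):
    """Derive the basic behavior components for an intention/category pair."""
    return LECTURE_BEHAVIOR_MODEL.get(intention, {}).get(category, [])
```

With such a mapping, diagnosis reduces to checking whether a recorded behavior appears among the components derivable for the lecturer's intention.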

Model-based presentation reconstruction

The robot lecture aims to reproduce lecturers' non-verbal behavior in their presentation appropriately. Related work has taken two approaches to reproducing non-verbal presentation behavior with a robot. One is to manually tag the presenter's non-verbal behavior, which the robot then reproduces (Vstone Co. Ltd., 2018 ; Nozawa et al., 2004 ). The other is to follow the oral explanation and tag non-verbal behavior automatically by means of machine learning methods (Nakano et al., 2004 ; Ng-Thow-Hing et al., 2010 ; Le & Pelachaud, 2011 ). However, manual tagging is not easy for lecturers. In addition, tagged non-verbal behavior is reproduced identically by the robot even if the corresponding behavior conducted by individual presenters differs slightly.

In robot lecture, on the other hand, the robot attempts to reproduce the non-verbal behavior of lecturers while preserving their presentation individuality (timing and duration) as much as possible, and then to reconstruct their inappropriate or insufficient behavior with appropriate behavior derived from the lecture behavior model. The presentation reproduction with reconstruction is done as follows.

Lecturers are first expected to set their own lecture intention according to a learning state to be assumed when they prepare lecture presentation. We currently assume video lecture in which lecturers have learners in the learning state 2 with the lecture intention 2. The learning state and lecture intention are also supposed to be unchanged during lecture presentation. Second, the robot records lecturers’ presentation including lecture behavior and slide/oral contents, and detects important points which they want to emphasize in their slide/oral contents.

The robot next analyzes whether the lecturers' behavior for conveying the detected important points is among the behaviors the lecture behavior model can derive for the lecture intention they set, and whether it is conducted appropriately for accomplishing that intention. This means diagnosing the sufficiency and appropriateness of the lecture behavior conducted by the lecturers. If their lecture behavior is not included within the model, it is diagnosed as insufficient; in this case, the robot reconstructs it with appropriate behavior derived from the model. If the lecture behavior is inappropriate as to arm angle, face direction, etc., it is reconstructed with the desirable angle or direction. The details of the diagnosis procedure are described in the next section.

The robot then reproduces the lecture presentation with the reconstructed behavior. Since the robot has fewer joints than a human and its movement is limited, we have converted human lecture behavior into robot behavior so that it can be reproduced appropriately.

Robot lecture system

In order to reconstruct lecture behavior conducted by humans, we have developed the robot lecture system. Figure  4 shows an overview of the system. The system records gestures as skeleton data, slide images, and oral explanation as audio data. It then detects the important points in the lecture contents, diagnoses the lecture behavior by following the lecture behavior model, and reconstructs behavior diagnosed as insufficient or inappropriate. The system then reproduces the lecture presentation with the recorded lecture contents and the reconstructed lecture behavior. We currently use Sota, produced by Vstone Co., Ltd., as the robot.

figure 4

Overview of robot lecture system

Framework for reconstructing lecture behavior

As shown in Fig.  5 , this system implements the substitution of lecture presentation by the robot through the following three phases.

Phase 1 Presentation recording.

Phase 2 Lecture behavior diagnosis/reconstruction.

Phase 3 Presentation by the robot.

figure 5

In phase 1, slide data, slide transition timing, and oral explanation (audio) data are recorded. Gestures of human lecturers during presentation are also recorded as skeleton data using Kinect (Microsoft Corporation). In phase 2, the system analyzes the slide data and audio data to detect the important points. Using the results, the gestures obtained from the skeleton data are diagnosed. We currently deal with face direction, pointing gesture, and paralanguage as the lecture behavior to be diagnosed, which are necessary to keep/control attention and convey important points. If the system diagnoses lecture behavior as insufficient or inappropriate, it is reconstructed with appropriate behavior. In phase 3, the robot synchronously reproduces the presentation with the reconstructed behavior, captured images of the slides, and oral explanation. The oral explanation is generated by a Text-To-Speech engine from the text recognized from the recorded audio data.

Presentation recording

In the presentation recording phase, the system records the lecturers' skeleton data, including face direction and gesture, via Kinect, and records the audio data via an external microphone at the same time. The system also obtains captured images of the slide data and the transition timing of each slide via the PowerPoint API, and extracts slide text data and decoration data such as character color and size from the slide data. The captured images are uploaded to the slide server, which the robot can reach via the Internet, and the robot presents them to learners as the lecture slides. Since all of the recorded data are retained with timestamps, the robot can reproduce presentation behavior and oral explanation synchronized with the timing of the presented captured images.

Presentation behavior diagnosis/reconstruction

As shown in Fig.  6 , lecturers first set their intention and behavior category while watching the presentation video by themselves. Let us here describe the procedure for analyzing lecture behavior from the recorded data and reconstructing it appropriately.

Behavior analysis

In this system, the skeleton data of lecturers are recorded in time series. Gestures conducted by the lecturers are then recognized from the skeleton data using Visual Gesture Builder (Microsoft), a tool for creating gesture recognition databases by means of machine learning. This tool allows the system to detect specific gestures from the time series of recorded skeleton data with a recognition database. In constructing a database, we select the sections of the gesture we want to recognize and tag them; the tag represents the gesture in the recorded skeleton data. For example, we can construct a recognition database for pointing gestures by selecting the sections of the recorded skeleton data corresponding to pointing and tagging them as pointing. According to the gestures classified by McNeill ( 1994 ), we have currently constructed recognition databases for pointing gestures, counting gestures, size gestures, and face direction. We particularly constructed 10 databases, which include three for pointing gestures (high, middle, low), two for counting gestures (1st, 2nd, 3rd), two for size gestures (big, small), and two for face direction (at slide, at learner). In order to construct these databases, we prepared 10 short presentation data sets including skeleton data in our laboratory, and selected 100 sections per gesture that we wanted to recognize.

The system compares the intentions set by lecturers with the gestures recognized with these databases. If the system does not recognize the gestures corresponding to the intentions, it detects them as insufficient gestures. If the system recognizes the corresponding gestures but with inappropriate direction, it detects them as inappropriate gestures.
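This diagnosis step can be summarized as a small decision procedure. The sketch below is hypothetical (the function and label names are ours, not the system's API), assuming the recognizer reports, for each detected gesture, whether its direction or angle was appropriate.

```python
def diagnose_gestures(expected, recognized):
    """Classify each expected gesture as 'ok', 'insufficient', or 'inappropriate'.

    expected:   set of gestures the lecture behavior model derives for the
                lecturer's intention
    recognized: mapping from detected gesture to a bool, True when its
                direction/angle was appropriate
    """
    verdicts = {}
    for gesture in expected:
        if gesture not in recognized:
            verdicts[gesture] = "insufficient"    # never conducted at all
        elif not recognized[gesture]:
            verdicts[gesture] = "inappropriate"   # conducted with wrong direction/angle
        else:
            verdicts[gesture] = "ok"
    return verdicts
```

Gestures diagnosed as insufficient would be added from the model, while inappropriate ones would be replaced with the desirable angle or direction.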

Slide analysis

The system extracts text and decoration data such as character color/form from the slide data, and detects the important points. As shown in Table 1 , the system weights the decoration data in four degrees from 0 to 3; the weights of font colors need to be adjusted depending on the slide theme and lecturer preference. When text has multiple decorations, the system sums the weight of each decoration. If the total weight exceeds 3, the text is regarded as an important point.
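Since the concrete weights of Table 1 are not reproduced here, the sketch below uses invented weights; it only illustrates the summing-and-threshold rule.

```python
# Invented decoration weights in the spirit of Table 1 (0-3 per decoration);
# the real values depend on the slide theme and lecturer preference.
DECORATION_WEIGHTS = {"bold": 2, "underline": 2, "large_font": 1, "red_font": 3}

def is_important_point(decorations):
    """A text run is an important point when its summed decoration weight exceeds 3."""
    return sum(DECORATION_WEIGHTS.get(d, 0) for d in decorations) > 3
```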

Audio data analysis

The recorded audio data are converted into text by speech recognition. Since the recognition rate is only about 50%, it is difficult to transform lecturers' oral explanation (audio) data into text completely, so we correct the transcription results by hand. In addition, we use Praat (Boersma & Weenink, 2018 ), free software for speech analysis, to obtain paralanguage. It provides the values of pitch (voice high and low) and volume (strength of voice) with timestamps. Currently, the system obtains the pitch and volume values of each sentence in each slide, and calculates the maximum pitch and intensity in each slide. It then detects the sentences whose pitch and volume values exceed 80% of the maximum values as emphasized points.
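The 80% threshold rule can be sketched as follows, assuming per-sentence pitch and volume values have already been extracted (e.g. with Praat); the data layout is our own assumption.

```python
def emphasized_sentences(sentences):
    """Return sentences whose pitch and volume both exceed 80% of the
    slide's maximum pitch and volume.

    sentences: list of (text, pitch, volume) tuples for one slide.
    """
    max_pitch = max(p for _, p, _ in sentences)
    max_volume = max(v for _, _, v in sentences)
    return [
        text
        for text, pitch, volume in sentences
        if pitch > 0.8 * max_pitch and volume > 0.8 * max_volume
    ]
```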

Diagnosis/reconstruction

figure 6

User interface for intention setting

The system compares the keywords in the slide contents with those in the oral contents to detect corresponding ones as important points in the lecture contents, diagnoses insufficient or inappropriate behavior at these points, and reconstructs it.

Here are some examples of reconstructing lecture behavior. When lecturers explain an important point in a slide detected in the slide analysis, they should use gazing, paralanguage, or pointing to the slide to attract learners' attention to it. If they do not conduct such non-verbal behavior, the system reconstructs it with face direction or pointing behavior. At the same time, paralanguage is reconstructed by increasing the pitch and volume of the oral explanation at the important point. When lecturers explain the oral contents, they should also gaze at learners. If they have shifty eyes or look at the PC display, the system reconstructs their behavior to face the learners. In this way, it is possible to convey the lecture contents to learners appropriately with reconstructed behavior, even if the lecture behavior conducted by the lecturers is insufficient or inappropriate.

Presentation by robot

In this phase, the system controls Sota and the display connected to the slide server by means of a presentation scenario generated through the two phases of presentation recording and presentation behavior diagnosis/reconstruction. The reconstructed behavior, slide number, and oral explanation data are managed by time in the presentation scenario. It includes the behavior data (basic components recognized, start timing, and duration), the text data for oral explanation (contents of explanation and paralanguage parameters), and slide number data. Following the timeline, for example, the robot performs a gesture of pointing downward if the behavior is "Pointing at low". When the face direction is "To Learner", the robot faces towards the learners, and in the case of "To Slide" it turns to the slide.
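Such a scenario might be represented as a list of timed events. The schema below is only an illustrative sketch mirroring the fields described in the text (behavior component with start time and duration, oral text with paralanguage parameters, slide number); it is not the system's actual format.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ScenarioEvent:
    start: float        # seconds from the start of the lecture
    duration: float
    behavior: str       # e.g. "pointing_low", "face_to_learner", "face_to_slide"
    text: str = ""      # oral explanation passed to the TTS engine
    pitch: float = 1.0  # paralanguage parameters for emphasis
    volume: float = 1.0
    slide: int = 0      # slide image to show on the connected display

def event_at(scenario: List[ScenarioEvent], now: float) -> Optional[ScenarioEvent]:
    """Return the event that should be playing at time `now`, if any."""
    for event in scenario:
        if event.start <= now < event.start + event.duration:
            return event
    return None
```

A player loop would poll `event_at` (or sort events and sleep between them), send the behavior to the robot, and push the slide number to the slide server.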

Sota has a total of 8 rotational degrees of freedom (body: 1 axis, arms: 2 axes, shoulders: 2 axes, neck: 3 axes). Sota has fewer joints than human lecturers and no fingers, so we convert human behavior into behavior Sota can perform. As shown in (a) and (b) of Fig.  7 , for example, Sota represents big and small as iconic gestures by opening and closing its arms in front of its body.

figure 7

Expressions of lecture behavior by Sota

In conducting robot lecture in an actual lecture, the system sends the behavior and oral explanation data to Sota, and the slide number data to the slide server. Sota reproduces the presentation with the behavior, and the slide server synchronously presents the captured image corresponding to the oral explanation. The reconstructed behavior is converted into behavior that can be reproduced within Sota's joints. Sota's oral explanation is also generated from the text data via NTT's Text-To-Speech engine.

Case study

As this work assumes e-learning video lectures and small class lectures attended by a few students, we conducted a case study whose purpose was to ascertain whether robot lecture with reconstruction could control learners' attention more effectively, and benefit understanding of the lecture contents more, than video lecture by a human and robot lecture with simple reproduction. By comparing robot lecture between reconstruction and simple reproduction, we can confirm the validity of reconstruction using the lecture behavior model. By comparing the reconstructed robot lecture with the video lecture, we can also confirm the advantages of robot lecture regardless of lecturer appearance.

Preparation

Participants were 36 graduate and undergraduate students. As shown in Table 2 , we prepared three video lectures whose topics were learning model , social learning , and learning technology , recorded from lectures by the same lecturer, who was one of the authors. These lectures had almost the same number of slides and were about 5 to 6 min long. We also prepared three robot lectures that reconstructed the corresponding lectures by following the lecture behavior model, and three robot lectures that simply reproduced the corresponding lectures without reconstruction. The reconstructed lecture behaviors were gestures, face orientation, and paralanguage. We set three conditions:

Robot-Reconstruction condition: lecture by robot involving reconstruction,

Robot-Reproduction condition: lecture by robot involving simple reproduction, and

Video condition: video lecture by a human lecturer.

In the following, we describe the details of the reconstructed lecture behavior in the Robot-Reconstruction condition. Table 3 shows the numbers of gestures reconstructed, which included face direction and pointing gestures. In the lecture topic of Learning model , the system added one new gesture and modified three recorded gestures. In Social learning , the system added no gesture and modified three recorded gestures. In Learning technology , the system added no gesture and modified four recorded gestures. The system deleted no gestures in any of the lecture topics. Since the gesture and voice recognition of the system are not perfect, we manually checked each lecture presentation to add new gestures after gesture reconstruction by the system. In this manual checking, we looked into each slide to identify important points embedded in figures/illustrations that were not covered by the current system, and added pointing gestures to them according to the lecture behavior model. This took about 15 min for each lecture. As for appropriate gestures that were not reconstructed, there were two gestures in Learning model and Learning technology , and seven gestures in Social learning . Since the timing of the reconstructed gestures was not always synchronized, we also made modifications by hand.

Table 4 shows the details of paralanguage for emphasis in the Robot-Reproduction and Robot-Reconstruction conditions. The values in Table 4 represent the average number of paralanguage emphases per slide. In Learning model and Learning technology , the numbers in the reconstruction condition were lower than in the reproduction condition because the system deleted inappropriate paralanguage. In Social learning , the number in the reconstruction condition was the same as in the reproduction condition because there was no inappropriate paralanguage.

As a within-participant design, each participant took the three lectures under the three conditions. In order to counterbalance the order effects of the conditions, we randomly assigned the 36 participants to six groups, as shown in Fig.  8 . For example, Group 3 first took the lecture on learning model under the Video condition, then took the lecture on social learning under the Robot-Reconstruction condition, and finally took the lecture on learning technology under the Robot-Reproduction condition. Figure  9 shows how the participants took the video lecture and robot lecture.

figure 8

Procedure for taking lectures

figure 9

Examples of taking a lecture by participants

After taking each lecture, the participants were required to take an understanding test as objective evaluation, including three in-slide questions and three between-slides questions. Since lecture behavior for encouraging learners to pay attention to important points is conducted for individual slides, it is expected to have a direct effect on understanding the slides. We accordingly used the in-slide questions, which were about contents to be answered from individual slides, to evaluate this direct effect. In addition, we assumed that the lecture behavior would also indirectly affect understanding of the relationships between slides; this indirect effect seems to play a crucial role in understanding the whole of the lecture contents. In order to evaluate it, we also used the between-slides questions, which were about contents to be answered from the relationships between multiple slides. Each question was scored one point (the maximum test score was six points). An answer consisting of multiple elements was given partial credit by dividing the one point; for example, if it consisted of two elements, each element was worth 0.5 points.

After the understanding test, the participants were required to answer a 7-point Likert scale questionnaire as subjective evaluation, which asked the following 11 questions from 4 viewpoints (Table 5 ). Q1 to Q4 were about understandability, Q5 to Q7 about concentration, Q8 and Q9 about gazing, and Q10 and Q11 about motivation. The participants answered each question on a scale of 1 to 7 (1: Extremely disagree < 4: Neither agree nor disagree < 7: Extremely agree). They were also required to write the reasons for their ratings in Q1, Q5, Q7, Q8, and Q9.

The hypotheses we set up in this study were as follows:

H1: Robot lecture involving reconstruction promotes understanding of the lecture contents, including the slide contents and the relationships between slides, more than robot lecture involving simple reproduction.

H2: Robot lecture involving reconstruction promotes understanding of the lecture contents, including the slide contents and the relationships between slides, more than lecture video.

Results and considerations

Statistical analyses across the three conditions were performed using one-way analysis of variance (ANOVA), with the Tukey–Kramer test for post hoc comparisons when the ANOVA indicated significance.
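As a rough sketch of the first step, the one-way ANOVA F statistic for several independent groups can be computed as follows (pure Python for illustration; the Tukey–Kramer post hoc comparisons and p values would normally come from a statistics package):

```python
def one_way_anova_F(*groups):
    """One-way ANOVA F statistic: between-group mean square
    over within-group mean square."""
    k = len(groups)                          # number of conditions
    N = sum(len(g) for g in groups)          # total observations
    grand_mean = sum(sum(g) for g in groups) / N
    # Between-group sum of squares (df = k - 1)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (df = N - k)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (N - k))

# Toy scores for three conditions (not the study's data):
print(one_way_anova_F([1, 2, 3], [2, 3, 4], [5, 6, 7]))  # ~13.0
```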

Objective evaluation: understanding test

Figure  10 shows the average scores of the understanding tests under each condition. The ANOVA revealed a statistically significant difference among the conditions (F(2, 35) = 3.855, p < 0.05). Post hoc comparisons showed a significant difference between the Robot-Reconstruction and Robot-Reproduction conditions (p < 0.05), a marginally significant difference between the Robot-Reconstruction and Video conditions (p < 0.10), and no significant difference between the Robot-Reproduction and Video conditions (p = 0.93).

Figure 10: Average scores of the understanding test

Figure  11 shows the average scores of the in-slide questions, and Fig.  12 shows the average scores of the between-slides questions. For the in-slide questions, there was a marginally significant difference between the Robot-Reconstruction and Robot-Reproduction conditions (p < 0.10), and no significant differences between the Robot-Reconstruction and Video conditions (p = 0.37) or between the Robot-Reproduction and Video conditions (p = 0.72). For the between-slides questions, there were no significant differences between any pair of conditions (Robot-Reconstruction vs. Robot-Reproduction: p = 0.22, Robot-Reconstruction vs. Video: p = 0.20, Robot-Reproduction vs. Video: p = 0.98).

Figure 11: Average scores of the in-slide questions

Figure 12: Average scores of the between-slides questions

Table 6 shows the effect sizes (Cohen’s d (Jacob, 1998 )) between pairs of conditions; the texts in parentheses represent Cohen’s interpretation of the magnitude of d. Between the Robot-Reconstruction condition and the other conditions, the effect sizes d were 0.3 or more, whereas between the Robot-Reproduction and Video conditions they were 0.2 or less.
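The effect sizes in Table 6 follow Cohen's d; below is a minimal sketch of the standard pooled-standard-deviation formulation together with Cohen's conventional magnitude labels (helper names are ours, and the sample data are illustrative, not the study's scores):

```python
import math

def cohens_d(group1, group2):
    """Cohen's d: difference of means over the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)   # sample variances
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

def magnitude(d):
    """Cohen's conventional interpretation of |d|."""
    d = abs(d)
    if d >= 0.8:
        return "large"
    if d >= 0.5:
        return "medium"
    if d >= 0.2:
        return "small"
    return "negligible"

d = cohens_d([2, 4, 6], [1, 3, 5])
print(d, magnitude(d))  # 0.5 medium
```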

From these results, the robot lecture involving reconstruction promoted understanding of the lecture contents more than the video lecture and the robot lecture involving simple reproduction, which overall supports H1 and H2. The results also suggest the necessity and importance of reconstructing lecture behavior with the robot, since simple reproduction with the robot did not significantly promote understanding compared to the video lecture. In addition, the results shown in Figs.  11 and 12 suggest that the reconstruction of lecture behavior promotes understanding of the contents within slides rather than the relationships between slides. The current robot lecture system mainly deals with lecture behavior that emphasizes the contents of each slide, not the relationships embedded across multiple slides. These results show the validity of attention control and understanding promotion by means of the lecture behavior model.

Questionnaire

Figure  13 shows the results of all questions in the questionnaire. The vertical axis represents the average scores, the horizontal axis each question item, and the error bars the standard errors of the mean. The Robot-Reconstruction condition tended to score better than the other conditions on 9 of the 11 questions; on the remaining 2 (Q5, where Robot-Reproduction tended to be better, and Q6, where Video tended to be better), the other conditions scored higher. The ANOVA showed significant or marginally significant differences in Q3 (F(2, 35) = 1.45, p < 0.10), Q6 (F(2, 35) = 6.67, p < 0.01), Q8 (F(2, 35) = 3.85, p < 0.05), Q9 (F(2, 35) = 5.04, p < 0.01), Q10 (F(2, 35) = 2.56, p < 0.10) and Q11 (F(2, 35) = 6.19, p < 0.01). For these items, we conducted the Tukey–Kramer test as a post hoc comparison, which revealed significant differences in Q6, Q8, Q9 and Q11, and a marginally significant one in Q10, as shown in Fig.  13 .

Figure 13: Average scores of the questionnaire

There were no significant differences in Q1 to Q4, which were about the understandability of gestures or of the lecture contents. Most participants commented on paralanguage in Q1. There were a few positive comments, such as "It was easier to follow the explanation under the Robot conditions than under the Video condition because the robot's voice was clearer and more fluent." On the other hand, we obtained many negative comments, such as "It was harder to follow the explanation under the Robot conditions than under the Video condition because the robot's voice had no stress, intonation or rhythm."

These comments suggest that the robot could not emphasize and explain important points with paralanguage, even though its voice was emphasized at the important points detected under the Robot-Reconstruction condition. As a result, there were no significant differences in the understandability of gestures or of the lecture contents. Meanwhile, some participants commented on lecture behavior in Q1, with positive remarks such as "The robot often attracted my attention since it conducted more lecture behavior under the Robot-Reconstruction condition than under the Video condition." These comments suggest that reconstructing lecture behavior is effective for gathering attention.

Q5 to Q7 were about concentration, and there were no significant differences in these questions except Q6. Regarding Q5, one participant commented, "I concentrated because the robot moved its face and spoke smoothly." Regarding Q6, a participant commented, "I sometimes felt that the robot's motor sound was noisy during the lecture. It distracted me from understanding the lecture contents."

Q8 and Q9 were about gazing, such as face direction and eye contact, and there were significant differences in these questions, with the Robot conditions scoring higher than the Video condition. Some participants commented, "We made eye contact a lot since the robot turned its face toward me," and "The lecturer in the video kept a fixed face direction, but the robot tried to make eye contact." These comments suggest that the robot appeared to be addressing the participants, and that it contributes to gathering attention and concentration through face direction.

Q10 and Q11 were about motivation, and there were also significant differences in these questions. The robot lecture contributed to keeping learners motivated, owing to the novelty and presence of the robot.

Let us now discuss the functional restrictions of robot lecture in comparison to human lecture, and related considerations. First, human components such as gestures and paralanguage, which are necessary for conducting lecture behavior, are obviously superior to the robot's. Although these components allow human lecturers to conduct lecture behavior more precisely, it is difficult for them to use the components properly, and they often fail to keep and control learners' attention in their lectures. The robot, on the other hand, has difficulty conducting precise lecture behavior due to its limited components, but its behavior tends to be distinctive and recognizable.

In the case of a pointing gesture by a human lecturer, for example, it must indicate a precise place; if it is imprecise, learners may be concerned about it and prevented from directing their attention. A pointing gesture by the robot is rough by nature, so learners are likely to be unconcerned by its imprecision and induced to direct their attention in the rough direction of the gesture; it would not be an obstacle to focusing on the points.

Despite the robot's limited components for lecture behavior, we need to consider how to complement lecture behavior by Sota. To complement its pointing gesture, for example, we could attach a laser pointer to Sota or add visual effects to the presented slide, such as highlighting keywords synchronized with Sota's gestures.

Second, the current robot lecture system uses gesture recognition databases to identify specific non-verbal behavior conducted by human lecturers. Preparing such databases is time-consuming, even if machine learning techniques can be used to tag lecture behavior. Nevertheless, they are indispensable for conducting the robot lecture, and we need to construct them from scratch.

Third, the robot lecture system presents lecture contents one-way, from Sota to learners, which could bore them during the presentation. To avoid this, Sota needs to recognize learners' states and change its lecture behavior accordingly. For example, if some learners find the presentation difficult, Sota should repeat it with different non-verbal behavior.

Finally, the results of the case study with Sota suggest that the robot lecture promotes understanding of lecture contents and that learners' impressions of it are largely positive. These positive results might be due to a novelty effect, if using Sota as a lecturer is novel for learners and they feel fascinated by it; the short lectures used in the case study might also have influenced the effects. On the other hand, there were no significant differences between the robot lecture with simple reproduction and the video lecture, while there were significant differences between the robot lecture with reconstruction and the one with simple reproduction. This suggests that the positive results of the case study are not necessarily due to the novelty effect. As for the influence of lecture length on understanding, we still need to ascertain whether a long robot lecture would bring about the same effects as those in the case study. Alternatively, since the robot lecture has positive effects in a short lecture, we could consider a hybrid of robot and human lecture, in which the robot gives a short introduction to each part of the lecture and a human gives the remainder.

In addition, the result of questionnaire Q6 suggests that the video lecture is significantly better for concentration than the robot lectures. Some learners seemed to be distracted by Sota's lecture behavior, such as its face direction and pointing gestures, and by its motor noise. Its lecture behavior is certainly conspicuous due to its embodiment, which can draw learners' attention away from the lecture contents. If learners became accustomed to Sota's behavior, such distraction might disappear; we accordingly need to re-evaluate the robot lecture after learners have become accustomed to its lecture behavior. It is also necessary to reduce the motor noise, which requires the motions for lecture behavior to be smaller or slower. Another approach would be to mask the noise with Sota's oral explanation or to add sounds to the motions as a distraction from the noise; such sounds could also contribute to calling and keeping attention. We will consider these points in future work.

Conclusion

In this work, we have proposed robot lecture and demonstrated a robot lecture system that augments a lecture by a human and reconstructs the lecture behavior conducted in it. Towards such reconstruction, we have designed a model of lecture behavior.

In addition, we conducted a case study that examined the effect of the robot on promoting understanding. The participants attended lectures with different contents under three conditions: video lecture, robot lecture with simple reproduction, and robot lecture with reconstruction. According to the results of the understanding tests, the robot lecture with reconstruction promoted learners' understanding of slide contents more than the video lecture and the robot lecture with simple reproduction, while there was no significant difference between the robot lecture with simple reproduction and the video lecture. According to the results of the questionnaire, the robot lecture with reconstruction also contributed to keeping and controlling learners' attention, and the importance of paralanguage for promoting understanding of the lecture contents became clear. These results suggest the necessity and importance of the reconstruction of lecture behavior.

In future work, we will consider how to present the lecture contents more effectively with Sota's lecture behavior. We will also aim to detect learners' states so as to dynamically change lecture behavior for interactive lecture, since the current robot lecture system conveys the lecture contents to learners one-sidedly.

Availability of data and materials

References

Admoni, H., Weng, T., Hayes, B., & Scassellati, B. (2016). Robot nonverbal behavior improves task performance in difficult collaborations. In Proceedings of 11th ACM/IEEE international conference on human-robot interaction (HRI2016) (pp. 51–58). https://doi.org/10.1109/HRI.2016.7451733

Arima, M. (2014). An examination of the teachers’ gaze and self reflection during classroom instruction: comparison of a veteran teacher and a novice teacher. Bulletin of the Graduate School of Education, Hiroshima University, 63 (9–17), 2014. (in Japanese).


Belpaeme, T., Kennedy, J., Ramachandran, A., Scassellati, B., & Tanaka, F. (2018). Social robots for education: A review. Science Robotics, 3 , 21. https://doi.org/10.1126/scirobotics.aat5954


Boersma, P., & Weenink, D. (2018). Praat: doing phonetics by computer [Computer program]. Version 6.0.37. Retrieved Oct 26, 2020 from https://www.fon.hum.uva.nl/praat/

Collins, J. (2004). Education techniques for lifelong learning: Giving a PowerPoint presentation: The art of communicating effectively. Radiographics, 24 (4), 1185–1192. https://doi.org/10.1148/rg.244035179

FUJISOFT Inc. (2010). PALRO is A robot who cares. Retrieved Oct 26, 2020. https://palro.jp/en/

Goldin-Meadow, S., & Alibali, M. W. (2013). Gesture’s role in speaking, learning, and creating language. Annual Review of Psychology, 64 , 257–283. https://doi.org/10.1146/annurev-psych-113011-143802

Goto, M., & Kashihara, A. (2016). Understanding presentation document with visualization of connections between presentation slides. Procedia Computer Science, 96 , 1285–1293. https://doi.org/10.1016/j.procs.2016.08.173

Huang, C. M., & Mutlu, B. (2014). Multivariate evaluation of interactive robot systems. Autonomous Robots, 37 , 335–349. https://doi.org/10.1007/s10514-014-9415-y

Ishino, T., Goto, M., & Kashihara, A. (2018). A robot for reconstructing presentation behavior in lecture. In Proceedings of the 6th international conference on human-agent interaction (HAI2018) (pp. 67–75). https://doi.org/10.1145/3284432.3284460 .

Jacob, C. (1998). Statistical power analysis for the behavioral sciences (2nd ed.). Routledge.

Kamide, H., Kawabe, K., Shigemi, S., & Arai, T. (2014). Nonverbal behaviors toward an audience and a screen for a presentation by a humanoid robot. Artificial Intelligence Research, 3 (2), 57–66. https://doi.org/10.5430/air.v3n2p57

Le, Q., & Pelachaud, C. (2011). Generating co-speech gestures for the humanoid robot NAO through BML. Gesture and Sign Language in Human-Computer Interaction and Embodied Communication . https://doi.org/10.1007/978-3-642-34182-3_21

Liles, K. R., Perry, C. D., Craig, S. D., & Beer, J. M. (2017). Student perceptions: The test of spatial contiguity and gestures for robot instructors. In Proceedings of the companion of the 2017 ACM/IEEE international conference on human-robot interaction (HRI2017) (pp. 185–186). https://doi.org/10.1145/3029798.3038297

McNeill, D. (1994). Hand and mind: What gestures reveal about thought. Bibliovault OAI Repository, the University of Chicago Press. https://doi.org/10.2307/1576015

Melinger, A., & Levelt, W. (2005). Gesture and the communicative intention of the speaker. Gesture, 4 , 119–141.

Mutlu, B., Forlizzi, J., & Hodgins, J. (2007). A storytelling robot: modeling and evaluation of human-like gaze behavior. In Proceedings of the 2006 6th IEEE-RAS international conference on humanoid robots, HUMANOIDS (pp. 518–523). https://doi.org/10.1109/ICHR.2006.321322 .

Nakano, Y., Okamoto, M., Kawahara, D., Li, Q., & Nishida, T. (2004). Converting text into agent animations: Assigning gestures to text. In Proceedings of HLT-NAACL 2004: Short Papers (pp. 153–56).

Ng-Thow-Hing, V., Luo, P., & Okita, S. (2010). Synchronized gesture and speech production for humanoid robots. IEEE/RSJ International Conference on Intelligent Robots and Systems . https://doi.org/10.1109/IROS.2010.5654322

Nozawa, Y., Dohi, H., Iba, H., & Ishizuka, M. (2004). Humanoid robot presentation controlled by multimodal presentation markup language MPML. In: Proceedings of the 13th IEEE international workshop on robot and human interactive communication (pp. 153–158). https://doi.org/10.1109/ROMAN.2004.1374747 .

Sauppé, A., & Mutlu, B. (2014). Robot deictics: How gesture and context shape referential communication. In Proceedings of the 9th ACM/IEEE international conference on human-robot interaction (HRI2014) , 342–349.

Sharp Corporation. (2016). Robohon. Retrieved Oct 26, 2020 from https://robohon.com/global/ .

Softbank Robotics Co. Ltd. (2018). NAO the humanoid and programmable robot. Retrieved Oct 26, 2020 from https://www.softbankrobotics.com/emea/en/nao/ .

Tanaka, T., Fujikake, K., Takashi, Y., Yamagishi, M., Inagami, M., Kinoshita, F., Aoki, H., & Kanamori, H. (2017). Driver agent for encouraging safe driving behavior for the elderly. In Proceedings of the 5th international conference on human agent interaction (pp. 71–79). https://doi.org/10.1145/3125739.3125743 .

Vstone Co. Ltd. (2010). Social communication robot Sota. Retrieved Oct 26, 2020 from https://www.vstone.co.jp/products/sota/ .

Vstone Co. Ltd. (2018). Presentation Sota. Retrieved Oct 26, 2020 from https://sota.vstone.co.jp/home/presentation_sota/ .


Acknowledgements

Not applicable.

The work is supported in part by JSPS KAKENHI Grant Numbers 18K19836 and 20H04294.

Author information

Authors and affiliations.

The University of Electro-Communications, 1-5-1, Chofugaoka, Chofu, Tokyo, 182-8585, Japan

Tatsuya Ishino & Akihiro Kashihara

NTT Human Informatics Laboratories, 1-1, Hikari-no-oka, Yokosuka, Kanagawa, 249-0847, Japan

Mitsuhiro Goto


Contributions

TI developed the theoretical framework and system for robot lecture and conducted the case study. MG modeled the robot lecture and developed the robot lecture system. AK managed the overall project and developed the theory of robot lecture. The authors read and approved the final manuscript.

Corresponding author

Correspondence to Akihiro Kashihara .

Ethics declarations

Competing interests.

The authors declare that they have no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Ishino, T., Goto, M. & Kashihara, A. Robot lecture for enhancing presentation in lecture. RPTEL 17 , 1 (2022). https://doi.org/10.1186/s41039-021-00176-6


Received : 02 November 2020

Accepted : 28 November 2021

Published : 05 January 2022

DOI : https://doi.org/10.1186/s41039-021-00176-6


  • Devices for learning
  • Human–Robot interaction
  • Non-verbal behavior
  • Robot presentation



Seminar report Presentation on Robotics

Adebayo Suleiman

Related Papers

SHARMENDRA YADAV (PGP Mumbai 2015-17)


thanga karuppiah

Robots play an important role around the world, in many industries and for many purposes. This study reviews the field of robotics: the kinds of robots in use, the applications they are mapped to, and the new kinds of robots emerging. It also covers the hardware and software used to build robots and how they are applied.

Richard Balogh

A model of a simple ball-following robot is presented. The robot, equipped with a camera, follows a white ping-pong ball, trying to keep it in the centre of the camera image. Using classical control theory, we try to explain and predict the behaviour of the mobile robot controlled by an agent-space architecture controller. Some problems with different control algorithms are described, and the design of a simple proportional feedback controller is compared with the agent-space architecture.

International Journal of Scientific and Engineering Research

Chirag sharma

A robot is a mechanical system that performs automated physical tasks, either under direct human supervision, according to a pre-defined program, or following a set of general guidelines using artificial intelligence techniques. In this article, a brief introduction to robotics and its applications is first given. The article is an outcome of a study-oriented project and presents basic information on robotics and its possible applications in all fields of our modern world, including the laws of robotics, the types of robots, and basic information about their origin and invention.

Roshan Ranjan

Artificial intelligence (AI) is a division of computer science that explores intelligent behaviour, learning, and adaptation in machines. AI is a young science and is still a fragmented collection of subfields. The field of robotics is closely related to artificial intelligence. Robotics is the science and technology of robots: their design, manufacture, and application. Robots play an important role in day-to-day life. The first truly modern robot (digitally operated, programmable, and teachable) was invented by George Devol in 1954 and was ultimately called the Unimate. This paper presents artificial intelligence in robotics: how robots work, their components, some types of robots, how robots interact with humans, their properties, and some of their uses.

Journal of Polytechnic

enes cengiz

In this study, the position-control simulation of a 3-degree-of-freedom (3DOF) robot arm was evaluated with machine learning and with inverse kinematic analysis separately. The robot arm is designed in an RRR configuration. In the inverse kinematic analysis, the geometric approach and the analytical approach are used together. A Multi-Layer Perceptron (MLP) was used as the machine-learning method: some of the coordinates the robot arm can reach in its workspace were selected, and the MLP model was trained on these data, yielding a correlation coefficient (R2) of 1. The coordinates of three geometric paths within the workspace (a helix, a star, and a daisy) were used as test data for the MLP model. These tests were simulated in 3D in the MATLAB environment, and the simulation results were compared with the inverse-kinematics data. As a result, Mean Relative Error (MRE) values for helix,...
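As a hedged illustration of the geometric approach mentioned above, the closed-form inverse kinematics of a planar two-link arm (a simplified relative of the RRR arm in the paper) can be written as follows. The link lengths, function name, and choice of elbow branch are assumptions for the sketch, not the paper's formulation.

```python
import math

# Geometric inverse kinematics for a planar two-link arm: given a target
# point (x, y) and link lengths l1, l2, solve for the joint angles.
def ik_2link(x: float, y: float, l1: float, l2: float):
    """Return joint angles (theta1, theta2) in radians reaching (x, y)."""
    d2 = x * x + y * y                               # squared distance to target
    cos_t2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)  # law of cosines
    if abs(cos_t2) > 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(cos_t2)                       # one of the two elbow branches
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

# Forward-kinematics check: the angles must map back to the target point.
t1, t2 = ik_2link(1.0, 1.0, 1.0, 1.0)
fx = math.cos(t1) + math.cos(t1 + t2)
fy = math.sin(t1) + math.sin(t1 + t2)
```

Sampling such (x, y) to (theta1, theta2) pairs across the workspace is exactly the kind of dataset on which an MLP like the one in the study could be trained, with the closed-form solution serving as ground truth.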

Trevor Holden

Saurabh Meena

Proceedings of the Second International Conference on Computational Science, Engineering and Information Technology - CCSEIT '12

kartik sharma

Design, Modelling and Fabrication of Advanced Robots

Mariselvam Monisa

A robot is any automatically operated machine that replaces human effort; it need not resemble humans in appearance or perform functions in a human-like manner. Robotics covers the design, construction, and use of such machines. Robots have traditionally been used to perform manual and repetitive tasks in industries such as automobile manufacturing, and to take over hazardous work that humans would otherwise have to do. They are now widely used in assembly, transportation, earth and space exploration, surgery, and weapons. Robots eliminate jobs that are dangerous to humans because they are capable of working in hazardous environments; they can handle heavy loads, toxic substances, and repetitive tasks. This helps companies prevent many accidents and saves time and money. Robots can repeat the same task over and over without tiring, and they are very precise, to fractions of an inch, as required, for example, in a microelectronic product that requires a man-like machine and performs mechanical, rout...








COMMENTS

  1. (PDF) ARTIFICIAL INTELLIGENCE IN ROBOTICS: FROM ...

    This research paper explores the integration of artificial intelligence (AI) in robotics, specifically focusing on the transition from automation to autonomous systems. The paper provides an ...

  2. Review of Robotics Technologies and Its Applications

    Abstract: Robots are automatic equipment integrating advanced technologies from multiple disciplines such as mechanics, electronics, control, sensors, and artificial intelligence. After a brief introduction to the development history of robotics, this paper reviews the classification of robot types, the key technologies involved, and the applications in various fields, and analyzes the ...

  3. ROBOTICS Paper Presentation

    ROBOTICS paper presentation - Free download as PDF File (.pdf), Text File (.txt) or read online for free.

  4. Robots in Industry: The Past, Present, and Future of a Growing

    Robots have been part of automation systems for a very long time, and in public perception they are often synonymous with automation and the industrial revolution per se. Fueled by Industry 4.0 and Internet of Things (IoT) concepts as well as by new software technologies, the field of robotics in industry is currently undergoing a revolution of its own. This article gives an overview of the ...

  5. Advancements in Humanoid Robots: A Comprehensive Review and Future

    Abstract: This paper provides a comprehensive review of the current status, advancements, and future prospects of humanoid robots, highlighting their significance in driving the evolution of next-generation industries. By analyzing various research endeavors and key technologies, encompassing ontology structure, control and decision-making, and perception and interaction, a holistic overview ...

  6. (PDF) Advanced Applications of Industrial Robotics: New ...

    three main functions in which robots replace humans: (1) extraction of useful information from massive data flows; (2) accurate movements to manipulate an object or tool; and (3) ...

  7. (PDF) The future of Robotics Technology

    Abstract. In the last decade the robotics industry has created millions of additional jobs led by consumer electronics and the electric vehicle industry, and by 2020, robotics will be a $100 ...

  8. PDF Introduction to Robotics

    Laws of Robotics • Asimov proposed three "Laws of Robotics" and later added the "zeroth law" • Law 0: A robot may not injure humanity or through inaction, allow humanity to come to harm • Law 1: A robot may not injure a human being or through inaction, allow a human being to come to harm, unless this would violate a higher order law

  9. Artificial Intelligence and Robotics

    Artificial Intelligence and Robotics - arXiv.org

  10. Robotics and Autonomous Systems

    About the journal. Robotics and Autonomous Systems will carry articles describing fundamental developments in the field of robotics, with special emphasis on autonomous systems. An important goal of this journal is to extend the state of the art in both symbolic and sensory based robot control and learning in the context of autonomous systems.

  11. IEEE Paper Presentation On Autonomous Robotics

    IEEE Paper Presentation on Autonomous Robotics - Free download as Powerpoint Presentation (.ppt / .pptx), PDF File (.pdf), Text File (.txt) or view presentation slides online. 1) Robotics is the branch of technology that deals with the design, construction, operation and application of robots and computer systems for their control, sensory feedback, and information processing.

  12. 500 research papers and projects in robotics

    In this post, we have listed 500+ recent research papers and projects for those who are interested in robotics. These free, downloadable research papers can shed light on some of the complex areas in robotics such as navigation, motion planning, robotic interactions, obstacle avoidance, actuators, machine learning, computer vision ...

  13. Robotics Technology Thesis

    Robotics Technology Thesis Presentation. Technology. Free Google Slides theme and PowerPoint template. If your thesis is on robotics technology, you're in luck, because this is the template you need! With gray, blue and white as its main colors and with its retro computer typography, the design brings to mind the evolution of robotics and ...

  14. PDF Welcome to the presentation of World Robotics 2022

    Re- and nearshoring of production. Securing supply chains. Increasing resilience and flexibility (logistics, politics) "Democratizing" robotics. Low-cost robotics opens up new customer segments. Easy setup and installation (out-of-the-box solutions) New distribution channels. Ongoing trend to high mix-low volume production.

  15. Robot lecture for enhancing presentation in lecture

    In lectures with presentation slides such as an e-learning lecture on video, it is important for lecturers to control their non-verbal behavior involving gaze, gesture, and paralanguage. However, it is not so easy even for well-experienced lecturers to properly use non-verbal behavior in their lecture to promote learners' understanding. This paper proposes robot lecture, in which a robot ...

  16. Robotics presentation

    A presentation on the future of robotics across the four industrial revolutions, with content about mechatronics engineering. I collected all the resources together and used them in a seminar presentation; I share them here for anyone who needs these technological resources.

  17. Robotics

    This literature review presents a comprehensive analysis of the use and potential application scenarios of collaborative robots in the industrial working world, focusing on their impact on human work, safety, and health in the context of Industry 4.0. The aim is to provide a holistic evaluation of the employment of collaborative robots in the current and future working world, which is being ...

  18. How to Evaluate and Present Your Robotics Project

    2 Plan your structure. A good presentation has a clear and logical structure that guides the audience through your robotics project. A typical structure consists of three main parts: introduction ...

  19. Swarm Robotics: Past, Present, and Future [Point of View]

    Swarm robotics deals with the design, construction, and deployment of large groups of robots that coordinate and cooperatively solve a problem or perform a task. It takes inspiration from natural self-organizing systems, such as social insects, fish schools, or bird flocks, characterized by emergent collective behavior based on simple local interaction rules [1], [2]. Typically, swarm robotics ...

  20. Free Google Slides and PowerPoint Templates on robots

    Make an impactful presentation of your mechanical articulating axes project with this modern, futuristic template. The design is inspired by robotics and technology, with a cool blue background, illustrations of mechanical articulating axes here and there, and a modern layout. Showcase the strengths of your project and explain the benefits...

  21. Seminar report Presentation on Robotics

    The field of robotics is closely related to Artificial intelligence. Robotics is the science and technology of robots, their design, manufacture, and application. Robots play an important role in day to day fields. The first truly modern robot, digitally operated, programmable, and teachable, was invented by George Devol in 1954 and was ...

  22. PDF Paper Presentation

    Paper Presentation Robotics 2024 - ShanghaiTech University • Every student presents one paper (individual presentation)! ... • Submit presentation pdf or ppt AND paper pdf to your group repository (in "doc/presentations/email") till Wednesday, April 10 22:00 to repo! Late submissions (or if you

  23. Robotics, IoT, and AI in the Automation of Agricultural Industry: A

    This paper presents a review on various agricultural practices and aspects that can be or currently are automated, using robotics, IoT and Artificial Intelligence (AI) more prolifically. Alongside, the current and future perspectives are dealt with, covering major technology innovations focused around smart farming, precision agriculture, vertical farming, modern greenhouse practices ...