AI-based Drone Assisted Human Rescue in Disaster Environments: Challenges and Opportunities

In this survey, we focus on utilizing drone-based systems to detect individuals, particularly by identifying human screams and other distress signals. This work is highly relevant in post-disaster scenarios such as earthquakes, hurricanes, military conflicts, and wildfires.

Unmanned aerial vehicles (UAVs), commonly referred to as drones, are frequently deployed for search-and-rescue missions during disaster situations. They are capable of hovering over disaster-stricken areas that may be challenging for rescue teams to access directly, enabling them to pinpoint potential locations where people might be trapped, and they can cover larger areas in shorter timeframes than ground-based rescue efforts or even specially trained search dogs. Typically, drones capture aerial images to assess structural damage and identify the extent of the disaster. They also employ thermal imaging technology to detect body heat signatures, which can help locate individuals. In some cases, larger drones are used to deliver essential supplies to people stranded in isolated disaster-stricken areas.

In our discussions, we delve into the unique challenges associated with locating humans through aerial acoustics. The auditory system must distinguish between human cries and naturally occurring sounds, such as animal calls and wind. Additionally, it should be capable of recognizing distinct patterns in signals like shouting, clapping, or other ways in which people attempt to signal rescue teams. One solution to this challenge involves harnessing artificial intelligence (AI) to analyze sound frequencies and identify common audio “signatures”. Deep learning-based networks, such as convolutional neural networks (CNNs), can be trained on these signatures to filter out noise generated by drone motors and other environmental factors. Furthermore, employing signal processing techniques such as direction-of-arrival (DOA) estimation from microphone array signals can enhance the precision of tracking the source of human sounds.


I Introduction

Natural disasters represent dire situations that imperil the lives of numerous individuals, and search and rescue operations become paramount in mitigating risks to both human lives and the environment. In these circumstances, telecommunication networks often suffer damage, impeding the effectiveness of search and rescue operations and necessitating significant time and resources for their restoration. Consequently, there is an urgent need to swiftly repair impaired telecommunication systems, enabling search and rescue teams to exchange critical information and facilitate real-time infrastructure recovery efforts.

The proposed monitoring process serves as a versatile system that enhances the efficiency of operations during extensive wildfires. Its advantage over existing methods lies in its speed: continuous monitoring of the disaster can produce near-instantaneous results, in particular by swiftly updating incident locations and coordinates, a task that exceeds human capabilities owing to the drones’ high vantage point and flexibility. Once a detection occurs, the coordinates are promptly decoded, the data is analyzed, and the processed information is transmitted to the appropriate institutions. Taking the initiative to mitigate and minimize loss of life before the relevant rescue teams arrive is imperative.

Continuous monitoring empowers us to maintain constant awareness and control over situations, enabling a responsive approach in every circumstance. This capability serves as a preventive measure against potential catastrophes. Implementing this approach will involve deploying multiple drones, with at least two required to ensure uninterrupted surveillance of the territory.

Related Work: Several works have been undertaken in the field of search and rescue operations. Alsamhi et al. [1] assess the network performance of UAV-supported intelligent edge computing to accelerate Search And Rescue (SAR) operations. This technology offers rapid deployment capabilities and can aid in disaster-related rescues.

Alawad et al. [2] propose a crisis and disaster management system that leverages a swarm optimization algorithm (SOA) to enhance disaster and crisis management efforts. Within this system, the UAV search and rescue team operates under a delay-tolerant network (DTN) strategy, enabling efficient exploration capabilities.

Erdelj and Natalizio [3] identify the primary applications of UAV networks in disaster management and examine key research challenges within the domain of UAV-assisted disaster management.

In our previous work [4], we created an integrated system featuring two distinct sensing mechanisms, enabling real-time detection and precise localization of humans and animals even in thick smoke and enhancing the situational awareness of firefighters on site. An extended version of this work has been submitted for a patent [5].

In this article, we illustrate our strategy for combating large-scale wildfires through the utilization of unmanned aerial vehicles (UAVs) and neural network algorithms, with a primary focus on significantly reducing human casualties [6]. UAV technology plays a pivotal role in mitigating the impact of emergency situations and expediting rescue and recovery efforts [7, 8, 9]. Our foremost objective is the swift detection of emerging disasters and prompt reporting to all relevant law enforcement agencies. A swift response to such incidents presents a valuable opportunity to contain a wildfire’s spread, with every moment gained being crucial in limiting the disaster’s extent.

Moreover, we conduct a comprehensive exploration of speech recognition algorithms, perform comparative assessments, and uncover their potential applications in diverse environmental conditions. Our UAVs are outfitted to receive audio signals and discern the source’s orientation via an advanced microphone array.

Once the information is received, we initiate signal processing using specialized neural network algorithms designed for speech recognition. After extensive testing with UAVs, we have verified the efficiency of microphone array technology, which depends on sound and voice recognition algorithms. This technology accurately pinpoints the direction and location of sound sources.
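As an illustration of such an audio “signature”, a minimal sketch (using NumPy only, with assumed band limits; this is not the CNN pipeline itself) might compare the spectral energy in the band where screams tend to concentrate against the low-frequency band dominated by rotor noise:

```python
import numpy as np

def band_energy_ratio(signal, fs, band=(1000.0, 4000.0)):
    """Fraction of spectral energy inside `band` (Hz).

    A crude stand-in for a learned audio signature: human screams
    concentrate energy around 1-4 kHz, while drone motor noise is
    dominated by low-frequency harmonics. The band limits here are
    illustrative assumptions, not tuned values from the paper.
    """
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    total = spectrum.sum()
    return float(spectrum[in_band].sum() / total) if total > 0 else 0.0

fs = 16000
t = np.arange(fs) / fs                       # one second of audio
scream_like = np.sin(2 * np.pi * 2500 * t)   # tone inside the scream band
motor_like = np.sin(2 * np.pi * 150 * t)     # low-frequency rotor hum
```

A detector could threshold this ratio as a cheap pre-filter before invoking a heavier neural-network classifier.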

In the monitoring process, the use of cameras and advanced image processing algorithms, leveraging deep neural networks, enables us to pinpoint the origins of the disaster, evaluate its scale, and approximate the initial coordinates of the incident location.

After the initial detection of the disaster, to obtain more precise location coordinates, we will utilize techniques for detecting, analyzing, and processing audio signals. Employing a microphone array, we will capture specific information related to the disaster’s location. Subsequently, this data will be processed using deep learning algorithms, including speech recognition techniques.

In our research, we harnessed two distinct neural network algorithms—one crafted for image processing and the other for sound recognition. Each of these algorithms serves a unique purpose, and their combined utilization allows us to complement and address each other’s limitations effectively.

The rest of the paper is organized as follows. In Section II, we present UAV-assisted disaster management and its applications, followed by situation awareness, logistics, and evacuation support in Section III, and SAR missions in Section IV. In Section V, we discuss medical material delivery and telemedicine platform applications. In Section VI, we present state-of-the-art technologies that use UAVs to localize people in emergency situations. In Section VII, we present the advantages and disadvantages of the hardware, followed by design challenges for UAV-assisted emergency networks in Section VIII and the design architecture in Section IX. Finally, we present future directions in Section X and draw the main conclusions in Section XI.

II UAV Assisted Disaster Management Applications

II-A Monitoring, Forecasting, and Early Warnings

Environmental monitoring and the proactive identification of potential emergency situations constitute the primary line of defense in protecting the population from both natural and human-induced hazards. Our capacity to rapidly and effectively locate sources of risk empowers us to anticipate potential disruptions within specific regions and pre-emptively recognize imminent threats. This forward-looking strategy equips us to put in place all essential safeguards, preventing these hazards from evolving into full-fledged emergencies. The accuracy of our predictions related to weather conditions, natural calamities, or human-generated mishaps directly influences their effectiveness in preserving human lives and welfare.

To address these challenges, a range of monitoring, data analysis, and processing methods are employed. The contemporary system for emergency monitoring and forecasting is designed to observe, supervise, and predict hazardous weather events and natural occurrences, as well as technological developments and their evolving dynamics. Emergency forecasting serves the purpose of assessing the magnitude of potential disasters and orchestrating efficient preventive measures in response to them.

Utilizing UAVs, we can effectively search for individuals who are lost in forests, inspect remote and challenging-to-access areas, and carry out other essential operations. Nevertheless, drones do have a notable limitation: their flight duration. To overcome this constraint and cover extensive areas while gathering the additional data required for modeling and predicting various emergency situations, we deploy multiple UAVs instead of relying on a single one.

Monitoring: Our primary objective involves the use of a surveillance camera mounted on a drone. By leveraging the capabilities of the Arducam IMX477 camera, we will maintain constant awareness of the ongoing situation and have the capacity to document even the slightest alterations. Furthermore, the thermal camera “Seek Thermal” ensures that visibility remains unimpeded during nighttime events.

Forecasting: Following the reception of data collected by the cameras, this phase will commence. The information will be transmitted to the Jetson Nano microcomputer, which will undertake image processing using advanced deep-learning techniques. The Jetson Nano microcomputer is renowned for its superior performance, especially when working with artificial intelligence algorithms, outperforming even the Raspberry Pi. Upon completion of the algorithm, we will obtain a forecast indicating the presence of a disaster and its likelihood.

Early warning: This constitutes a crucial juncture, as it is the very reason for conducting monitoring; without it, terrain exploration and forecasting would serve no purpose. Upon obtaining the probability forecast, we promptly compare it against our predefined threshold value. If the result surpasses this threshold, we expeditiously trigger an event and transmit the pertinent information to the relevant authorities, ensuring no time is wasted in taking the necessary actions. The event will encompass the following information:

The event will include the time at which the disaster was detected. (day/month/year: hour/minute/second)

The event will provide precise coordinates, along with details regarding the scope and scale of the fire.

Additionally, the event will convey information about the direction in which the fire is spreading.

Furthermore, the event will furnish data concerning the speed at which the fire is advancing.

The elements mentioned encompass all the essential details required by the relevant authorities. Armed with information regarding the scale and direction of the incident, responders can act swiftly and efficiently to prevent and extinguish the fire.
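The threshold test and the event payload described above can be sketched as follows. The threshold value, field names, and units are illustrative assumptions; the paper does not fix them.

```python
from dataclasses import dataclass
from datetime import datetime

THRESHOLD = 0.8  # assumed detection threshold; the deployed value may differ

@dataclass
class FireEvent:
    detected_at: str             # day/month/year: hour/minute/second
    coordinates: tuple           # (latitude, longitude) of the incident
    scale_m2: float              # estimated extent of the fire
    spread_direction_deg: float  # compass bearing of the fire's spread
    spread_speed_mps: float      # advance speed of the fire front

def maybe_raise_event(probability, coordinates, scale_m2,
                      direction_deg, speed_mps, now=None):
    """Return a FireEvent when the forecast probability reaches THRESHOLD,
    otherwise None (the detection is discarded)."""
    if probability < THRESHOLD:
        return None
    now = now or datetime.now()
    return FireEvent(
        detected_at=now.strftime("%d/%m/%Y: %H/%M/%S"),
        coordinates=coordinates,
        scale_m2=scale_m2,
        spread_direction_deg=direction_deg,
        spread_speed_mps=speed_mps,
    )
```

A returned `FireEvent` would then be serialized and transmitted to the relevant authorities.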

The precision of emergency monitoring and forecasting significantly affects how effectively the associated risks can be mitigated and how well potential catastrophes can be suppressed before they fully develop.

II-B Disaster Information Fusion and Sharing

In responding to emergency situations, the integration of data from multiple sources is of paramount importance, as it enables the delivery of the most up-to-date, precise, and timely information across various scales to support disaster risk reduction services.

Information integration involves amalgamating data from diverse sources, each with varying conceptual, contextual, and typographic representations. This process is employed for tasks such as data mining and consolidating information from unstructured or semi-structured resources. While information integration primarily pertains to textual knowledge representation, it can also extend to multimedia content. Noteworthy technologies utilized for information integration encompass deduplication and string metrics, which facilitate the identification of similar text across different data sources through fuzzy matching. Additionally, some methods rely on causal outcome estimates based on a source model to enhance the integration process.

Information fusion and sensor fusion, often referred to as multi-sensor fusion, are interconnected concepts related to information integration.

Information fusion encompasses the process of amalgamating data from multiple sources to generate a new dataset that accurately represents a true value for a particular data item, especially when different data sources offer conflicting information. A variety of algorithms have been developed to address this challenge, spanning from straightforward techniques like majority voting to more intricate approaches capable of assessing the reliability of various data sources. These methods play a vital role in enhancing the accuracy and reliability of integrated information for decision-making and problem-solving in various fields, including sensor networks and data analysis.
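The simplest of these techniques, majority voting, can be sketched in a few lines. The tie-breaking rule and the example values are illustrative assumptions.

```python
from collections import Counter

def majority_vote(values):
    """Resolve conflicting reports of the same data item by majority voting.

    Ties are broken arbitrarily by first occurrence; real fusion systems
    would instead weight each source by its estimated reliability.
    """
    if not values:
        raise ValueError("no values to fuse")
    value, _count = Counter(values).most_common(1)[0]
    return value
```

For example, if three sources report the latitude of an incident and two agree, the agreed value wins the vote.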

Sensor fusion is the procedure of merging data obtained from sensors or diverse sources to produce information that has reduced uncertainty compared to using these sources individually. This approach aims to achieve a result that is not only “more accurate” but also “more comprehensive” when compared to relying on a single sensor.

For instance, it is conceivable to enhance the precision of determining the location of an object within an environment by integrating data from multiple sources, such as video cameras, GPS systems, and other sensor inputs. In our project, we will promptly integrate data from GPS, cameras, and microphone arrays to improve the overall accuracy and completeness of our information gathering and analysis.


This method provides a significant advantage when utilizing data collected from separate sources, compared to relying solely on their individual results. Both camera and sound data processing yield outcomes in the form of probabilities, enabling us to determine a positive result when our probability exceeds a predefined threshold. For instance, if our minimum threshold for both the camera and microphone is set at 0.8, any values below this threshold will be disregarded. When using the data sources independently, any value below 0.8 would result in a negative determination.

As an example, if the camera data yields a probability of 0.7 and the microphone data yields 0.6, under standard circumstances we would conclude that there is no event. However, through the information integration method, we depart from the conventional approach and combine the results, so that 0.7 and 0.6 become components of a single assessment. In this way, we can correctly detect the event, a conclusion that would not be attainable without the application of information integration principles. This approach allows us to leverage the strengths of multiple data sources and increase the reliability of our results.
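The text does not specify the combination rule; one common choice that reproduces the behavior in the example above is the noisy-OR rule for independent detectors, sketched here with the example's numbers:

```python
def fuse_noisy_or(probs):
    """Noisy-OR fusion of independent detection probabilities:
    P(event) = 1 - prod_i (1 - p_i). This rule is an assumption;
    the paper may use a different fusion scheme."""
    result = 1.0
    for p in probs:
        result *= (1.0 - p)
    return 1.0 - result

THRESHOLD = 0.8
camera_p, mic_p = 0.7, 0.6          # the probabilities from the example
fused = fuse_noisy_or([camera_p, mic_p])  # 1 - 0.3 * 0.4 = 0.88
```

Each source alone falls below the 0.8 threshold, yet the fused estimate of 0.88 exceeds it, so the event is detected.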

III Situation Awareness and Logistics and Evacuation Support

Situational awareness (SA) entails perceiving elements and events in the environment concerning time and space, comprehending their significance, and anticipating their future developments. At its core, SA involves understanding what is transpiring in the environment and how it impacts both the present and the future. While this may appear straightforward in stable and controllable settings, it can pose substantial challenges in swiftly evolving and intricate circumstances. Consequently, situational awareness is particularly pertinent in scenarios characterized by a high degree of variability, uncertainty, complexity, and ambiguity. One of the principal objectives of situational awareness is the identification and prevention of errors, making it an indispensable concept in various domains where precision and adaptability are crucial.

The formal definition of SA can be divided into three key components: the perception of elements within the environment, the comprehension of the situation, and the ability to forecast future developments. Research in the field of SA has primarily focused on three aspects: States of SA, Systems of SA, and Processes of SA.

States of SA : This aspect pertains to the actual level of awareness an individual possesses regarding a given situation.

Systems of SA : This aspect concerns the distribution of SA among objects within the environment and the exchange of SA between various components of a system.

Processes of SA : This aspect involves the continuous updating of SA states and the factors that drive instantaneous changes in SA.

Numerous models have been developed to describe and understand situational awareness, with one of the most widely recognized and practical models being the three-level Endsley model. This model provides a structured framework for comprehending and assessing situational awareness in complex environments.

III-A Perception of SA

The first level of situational awareness is closely linked to the perception of pertinent information. This has two significant implications. Firstly, individuals must have access to relevant information, which they must recognize and grasp within seconds of accessing it. Consequently, one of the fundamental prerequisites for achieving the first level of SA is effective communication and proper visualization. For instance, if a project manager is not informed about a potential issue that could lead to project delays, they cannot initiate corrective actions to address it. Consequently, they lack the first level of SA, which hinders their ability to make informed decisions and steer the project in the right direction.

III-B Understanding the SA

The second level of SA involves the essential task of accurately comprehending pertinent information. Depending on the specific situation, this necessitates having the appropriate knowledge to effectively interpret the received information.

Mental models play a pivotal role in achieving the second level of SA. This is because individuals construct new mental models or modify existing ones based on how they interpret information. When crucial information is absent or the data is incorrect, it implies that the mental model is flawed. Consequently, individuals encounter difficulties at the second level of SA, as their understanding of the situation is compromised.

III-C Forecasting

The third and final level of SA pertains to the ability to predict future states based on perceived and relevant information. This becomes especially crucial when dynamic processes are anticipated in the future, often relying on assumptions.

In complex systems, marked by a high degree of interdependence, it becomes challenging to predict how changes in one variable might influence the overall state of the system. This underscores the significance of accurate forecasting at the third level of SA.

Moreover, incorrect or outdated mental models can also lead to shortcomings at the third level of SA, as they can result in flawed assumptions about future developments and hinder the ability to make well-informed decisions in rapidly evolving situations.

IV SAR Missions – UAVs Can Search for and Rescue People Lost, Injured or Trapped by Debris

Search and Rescue (SAR) is an emergency service dedicated to locating and rescuing individuals who are lost, missing, or injured, particularly in wilderness and urban emergencies. SAR is an extensive undertaking, involving highly trained military specialists, local law enforcement agencies, and dedicated civilian volunteers, often organized as voluntary non-profit teams. The primary objective of SAR is to locate individuals in distress, provide initial medical care, and evacuate them safely.

Efficiently saving lives during and after an incident or a natural disaster is a time-consuming and perilous task for rescue teams. The longer the rescue operation takes, the greater the risk to those in need. In such critical scenarios, drones offer a ray of hope. They significantly reduce the time required to search for individuals or objects, decrease operational costs, and, most importantly, minimize the risks faced by search and rescue personnel.

V Medical Material Delivery/Telemedicine Platform

The future use of drones in healthcare presents a range of opportunities to enhance safety and healthcare delivery. Here’s how the industry can best utilize this technology:

Disaster Response and Relief : Drones can play a critical role in delivering food aid and medical supplies to disaster-stricken areas. Rapid delivery of essential items directly to disaster zones can help prevent outbreaks of life-threatening infectious diseases. This includes delivering communication equipment, mobile medical units, and portable shelter, particularly in situations where damage to critical infrastructure disrupts ground or conventional air transportation.

Remote Medical Care : Drones facilitate more effective medical care for patients in remote or mobile settings. They can deliver medicines and supplies to patients receiving home care instead of being in a hospital. For example, when a healthcare worker visits a homebound patient, blood samples can be collected and sent by drone to a laboratory for analysis. Medications, antibiotics, and prescribed treatments can be delivered to patients’ homes. This technology can extend the duration of home care for individuals in nursing homes, enhancing the independence of the aging population. Drones can also monitor patients with conditions like dementia or deliver food to those who cannot cook independently.

Incorporating drones into healthcare operations not only expedites critical responses in emergencies but also expands access to medical services for patients in remote locations or receiving home care. It has the potential to revolutionize healthcare delivery by making it more efficient, convenient, and responsive to the needs of diverse patient populations.

Radar technology is used to determine physiological parameters such as breathing and heart rate by emitting periodic, known narrowband pulses that interact with a person. The time delay and frequency modulation in the received signal are directly related to the target’s unknown range and speed, respectively. By analyzing these parameters in the received signal, radar systems can deduce critical health information. This includes detecting the micro-Doppler effect caused by tiny movements of body parts like the lungs and heart; for example, breathing causes millimeter-level displacement, while heartbeat results in sub-millimeter displacement.
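The round-trip phase shift underlying this measurement is φ(t) = 4π·d(t)/λ, where d(t) is the chest displacement and λ the radar wavelength. The following sketch simulates a breathing target and recovers its rate from the phase spectrum; the carrier frequency, sampling rate, and displacement amplitude are illustrative assumptions, not the paper's hardware parameters.

```python
import numpy as np

# Illustrative parameters (assumed)
c = 3e8
f_carrier = 24e9                 # 24 GHz short-range radar
wavelength = c / f_carrier       # 12.5 mm
fs = 100.0                       # slow-time sampling rate (Hz)
duration = 20.0                  # seconds of observation
breath_rate_hz = 0.25            # 15 breaths per minute
chest_amp = 0.004                # ~4 mm chest displacement (millimetre level)

t = np.arange(0, duration, 1.0 / fs)
displacement = chest_amp * np.sin(2 * np.pi * breath_rate_hz * t)
# Round-trip phase modulation: phi(t) = 4*pi*d(t)/lambda
phase = 4 * np.pi * displacement / wavelength

# Recover the breathing rate from the spectrum of the phase signal
spectrum = np.abs(np.fft.rfft(phase - phase.mean()))
freqs = np.fft.rfftfreq(len(phase), d=1.0 / fs)
estimated = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
```

The sub-millimetre heartbeat component would appear the same way, as a weaker spectral peak near 1 Hz, and is typically separated from breathing by band-pass filtering.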

In healthcare, drone systems have the potential to address a wide range of challenges faced by healthcare workers, from emergency responders in underserved regions to busy hospital staff. Drones can rapidly transport vital medical supplies like blood, vaccines, contraceptives, and antivenom to remote areas, reaching patients in need of immediate care within minutes, which can be a life-or-death situation.

They can also streamline logistics within hospital premises, transferring medications between different hospital buildings and providing tools to assist elderly patients in aging in place. Drones offer numerous promising opportunities for the healthcare sector, not only in terms of saving lives but also in reducing costs.

One significant advantage of drones in healthcare is their ability to cover longer distances at higher speeds when transporting blood products and laboratory samples. Currently, ground vehicles are often used for such transport, which can result in accidents and delays. Drones can mitigate these issues and enhance the efficiency of healthcare logistics.

However, like any industry adopting drone technology, healthcare faces various challenges, including payload capacity, battery life, and regulatory compliance. Addressing these issues will be crucial for realizing the full potential of drones in healthcare.

VI State-of-The-Art Technologies Using UAVs to Localize People in Emergency Situations

VI-A Sensors: Accelerometer, Magnetometer, Gyroscope, Barometer

A sensor is a device designed to detect and respond to various types of input from the physical environment. These inputs can encompass a wide range of phenomena, including light, heat, motion, humidity, pressure, and numerous other environmental factors. Sensors serve as a crucial link between the physical world and the digital realm by essentially acting as the “eyes and ears” for computing systems. They collect data from their surroundings, which is then analyzed and acted upon by the computing infrastructure.

Sensors can be categorized in several ways, one of which is the division between active and passive sensors:

Active Sensor : An active sensor is a type of sensor that necessitates an external power supply to respond to environmental inputs and generate outputs. For instance, sensors used in meteorological satellites require a power source to provide meteorological data about the Earth’s atmosphere.

Passive Sensor : In contrast, a passive sensor doesn’t rely on an external power source to detect environmental influences. It harnesses the energy present in the environment itself, such as light or heat energy, to function. A classic example is the mercury glass thermometer, where temperature changes cause the level of mercury in a glass tube to rise or fall, providing an easy-to-read gauge for temperature measurement.

Certain sensor types, like seismic and infrared sensors, are available in both active and passive forms, offering versatility in various applications.

Another classification criterion for sensors is whether they are analog or digital, based on the type of output they produce:

Analog Sensor : Analog sensors convert environmental inputs into analog outputs, which are continuous and variable. These sensors provide data in a continuous manner, representing a range of values.

Digital Sensor : Digital sensors, on the other hand, convert environmental inputs into discrete digital signals, which are transmitted in binary format (comprising 1s and 0s). Digital sensors provide data in a binary, on/off manner, allowing for more precise and straightforward data processing and transmission.

The choice between analog and digital sensors depends on the specific application and the requirements for data accuracy, resolution, and processing.

VI-B Video Camera/RGB Depth Camera

The term “RGB” refers to a color model that combines the primary colors of light, namely red, green, and blue, to produce a wide range of colors that humans perceive. An RGB camera is a type of camera used to capture color images by recording light in these red, green, and blue wavelengths (RGB). This camera operates within the visible light spectrum, typically ranging from 400 to 700 nanometers (nm).

In contrast, an RGBD camera is a specialized type of depth camera that provides both depth (D) and color (RGB) data in real time. The depth information is obtained from a depth map or image generated by a 3D depth sensor, such as a stereo or time-of-flight sensor. RGBD cameras merge RGB color data and depth information at a pixel level, allowing them to convey both types of data within the same frame.

RGBD cameras are highly preferred in certain embedded vision systems for several reasons:

Enhanced Object Identification : Combining RGB color data with depth information enables more efficient and accurate object recognition and pattern detection. This is particularly valuable in applications that require identifying and characterizing objects within a scene, as well as measuring their distance from the camera.

Applications : RGBD cameras find applications in various fields, including anti-spoofing systems based on face recognition and people-counting devices, where the ability to capture both color and depth information simultaneously is critical for accurate results.
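To illustrate how per-pixel depth is used, a minimal pinhole-model back-projection converts a pixel and its depth value into a 3-D point in the camera frame. The focal lengths and principal point below are hypothetical values, not the IMX477's actual intrinsics.

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth `depth` (metres) into a 3-D
    camera-frame point using the pinhole model:
        X = (u - cx) * depth / fx
        Y = (v - cy) * depth / fy
        Z = depth
    (fx, fy) are focal lengths in pixels, (cx, cy) the principal point."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

A pixel at the principal point maps straight ahead of the camera; pixels farther from the centre map to proportionally larger lateral offsets.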

The Arducam IMX477 synchronized stereo camera kit shown in Fig. 2 is designed for use with the Nvidia Jetson Nano which is portrayed in Fig. 3 . This kit facilitates the simultaneous operation of two 12-megapixel IMX477 camera modules through a single MIPI CSI-2 camera socket on the Jetson Nano. The kit includes two high-quality camera modules and an Arducam HAT stereo camera, which allows the ArduChip to present a dual camera connection as a single camera recognized by single-board computers. Such a kit is ideal for constructing a stereo camera system for depth vision applications, capitalizing on the benefits of RGBD technology.


VI-C Audio/Microphone Array

While the majority of research efforts have centered on the development of video-based solutions, UAV-embedded audio-based localization has garnered comparatively less attention [10, 11]. Nevertheless, the utilization of UAVs equipped with microphone arrays holds the potential to be of paramount importance in locating individuals during emergency situations. This potential becomes particularly evident when video sensors are hindered by factors such as poor lighting conditions (e.g., at night or in fog) or obstacles that restrict their field of view [12].

When recording audio, unwanted background noise can often degrade the quality of the recording. To address this issue, digital processing is used to enhance the sound, eliminate noise, and retain only the desired audio. This process often involves the creation of multiple recordings from different locations and then comparing these recordings to distinguish between the desired sound source and the various noise sources. Subsequently, the unwanted noise can be removed. A microphone array, consisting of two or more microphones used together, is a valuable tool for identifying and isolating individual sound sources.

The human auditory system serves as a natural example of a microphone array. Human ears allow individuals to determine the approximate direction of a sound source in real time. Without conscious effort, people instinctively know where to turn their heads to hear a voice, where to focus their attention when an unexpected sound occurs, and which way to move to avoid potential threats. This innate ability is possible because humans have two ears, positioned on each side of their head.

Microphones are employed to replicate this sound source recognition process by digitally capturing audio. Sound travels as pressure waves through the air, and just as the human ear has an eardrum that senses changes in nearby air pressure, microphones are equipped with a thin diaphragm that vibrates in response to variations in air pressure. When the diaphragm moves, the coil of wire wound around a magnet also moves, generating an electrical signal that faithfully reproduces the recorded sound. This electronic signal is then stored in a computer for subsequent analysis. In a microphone array, each microphone captures a signal that is stored and can be compared with the signals from other microphones in the array. Each microphone, being at a different distance from the sound source, experiences a unique time delay, a characteristic depicted in the accompanying figure. This time delay information is crucial for sound localization and source separation in microphone array systems.

The microphone array system utilizes precise calculations based on time delay differences to determine the location of sound sources. This allows the system to gather valuable information about its environment, including the positions of sound sources and the distinct characteristics of noise-free sounds.

To calculate the location of a sound source, the microphone array employs specialized software. Initially, it estimates the time delay between a primary (or main) microphone and each of the other microphones in the array. The calculation holds the primary recording fixed while sliding a secondary recording across it in small time steps. At each shift, the system computes the difference between the two signals; when this difference is minimal, the two recordings are closely aligned in time.

The time delay, defined as the amount by which the secondary microphone recording must be shifted to align with the primary recording, is then determined for each secondary microphone. With these time delay measurements, the system can approximate the position of the sound source using geometric principles: knowing the speed of sound, it converts each time delay into a distance between the source and the corresponding microphone. As the figure illustrates, only one source location is consistent with a given set of distances to the microphones. Therefore, if the time delay estimates are accurate, the system can pinpoint the source location with high precision.
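The shift-and-compare procedure described above is, in essence, a cross-correlation. The sketch below is an illustrative NumPy implementation; the sampling rate and the synthetic noise-burst signals are assumptions for the demonstration, not data from a real array.

```python
import numpy as np

def estimate_delay(primary, secondary, fs):
    """Estimate how many seconds `secondary` lags `primary` by
    sliding one recording across the other (full cross-correlation)
    and picking the shift where the two align best."""
    corr = np.correlate(secondary, primary, mode="full")
    shift = int(np.argmax(corr)) - (len(primary) - 1)
    return shift / fs

# Synthetic check: the same noise burst, delayed by 25 samples at 8 kHz
fs = 8000
sig = np.random.default_rng(0).standard_normal(1024)
delayed = np.concatenate([np.zeros(25), sig])[:1024]
delay = estimate_delay(sig, delayed, fs)  # 25 samples -> 0.003125 s
```

Multiplying the recovered delay by the speed of sound gives the path-length difference used in the geometric localization step.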

This process is repeated for various combinations of audio recordings, allowing the microphone array to identify multiple sources and their respective locations, as shown in Fig. 5.

By doing so, unwanted sounds can be effectively and strategically removed, leaving behind improved sound quality for further analysis and processing. This capability enables the microphone array to enhance sound clarity and precision in various applications. Fig. 6 shows a ReSpeaker microphone array that is suitable for such applications.

VI-D Radars

Intelligent onboard sensors, such as cameras, microphone arrays, and radar, are particularly effective during the early stages of disaster response. Radar’s unique ability to penetrate objects and operate in low-visibility conditions makes it indispensable for detecting occluded human subjects, both on and under debris, when other sensing modalities may falter [13]. A radar is an electromagnetic sensor utilized for the detection, localization, tracking, and identification of various objects over significant distances. Its operation involves transmitting electromagnetic energy towards objects, commonly referred to as targets, and observing the echoes reflected from them. Radar not only determines the presence, location, and speed of these objects but, in certain cases, can also ascertain their size and shape. What sets radar apart from optical and infrared sensors is its ability to detect distant objects under adverse weather conditions and accurately measure their distance.

Radar is an “active” sensor because it generates its own electromagnetic signals via a transmitter to detect targets. Typically, radar operates in the microwave portion of the electromagnetic spectrum, with frequencies ranging from 400 megahertz (MHz) to 40 gigahertz (GHz).

UAV radars offer the advantage of working irrespective of weather and lighting conditions, a capability not shared by many electro-optical sensors. They are capable of detecting autonomous drones, whereas radio frequency sensors rely on intercepting signals transmitted between drones and their human operators. The ease of detecting a UAV with radar depends on its effective scattering area, commonly referred to as its radar cross section (RCS), which is influenced by the UAV’s size and the amount of reflective material it contains. UAVs with larger RCS values can be detected at greater distances.
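The relationship between RCS and detection distance can be made concrete with the classical radar range equation, under idealized free-space assumptions. All numeric parameters below are hypothetical, not the specifications of any particular UAV radar.

```python
import math

def max_detection_range(pt, gain, wavelength, rcs, pmin):
    """Classical monostatic radar range equation (free space):
    R_max = ((Pt * G^2 * lambda^2 * sigma) / ((4*pi)^3 * P_min))^(1/4)."""
    return ((pt * gain ** 2 * wavelength ** 2 * rcs)
            / ((4 * math.pi) ** 3 * pmin)) ** 0.25

# Hypothetical X-band parameters (illustrative only)
r1 = max_detection_range(pt=100.0, gain=1000.0, wavelength=0.03, rcs=0.01, pmin=1e-13)
r2 = max_detection_range(pt=100.0, gain=1000.0, wavelength=0.03, rcs=0.02, pmin=1e-13)
# Doubling the RCS stretches detection range by only 2**0.25, about 1.19x
```

Because range grows only with the fourth root of RCS, doubling a UAV’s radar cross section extends its detection range by roughly 19%, which is why small, low-reflectivity drones are so hard to spot.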

Ground surveillance radars play a crucial role in scenarios where aircraft operating in civilian airspace must be equipped to detect and avoid other aircraft. This is typically accomplished with human pilots on board manned aircraft. However, for unmanned aerial vehicles, ground-based sense-and-avoid (GBSAA) systems can be employed. These systems utilize ground surveillance radars to eliminate the need for human observers or manned chase aircraft. GBSAA radars can detect movements in airspace and provide real-time tracking based on the known GPS coordinates of the radar together with the distance and angle to the target. This information can be relayed to the UAV operator, enhancing situational awareness for safe flight operations at all times.

VI-E Infra-Red Thermography

Infrared thermography involves utilizing a thermal imager to capture radiation (heat) emitted from an object, converting it into temperature data, and presenting an image of the temperature distribution. These recorded temperature distribution images are known as thermograms and enable the visualization of heat-emitting objects that are not visible to the naked eye. Because all objects above absolute zero (−459.67 degrees Fahrenheit) emit thermal infrared energy, thermal imagers can detect and display infrared radiation irrespective of ambient lighting conditions. A classic example of this technology is the use of night-vision goggles to observe objects in darkness.
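The physics behind this is the Stefan-Boltzmann law: total radiated power grows with the fourth power of absolute temperature. The sketch below uses this total-radiation idealization (real imagers integrate over a limited waveband and require calibration), and the emissivity value is an assumed illustrative figure.

```python
STEFAN_BOLTZMANN = 5.670374419e-8  # W / (m^2 K^4)

def radiated_power(temp_k, emissivity=0.98, area_m2=1.0):
    """Total thermal power radiated by a surface (Stefan-Boltzmann law)."""
    return emissivity * STEFAN_BOLTZMANN * area_m2 * temp_k ** 4

def apparent_temperature(exitance_w_per_m2, emissivity=0.98):
    """Invert the law: the temperature implied by a measured
    radiant exitance, as a thermal imager effectively does."""
    return (exitance_w_per_m2 / (emissivity * STEFAN_BOLTZMANN)) ** 0.25

# Human skin (~306 K) stands out against a 20 °C background (~293 K)
skin = radiated_power(306.0)
background = radiated_power(293.0)
```

Even the modest 13 K difference between skin and background changes the radiated power enough for a thermogram to show a person as a clearly brighter region.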

An infrared thermometer, in its simplest configuration, consists of a lens that focuses infrared thermal radiation onto a detector, which converts the radiant energy into an electrical signal that is displayed in temperature units. Infrared thermometers are designed for non-contact temperature measurement, eliminating the need for physical contact with the object being measured. Nowadays, there are various types of infrared thermometers tailored for specific applications. The three most common categories of infrared thermometers include:

Spot infrared thermometers : A spot infrared thermometer, often resembling a handheld radar device, is utilized for detecting and measuring the temperature at a specific point on the surface of an object. These thermometers are particularly well-suited for applications where it may be challenging to access the target object or when the object operates in extreme conditions. Spot infrared thermometers offer the advantage of providing quick and accurate non-contact temperature readings, making them valuable tools in various industries and scenarios.

Infrared scanner systems: Infrared scanning systems, which are designed to scan large areas, find extensive use in manufacturing plants with conveyor systems or continuous processes. These systems excel at scanning and monitoring objects on a conveyor belt or sheets of materials, such as glass or metal, emerging from industrial furnaces. This application is a prime example of how infrared scanning technology can be employed to ensure product quality and process control in manufacturing environments. By capturing temperature data across large surfaces, these systems help maintain product consistency and identify anomalies or defects promptly, making them invaluable tools in industrial settings.

Infrared thermal-imaging cameras: Thermal imaging cameras represent an advanced category of radiation thermometers that are employed to measure temperatures at multiple points across a wide area, ultimately producing two-dimensional (2D) thermographic images. These cameras are considerably more complex, both in terms of their hardware and software, compared to spot thermometers. They typically offer live image displays and can be connected to specialized software for in-depth analysis, enhanced accuracy, and comprehensive reporting. Modern thermal imaging cameras are designed to be portable, allowing users to capture temperature data in various settings and conditions. These cameras often feature multiple-color palettes that assist in interpreting temperature differences more effectively.

Infrared thermal imaging cameras provide users with the flexibility to switch between various color palettes, including options like the hot iron palette, black and white palette, and rainbow palette, which aid in distinguishing temperature variations.

We utilized the Seek Thermal Compact PRO XR camera, shown in Fig. 8, which excels at detecting people and animals during low-light conditions, such as dawn and dusk, and can identify them at considerable distances where visible light is insufficient. Additionally, its extended-range detection capabilities enhance visual perception and situational awareness.

VII Advantages and disadvantages of the hardware

VII-A Video Camera

Arducam IMX477 advantages:

The Arducam IMX477 supports 12-megapixel digital still images at a resolution of 4056×3040 pixels.

Full-resolution video at 60 fps, 4K2K at 60 fps, and 1080p at 240 fps.

The Arducam IMX477 camera module features a motor that can be controlled through software, enabling more intelligent focusing. This means that users no longer have to manually adjust the camera’s focus by physically turning the lens with their hands.

The Arducam IMX477 camera module not only features motorized focusing but also supports autofocus functionality.

The presence of pins on the camera board dedicated to a mechanically switchable IR filter is a notable feature of the Arducam IMX477 camera module. This switchable IR filter allows for flexibility in applications where both visible light and infrared light are required, either separately or in different conditions.

Arducam IMX477 disadvantages:

The Arducam IMX477 camera module cannot physically move the camera and lens for remote direction and zoom control. This means that the camera’s orientation and zoom level are fixed and cannot be adjusted remotely.

It’s important to note that one of the camera modules in the Arducam IMX477 camera kit is designed to work in conjunction with the Camarray HAT (Hardware Attached on Top). This means that this particular camera module cannot be directly connected to the Jetson Nano board alone, and it requires the Camarray HAT for proper functionality.

Not all Jetson boards are compatible with the Arducam IMX477 camera module. Currently, the module is supported only by the Jetson Nano and Xavier NX boards; other Jetson boards may lack the necessary hardware or software support to work with it.

VII-B Microphone Array

Advantages:

Identifies each direction of the sound source.

Using the Microphone Array, we can identify the type of sound source.

Undesirable sounds can be intentionally eliminated to a large extent, allowing for a more precise and clear analysis of the improved audio.

Limitations:

Audio captured by the microphones is easily contaminated in noisy environments, which complicates downstream signal processing.

Extended processing time

Real-time audio processing can be demanding in terms of computational resources, especially when dealing with complex signal processing tasks, such as those involved in microphone array applications.

Performing cross-correlation calculations between multiple microphone signals, especially when using a large number of microphones, can be computationally intensive and time-consuming.

VII-C Radar

Advantages:

One of the significant advantages of radar technology is its ability to penetrate various environmental conditions (clouds, fog, haze, and snow) that can hinder or limit the effectiveness of other sensors, such as optical or infrared sensors.

The radar signal can penetrate insulators (materials that are considered insulating, such as rubber and plastic).

Can determine the target’s speed.

Can measure the distance to an object.

Can tell the difference between stationary and moving targets.

Limitations:

Short range (200 feet).

Radar signals can suffer interference from various objects and media in the air.

Cannot distinguish or resolve multiple targets.

Cannot detect targets covered with conductive material.

VII-D Infra-Red Thermography

Seek Thermal CompactPRO XR benefits:

Allows you to detect objects in conditions of insufficient visibility.

Allows users to switch between multiple color palettes.

High infrared resolution, detecting objects at a distance of 550 meters.

Waterproof, dustproof (thanks to a protective cover).

Manual focus.

Does not require batteries or charging.

Seek Thermal CompactPRO XR limitations:

The camera itself is not waterproof or dustproof.

Field of view is only 24°.

Works only with an additional device (for example, a phone).

Limited range.

VIII Design challenges for UAV-assisted emergency networks

VIII-A Synchronization Among UAVs

UAVs, in general, offer advantages such as rapid deployment, flexible reconfiguration, and improved communication capabilities due to their short-range line-of-sight links. Nevertheless, the deployment of highly mobile and energy-constrained UAVs for wireless communications also brings forth a host of new challenges [14].

UAVs have become indispensable in modern life, and achieving precise time synchronization is crucial for multi-UAV flight. However, the current approach, which relies on GNSS satellite navigation systems or ground control stations for time synchronization, has limitations and poses significant security risks [15]. The challenge lies in achieving highly accurate time synchronization for all drones within the network independently, without relying on external time sources.

In the context of a distributed architecture, such as a large-scale UAV formation, the traditional master-slave time synchronization method is no longer suitable. Instead, the firefly synchronization model, which has ancient origins and has been studied in various fields like biology, chemistry, and mathematics, offers a novel approach to address the issue of distributed time synchronization. The concept involves each UAV transmitting its current time information [16]. Once neighboring nodes receive this information, they perform a straightforward arithmetic averaging operation. This calculated average is then used as the countdown for the next transmission. This process is iterated several times until all nodes in the network eventually converge to identical clock values on average. This achieves distributed time synchronization for the entire formation network as depicted in Fig. 9.
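The averaging step can be simulated in a few lines. The sketch below illustrates the consensus idea only, not the message format or timing of the cited protocol; the ring topology, initial clock values, and round count are all assumptions.

```python
def synchronize(clocks, neighbors, rounds=50):
    """Each node repeatedly replaces its clock value with the
    arithmetic mean of its own value and its neighbors' values."""
    clocks = list(clocks)
    for _ in range(rounds):
        clocks = [
            (clocks[i] + sum(clocks[j] for j in neighbors[i]))
            / (1 + len(neighbors[i]))
            for i in range(len(clocks))
        ]
    return clocks

# Four UAVs in a ring topology, initially disagreeing by up to 8 time units
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
final = synchronize([0.0, 3.0, 8.0, 5.0], neighbors)
# All clocks converge to the mean of the initial values (4.0)
```

Because the averaging weights here are symmetric, the common value the clocks converge to is simply the mean of the initial clocks, and no node ever acts as a master.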

In a UAV formation, multiple UAVs are interconnected and can interact with one another [17]. Each UAV serves as a node within the network, establishing a decentralized network structure without a central node. The relative positioning between these nodes is consistent.

Even if one node goes offline or becomes disconnected for various reasons, the remaining nodes can continue to maintain network communication and perform other tasks; the loss of a single node does not prevent the network from reorganizing itself. Additionally, each node can use its assigned frequency and spreading code to modulate the spread-spectrum signal and transmit messages to neighboring nodes. This transmission follows a broadcast-style approach, enabling one-to-multipoint communication with multiple-access capability.

VIII-B UAV Network Security

In recent years, there has been a notable increase in the malicious use of unmanned aerial vehicles (UAVs). These attacks have become more frequent and can have severe consequences. As a result, various industries and standardization bodies are actively exploring ways to enhance the security of UAV systems and networks [18].

To address this issue comprehensively, threats and protective measures are categorized based on the characteristics of the first four layers of the OSI model: the physical layer, data link layer, network layer, and transport layer. In order to provide a deeper insight, the security mechanisms under examination are thoroughly assessed in terms of their security requirements and objectives [19]. These objectives encompass aspects such as availability, authentication, authorization, confidentiality, integrity, and non-repudiation.

The classification of security threats according to security requirements is outlined below. We will now proceed to discuss the three most prevalent types of attacks in this context.

VIII-C Active Interfering (Jamming)

In the context of wireless connections, active interference represents a form of disruption that primarily impacts availability. To combat this type of attack, various strategies are employed, such as frequency-hopping spread spectrum (FHSS) and direct sequence spread spectrum (DSSS).

FHSS operates by swiftly altering the channel frequency within a non-overlapping range while transmitting radio signals. This dynamic channel switching helps minimize interference with adjacent radio channels and reduces the vulnerability to jamming attacks by employing random patterns for channel transitions.
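A minimal sketch of the FHSS idea: both ends regenerate the same pseudo-random channel schedule from a shared seed, so a jammer that does not know the seed cannot anticipate the next hop. The channel count and seed are illustrative assumptions; this is not a standardized hopping-sequence generator.

```python
import random

def hop_sequence(shared_seed, n_channels, n_hops):
    """Pseudo-random channel schedule, regenerable by any party
    that knows the seed; never dwells on the same channel twice."""
    rng = random.Random(shared_seed)
    seq, prev = [], None
    for _ in range(n_hops):
        ch = rng.randrange(n_channels)
        while ch == prev:  # force a real frequency change each hop
            ch = rng.randrange(n_channels)
        seq.append(ch)
        prev = ch
    return seq

tx = hop_sequence(shared_seed=0xC0FFEE, n_channels=79, n_hops=10)
rx = hop_sequence(shared_seed=0xC0FFEE, n_channels=79, n_hops=10)
# tx == rx: both ends agree; an eavesdropper without the seed does not
```

A narrowband jammer can then corrupt at most the occasional hop that happens to land on its frequency, rather than the whole link.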

DSSS, on the other hand, combines radio frequency carriers with pseudo-noise digital signals to generate a broad transmission signal. This signal carries a greater amount of information by utilizing a wider bandwidth, enhancing resistance to active interference.

VIII-D Denial-of-Service (DoS)

Denial-of-Service (DoS) attacks pose a significant threat to the smooth operation of drones. To counteract these attacks, the CoMAD protocol can be employed for mitigation. Initially, redundant data originating from the same source UAV is eliminated. Additionally, any UAVs attempting to access the system with incorrect passwords or unauthorized authentication contexts are restricted from operation.

VIII-E GPS Spoofing

GPS spoofing presents a serious threat to UAV operations, potentially leading to hijacking. To promptly identify GPS spoofing attacks, the CUSUM algorithm is employed. This algorithm monitors the hit rate and detects GPS spoofing when the time offset surpasses the predefined no-hit zone. For instance, HID-RS deploys rule-based intrusion detection techniques on both UAVs and Ground Control Stations (GCSs) along with corresponding response mechanisms. Typically, attackers emit high-intensity signals (SSI) to gain control of UAVs. Thus, HID-RS equips UAVs with agents to collect SSI data from the source node. The collected SSI data is then assessed against a predetermined SSI threshold to determine the presence of GPS spoofing. The SSI threshold is dynamically adjusted using the Support Vector Machine (SVM) algorithm to maintain effective detection capabilities.
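The core of a one-sided CUSUM detector fits in a few lines: accumulate deviations of the monitored statistic (here, a GPS time offset) above an expected level and raise an alarm once the cumulative sum exceeds a threshold. The `target`, `slack`, and `threshold` values and the sample offsets below are illustrative assumptions, not parameters from the cited scheme.

```python
def cusum_detect(samples, target, slack, threshold):
    """One-sided CUSUM: return the index of the first sample at which
    the cumulative positive deviation from `target` (less `slack`)
    exceeds `threshold`, or None if no alarm is raised."""
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + (x - target - slack))
        if s > threshold:
            return i  # alarm index
    return None

# Time offsets hover near zero until a spoofer starts pulling the clock
offsets = [0.1, -0.2, 0.0, 0.1, 2.5, 2.7, 2.6, 2.8]
alarm = cusum_detect(offsets, target=0.0, slack=0.5, threshold=3.0)
```

The slack term keeps ordinary jitter from accumulating, so the detector stays quiet during normal operation but reacts within a few samples once the offset shifts persistently.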

IX Design Architecture

Numerous advanced technology applications, such as employing a hexacopter for various purposes, necessitate a well-structured architecture comprising both hardware and software modules [20]. In the case of a hexacopter, this layered organization is instrumental in facilitating efficient functionality and task coordination. The design encompasses various hardware and software components that collaboratively operate to accomplish predefined objectives.

Hardware Modules :

Hexacopter Frame and Components: The hexacopter system centers on a robust yet lightweight carbon fiber frame meticulously engineered to accommodate a range of crucial elements, including motors, propellers, batteries, and the payload.

Flight Controller: Serving as the hexacopter’s central nervous system, the flight controller assumes responsibility for real-time aircraft stabilization and control. Employing cutting-edge algorithms, it dynamically regulates motor speeds to ensure flight stability. Furthermore, it collects telemetry data on flight status, battery levels, and sensor readings.

Power Distribution System: Responsible for efficiently distributing power from the batteries to all components, this system ensures a consistent and reliable electricity supply throughout the hexacopter.

Motors and Propellers: Equipped with six high-quality brushless motors, each linked to a propeller, the Hexacopter generates the necessary thrust to achieve flight and stability.

Battery System: The Hexacopter relies on a high-capacity lithium-polymer (LiPo) battery to supply the required energy for propulsion and onboard electronics.

Camera: The Arducam stereo camera HAT enables the easy integration of these camera modules with single-board computers like the Jetson Nano. It delivers high-resolution imagery, rendering it ideal for applications such as mapping, surveillance, and image-based inspections. Visual data plays a crucial role in tasks like navigation, object recognition, and environmental analysis.

Thermal Camera: Thermal sensing represents another crucial aspect of the application layer. It empowers the UAV to identify heat signatures, which prove invaluable for locating living subjects, even in low-light or obscured conditions.

Voice and Sound Recognition: An advanced microphone array sensor, combined with voice and sound recognition algorithms, aids in identifying human voices or distress signals. This capability is pivotal in search and rescue operations, facilitating the pinpointing of the locations of survivors or individuals requiring assistance.

Controller: The Jetson Nano controller gathers data from a diverse array of sensors integrated into the drone. These sensors encompass vision sensors, thermal sensors, and microphone sensors, each offering unique information about the drone’s surroundings. The Nano controller processes this sensor data to make well-informed decisions.

The prototype of the hexacopter drone, complete with all sensors and an embedded Jetson Nano minicomputer, is depicted in Fig. 10.

Software Modules:

Flight Control Software: Operating on the PX4 PIX 2.4.8 Flight Controller, the flight control software assumes responsibility for stabilizing the hexacopter, overseeing its movements, and executing commands relayed from the RadioLink AT10 transmitter. This software holds critical significance in guaranteeing stable and precise flight performance.

Radio Communication Software: The RadioLink AT10 transmitter establishes communication with the hexacopter through 2.4GHz radio signals. The dedicated software governing this communication system ensures the dependable and real-time transmission of control commands.

Navigation and Control Algorithms: Within the hexacopter’s architecture, the navigation and control layer encompasses algorithms tasked with functions like path planning, altitude control, and obstacle avoidance. These algorithms collaborate closely with the flight controller to guarantee the safety and efficiency of flight operations.

User Interface: Although not explicitly highlighted, user interfaces (UI) or ground control software (GCS) frequently play a pivotal role in hexacopter operations. These interfaces empower users to oversee the hexacopter’s status, strategize flight missions, and fine-tune control parameters.

Sensor Fusion and Data Processing: Hexacopters come equipped with an array of sensors, encompassing accelerometers, gyroscopes, and potentially supplementary sensors dedicated to functions like obstacle detection and mapping. Sensor fusion algorithms harmonize data derived from these sensors, yielding a holistic perspective of the hexacopter’s surroundings.

Task-Specific Applications: Tailored to the specific purpose at hand, the hexacopter can feature dedicated software modules. For instance, when employed in aerial photography or mapping tasks, specialized software for camera control and image processing might be incorporated. Conversely, in search and rescue missions, essential software may revolve around object detection and tracking.

IX-A Loading Capability

Payload refers to the load that a drone carries, encompassing various items such as cameras, sensors, delivery packages, and other technologies tailored to specific requirements. While a drone’s ability to accommodate additional equipment and technologies increases its versatility, it’s important to note that carrying heavier payloads results in shorter flight times due to increased power consumption, which depletes the battery more quickly.
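A first-order way to see this payload/flight-time trade-off: hover power for a rotorcraft grows roughly with the 3/2 power of total mass (ideal rotor momentum theory), so endurance falls faster than linearly as payload is added. All numbers below are hypothetical, and the constant `k` is an assumed craft-specific value.

```python
def hover_time_minutes(battery_wh, base_mass_kg, payload_kg, k=60.0):
    """First-order hover endurance estimate. Hover power is modeled
    as k * m**1.5 (ideal rotor momentum theory); k is an assumed
    craft-specific constant in W/kg^1.5."""
    power_w = k * (base_mass_kg + payload_kg) ** 1.5
    return 60.0 * battery_wh / power_w

empty  = hover_time_minutes(battery_wh=100.0, base_mass_kg=1.5, payload_kg=0.0)
loaded = hover_time_minutes(battery_wh=100.0, base_mass_kg=1.5, payload_kg=0.5)
# A 0.5 kg payload on a 1.5 kg craft cuts estimated hover time by ~35%
```

Under this model the endurance ratio depends only on the mass ratio, which is why even a modest sensor package noticeably shortens a small drone’s mission time.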

IX-B Flight Time and UAV Charging

For conducting extensive surveys of a specific area, longer flight durations become imperative. Similarly, in emergency scenarios, the need for frequent facility monitoring arises. Commercial UAVs often face constraints due to limited battery capacity, typically resulting in flight times of less than one hour. It’s worth noting that the overall drone design significantly influences flight duration. For instance, the Parrot Disco, featuring a fixed-wing, single-propeller design, offers a continuous flight time of 45 minutes, surpassing the 30-minute continuous flight time of the similarly weighted Da-Jiang Innovations (DJI) Mavic Pro drone, which employs a multi-rotor quadcopter design. This discrepancy is primarily attributed to the better aerodynamics and reduced thrust requirements of the fixed-wing design.

IX-C Handling UAV Failures

Focusing on the engine control system, flight control systems, and human factors, which collectively account for approximately 80% of UAV failures, offers a promising avenue for closing the reliability gap with manned systems. It is imperative to keep the cost of UAVs at a minimum while ensuring they can execute their tasks with an acceptable level of safety, reliability, operability, and survivability. There are two primary approaches to enhancing UAV reliability: fault tolerance and failure prevention.

Traditionally, fault tolerance relies on hardware redundancy. These schemes are designed with redundant configurations, often triple or quadruple redundancy, to allow for continued operation in the event of a failure. However, fault tolerance via hardware redundancy introduces challenges such as increased maintenance costs, and additional requirements for space, weight, and power consumption, all of which are critical factors for operational UAVs. In cases where size, weight, and cost constraints are paramount, emphasizing crash prevention becomes a more favorable approach.
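The redundancy trade-off above can be quantified under a simple independence assumption: a 2-of-3 (triplex) voting system fails only when at least two channels fail. The 1% per-channel failure probability below is an illustrative figure, not a measured UAV statistic.

```python
from math import comb

def k_of_n_failure(p_fail, n, k):
    """Probability the system fails when it needs at least k of n
    independent channels working (i.e. more than n - k channels fail)."""
    return sum(
        comb(n, i) * p_fail ** i * (1 - p_fail) ** (n - i)
        for i in range(n - k + 1, n + 1)
    )

single  = k_of_n_failure(0.01, n=1, k=1)  # simplex: 1.0e-2
triplex = k_of_n_failure(0.01, n=3, k=2)  # 2-of-3 voting: ~3.0e-4
```

With perfect voting, triplication drops the failure probability by more than an order of magnitude; that reliability gain is exactly what the extra weight, power, and maintenance burden buys.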

While there exists extensive literature on fault tolerance methods applied to UAVs [21], the field of failure prevention techniques is relatively less detailed and somewhat outdated. Various methods, including fault tree analysis (FTA), failure modes and effects analysis (FMEA), and component analysis (CA), are employed in this context. FTA serves as a qualitative assessment method to guide designers, planners, or operators in understanding potential system failures and devising strategies to address their causes. It typically complements FMEA, which involves an exhaustive examination of failure modes for each system component, assessing their impact on system operation. FMEA is most effective when implemented during the design phase, allowing critical failure probabilities to be identified and mitigated early in the process.

The reliability methodologies presented here can be applied to virtually any aspect of UAVs, encompassing flight control systems, power systems, communications, and other critical components, enabling comprehensive reliability analysis.

X Future Directions

As the field of unmanned aerial vehicles (UAVs) continues to advance, the potential applications for these remarkable machines are expanding at an unprecedented pace. Looking ahead, we foresee a future where drones will play a pivotal role in addressing environmental disasters and humanitarian crises. Here are some possibilities for the future of UAV technology.

X-A Disaster Response and Search-and-Rescue Operations

One of the most promising directions for UAVs lies in their application in disaster response and search-and-rescue missions. Following earthquakes, hurricanes, or other catastrophic events, drones equipped with advanced sensors and imaging technology can be deployed to locate and assist survivors. Looking to the future, we envision the integration of cutting-edge capabilities, including the ability to analyze sound in disaster-stricken areas.

Consider a scenario in which drones not only capture images but also process audio data from their surroundings. Through the analysis of sound patterns, coupled with image recognition technology, UAVs can identify signs of life, such as cries for help or calls for assistance. This innovative approach has the potential to significantly improve the speed and accuracy of search-and-rescue missions, particularly in noisy environments or situations where visual cues alone may prove insufficient.

X-B Real-Time Predictive Analysis

The future of UAV technology extends beyond immediate disaster response to proactive disaster prevention. By collecting and analyzing real-time data, drones can become invaluable tools for predicting and mitigating environmental threats. For example, drones equipped with sensors can detect the early signs of wildfires, monitor their progression, and even predict their future trajectory.

In practice, this means that a drone patrolling a forest area could identify the presence of smoke or rising temperatures that indicate a potential fire outbreak. Utilizing advanced data analytics and machine learning algorithms, the drone could estimate the likely path of the fire and its rate of spread. This crucial information can then be transmitted in real-time to authorities and communities, enabling faster response and more effective evacuation efforts.

X-C Improving Sound Analysis in Noisy Environments

Enhancing the accuracy of sound analysis in noisy environments remains a critical area for improvement. We anticipate significant advancements in this field as we expand our database and refine our machine-learning models. Through the continuous collection and training of a broader range of sound data, our ultimate aim is to achieve the highest possible accuracy in sound detection, even within the most challenging acoustic environments.

The future of UAV technology holds immense promise, particularly in the domains of disaster response and environmental monitoring. By leveraging the capabilities of sound analysis, real-time predictive systems, and ongoing model refinement, drones have the potential to bring about a revolution in how we address environmental disasters and safeguard communities. As we strive for greater precision, reliability, and adaptability in UAV technology, we remain unwavering in our commitment to pushing the boundaries of innovation and making the world a safer and more resilient place.

XI Conclusion

In this article, we present a survey focused on the use of drone-based systems for detecting individuals, with a particular emphasis on identifying human screams and other distress signals. This study holds significant relevance in post-disaster scenarios, where drones traditionally capture aerial images to assess structural damage and determine the extent of the disaster. They also leverage thermal imaging technology to detect body heat signatures, aiding in locating individuals.

We delve into various challenges associated with pinpointing humans through aerial acoustics. Furthermore, we explore the application of signal processing techniques, such as direction of arrival (DOA) based on microphone array signals, to enhance the precision of tracking the source of human noises.

The architectural design of such a rescue system has been thoroughly examined and discussed. Additionally, we draw attention to concerns related to UAVs, including vulnerabilities of the wireless network to attacks, disruptions, and synchronization issues.

Lastly, we touch upon future directions, such as the development of real-time systems and the enhancement of sound analysis in noisy environments.

Narek Papyan received his B.S. degree in Radiophysics from Yerevan State University in 2019 and his M.S. degree in Wireless Communication and Sensors from Russian-Armenian University in 2023. Currently, Narek serves as an Information Security Specialist at the Central Bank of Armenia, where he plays a pivotal role in fortifying the institution’s cybersecurity framework. His responsibilities encompass the implementation and maintenance of rigorous cybersecurity measures to protect sensitive financial data and critical systems. His strong educational background and diverse range of experience demonstrate his commitment to information security and technological innovation. His deep learning project on change detection achieved high accuracy with minimal loss using a SiamU-net model of his own design. His research interests include machine learning, wireless communications, and cybersecurity.
Michel Kulhandjian (M’18-SM’20) received his B.S. degree in Electronics Engineering and Computer Science (Minor), with “Summa Cum Laude,” from the American University in Cairo (AUC) in 2005, and the M.S. and Ph.D. degrees in Electrical Engineering from the State University of New York at Buffalo in 2007 and 2012, respectively. He was employed at Alcatel-Lucent in Ottawa, Ontario, in 2012, and in the same year was appointed as a Research Associate at EION Inc. In 2016 he was appointed as a Research Scientist at the School of Electrical Engineering and Computer Science at the University of Ottawa. He was also employed as a senior embedded software engineer at L3Harris Technologies from 2016 to 2021. Currently, he is a Research Scientist in the Electrical and Computer Engineering Department at Rice University. He received the Natural Science and Engineering Research Council of Canada (NSERC) Industrial R&D Fellowship (IRDF). His research interests include wireless multiple access communications, adaptive coded modulation, waveform design for overloaded code-division multiplexing applications, RF and audio fingerprinting, channel coding, space-time coding, adaptive multiuser detection, statistical signal processing, machine learning, covert communications, spread-spectrum steganography, and steganalysis. He actively serves as a member of the Technical Program Committee (TPC) of IEEE WCNC, IEEE GLOBECOM, IEEE ICC, and IEEE VTC, among others. In addition, he serves as an Associate Editor of Annals of Telecommunications, a guest editor for the Journal of Sensor and Actuator Networks (JSAN), and a Review Editor at Frontiers in Communications and Networks. He is a recipient of the best paper award at the 48th Wireless World Research Forum (WWRF) in 2022.
Hovannes Kulhandjian (S’14-M’15-SM’20) received the B.S. degree (magna cum laude) in electronics engineering from The American University in Cairo, Cairo, Egypt, in 2008, and the M.S. and Ph.D. degrees in electrical engineering from the State University of New York at Buffalo, Buffalo, NY, USA, in 2010 and 2014, respectively. From December 2014 to July 2015, he was an Associate Research Engineer with the Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, USA. He is currently an Associate Professor with the Department of Electrical and Computer Engineering, California State University, Fresno, Fresno, CA, USA. His current research interests include wireless communications and networking, with applications to underwater acoustic communications, visible light communications, and applied machine learning. He has served as a guest editor for IEEE Access - Special Section Journal on Underwater Wireless Communications and Networking. He has also served as a Session Co-Chair for IEEE UComms 2020 and session Chair for ACM WUWNet 2019. He actively serves as a member of the Technical Program Committee for ACM and IEEE conferences such as IEEE GLOBECOM 2015-2022, UComms 2022, PIMRC 2020, WD 2019, ACM WUWNet 2019, ICC 2015-2023, among others.
Levon Hakob Aslanyan received the B.S. degree from Novosibirsk State University in 1968, and the Ph.D. and Doctor of Science (Russian) degrees from Novosibirsk State University in 1976 and 1997, respectively. He was promoted to Professor in 1997 and has been a corresponding member of the National Academy of Sciences of the Republic of Armenia (NAS RA) since 2014. Currently, he heads the Department of Discrete Modeling, Analysis, and Recognition Technologies at the Institute for Informatics and Automation Problems of the NAS RA. His research interests are in mathematical logic, discrete mathematics, the mathematical theory of pattern recognition, and artificial intelligence. He has made significant contributions to complexity studies of loosely defined Boolean functions, the description of sets encompassing all solutions to discrete isoperimetry and discrete tomography problems, logic-combinatorial pattern recognition theory, and bioinformatics. He has authored numerous papers and supervised doctoral and Ph.D. students.
  • Research article
  • Open access
  • Published: 05 December 2018

Search and rescue with autonomous flying robots through behavior-based cooperative intelligence

  • Ross D. Arnold (ORCID: orcid.org/0000-0003-1915-5857),
  • Hiroyuki Yamaguchi &
  • Toshiyuki Tanaka

Journal of International Humanitarian Action, volume 3, Article number: 18 (2018)


A Correction to this article was published on 01 June 2019

A swarm of autonomous flying robots is implemented in simulation to cooperatively gather situational awareness data during the first few hours after a major natural disaster. In computer simulations, the swarm is successful in locating over 90% of survivors in less than an hour. The swarm is controlled by new sets of reactive behaviors which are presented and evaluated. The reactive behaviors integrate collision avoidance, battery recharge, formation control, altitude maintenance, and a variety of search methods to optimize the coverage area of camera and heartbeat locator sensors mounted on the robots. The behaviors are implemented in simulation on swarms of sizes from 1 to 20 robots. The simulation uses actual location data, including post-disaster satellite imagery, real locations of damaged and inundated buildings, and realistic victim locations based on personal interviews and accounts. The results demonstrate the value of using behavior-based swarming algorithms to control autonomous unmanned aerial vehicles for post-disaster search and assessment. Three examples of algorithms that have been effective in simulation are presented.

Introduction

WITH little warning, a powerful earthquake shatters the quiet calm of a coastal city, followed shortly by the periodic waves of a brutal tsunami strike. Within minutes, local rescue workers rush to disaster sites, where they are greeted with a morass of broken buildings, piled cars, and splintered debris. Where once streets and fields stretched peacefully, now sit water-inundated lagoons filled with hazardous material. Mobility is extremely limited. Conditions are harsh; it is cold, night is soon to fall, and it is starting to snow. There is debris everywhere; it is hard even to walk.

The workers pull their truck up to a roadblock of overturned cars. Only a half dozen workers have made it to the site so far. But people are in the water, trapped in cars, trapped in buildings, and there is no time to wait. The rescue workers pull small, cheap quadcopter unmanned aerial vehicles (UAVs) out from the back of their truck. The workers are already cold and wet, thinking about finding casualties, and preparing equipment. They just want to know where to find people, but how can they find anyone in this devastation?

Fumbling with the UAVs, wearing fireman’s gloves, they manage to start flipping the UAVs on. They pause for a moment, trying to remember how to make the things work. But, they do not have to remember. As soon as they are turned on, the UAVs immediately launch and begin their search automatically. Remembering the apps on their mobile phones, the workers open up their “UAV Search” applications. Immediately, an overhead picture of the scenario appears on a map on their phones – it’s the camera feed from the first UAV.

While two of the workers are looking at their phones, a third and fourth are flipping on more UAVs. Three of the UAVs do not even turn on; they must have been damaged somehow. But it does not matter: seven were able to launch. One by one the UAVs fly up into the sky, flock together, and begin a systematic, targeted search of the inundated regions. At first, workers can only see the camera feeds from each of the UAVs. Able to see several feeds on their screen at once, the workers start to look for where people are. Motion catches their eyes: there, on top of the parking garage, a group of 12, waving their hands. The workers radio in for a helicopter, targeting the garage.

Then, among the swarm of cheaper UAVs, a better-equipped one is launched. Then another. Suddenly, on the screen, red dots appear. From the “UAV Search” app, a list of locations appears on the left side, organized from highest to lowest probability of a find, by number of people. As the UAVs continue their search, more and more locations are added. The UAVs move in and out of formation as they locate survivors. One worker clicks on the top find. A snapshot of the camera feed at the time of the find is displayed, along with an arrow pointing from the launch site to the location, and a distance measurement. Immediately, the workers know which direction to go, how far to go, and what the site looks like from the air. Seeing that the location is a fallen building with no visible sign of a survivor, two rescue workers immediately set out in that direction, knowing the survivor is likely buried in the rubble.

The vignette above is a fictional “what-if” scenario based on real accounts of the 2011 Great Eastern Japan Earthquake and Tsunami (Editorial Office of the Ishinomaki Kahoku 2014 ). The purpose of the vignette is to share a vision of what could be a significant improvement to post-disaster search and rescue efforts by leveraging teams of autonomous flying robots.

Many sources indicate that the first 72 h of a rescue operation are the most critical (Erdelj et al. 2017) (Tait Communications 2012), though some studies reduce this window to 48 or even 24 h (Bartels et al. n.d.). According to analyses of the 2011 Tōhoku tsunami in Japan, the first 24 h were the most critical (Editorial Office of the Ishinomaki Kahoku 2014). Studies across more than 1000 SAR missions show a survival rate that drops exponentially during the first 18 h after the onset of SAR efforts and levels off near 0% after 20 h (Adams et al. 2007).

Despite data showing that a concentrated effort to rescue trapped persons during the first few hours after a disaster would likely yield greater effect than any effort made later (Alley 1992 ) (Macintyre et al. 2006 ), these efforts are significantly hampered by lack of situational awareness (Editorial Office of the Ishinomaki Kahoku 2014 ) (Ochoa and Santos 2015 ) (Shimanski 2005 ). Indeed, the lack of situational awareness within this critical time frame is one of the most significant problems immediately following a natural disaster (Ochoa and Santos 2015 ) (Shimanski 2005 ) (Riley and Endsley 2004 ). Aid workers cannot rescue survivors if they do not know where survivors are.

Situational awareness, in this context, is the degree to which aid workers are aware of the state of the disaster environment. This state may include locations of survivors, wreckage, roads, weather, water and other hazards, or any other environmental factor that might affect the rescue effort. Situational awareness has been studied and applied in many different military, civil, commercial, and aerospace applications over the past several decades. Emergency services focus on situational awareness as a key factor in reducing risk and increasing safety, especially in disaster search and rescue situations (Shimanski 2005 ).

Rescue efforts are further hindered by lack of a trained, standing force of aid workers capable of handling the often-huge workload after a major disaster (Alley 1992 ). This is a challenging problem, as the logistical difficulties inherent to maintaining a highly trained standing workforce capable of handling mass-casualty natural disasters are numerous. The approach described in this article directly addresses these issues and, in particular, the situational awareness problem within the critical 20–24-h time frame using an automated, technical solution.

This article presents an approach to disaster search and rescue, data acquisition, and other types of post-disaster assessment using one or multiple heterogeneous autonomous UAVs. The robots work cooperatively as a swarm while controlled by behavior-based artificial intelligence (also called reactive AI). This research combines behavior-based artificial intelligence, swarm intelligence, pattern search theory, and existing disaster data into a theory of improved search and rescue through the use of autonomous flying robots, also called drones, Unmanned Aerial Vehicles (UAV), or Unmanned Aerial Systems (UAS).

Simulation results generated during the research show the approach described in this article to be both effective and time-efficient. The data show that a swarm of just five UAVs with standard parameters, equipped with the software and algorithms developed in this research, can consistently achieve a 90% standard sensor coverage rate over a 2 km² area in under 90 min, reaching a nearly 99% coverage rate in under 2 h when operating in environments modeled after real tsunami disaster locations. The research shows that it is possible to search a wide area in a short time using a swarm of low-cost UAVs. The area can be searched continuously even if one or multiple UAVs in the swarm fail or crash. The swarm requires minimal operator input, freeing up rescue workers for other tasks. Performance using this method, measured as sensor coverage at a certain range over time, is improved compared to existing methods. Ultimately, this approach allows more data to be acquired faster, with less effort, than existing methods.

Actual data regarding the time it takes rescue workers to thoroughly search an area of 2 km² after a disaster without the use of UAVs varies greatly and is difficult to quantify. Moreover, it is impossible to say how many non-surviving victims may have survived, had they been found sooner. However, interviews suggest it can take days to search the most significantly affected areas (Editorial Office of the Ishinomaki Kahoku 2014). Although the use of individual, separately controlled UAVs is certainly an improvement over no use of UAVs, separately controlled UAVs require constant operator involvement and can still take many hours to achieve a high level of sensor coverage. Therefore, although direct quantitative comparison to existing methods is difficult to make, qualitative assessment supports the conclusion that the approach described in this article is likely to improve access to post-disaster assessment data by a significant margin over existing methods. Whether existing methods take 6 h, 12 h, or 3 days to cover 90% of the disaster area, the 1.5-h benchmark achieved by the five-UAV swarm in our simulation is significantly faster than any of these measures.

The emergence of complex traits and behaviors from interconnected sets of individual parts is a well-researched and documented phenomenon (Arnold and Wade 2015 ) (Koffka 1922 ) (Wiener 1948 ). The use of this phenomenon to create decentralized artificial intelligence (AI) in the control of robots was thoroughly described by Brooks (Brooks 1999 ). Brooks approaches artificial intelligence from the “bottom-up” by investigating the emergent intelligent patterns of robots equipped with individual, simple behaviors. These robots do not possess centralized control; rather, they react to stimuli (in the form of sensor input) in a variety of relatively simple ways. From these simple interactions, intelligent behavior emerges. This approach is known as behavior-based artificial intelligence. In behavior-based AI, a robot’s intelligence is based on a set of relatively simple, independent behaviors, rather than on a centralized control unit.

Brooks implements behavior-based artificial intelligence theory using an architecture he calls the “subsumption architecture.” In his work, robots’ behaviors “subsume” each other depending on the results of a variety of inputs, such as sonar and pressure sensor data. Only one behavior will be active at any given time. The active behavior varies based on sensor data. Brooks successfully implemented this architecture on a variety of applications requiring artificial intelligence, such as navigation and motor control (Brooks 1999 ). The subsumption architecture can be considered one implementation of behavior-based artificial intelligence, which is itself a broader concept.
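The priority scheme Brooks describes can be sketched in a few lines of Python: behaviors are ordered highest-priority first, and the first behavior whose trigger fires suppresses everything below it. The behavior names, sensor fields, and thresholds here are invented for illustration; they are not taken from Brooks’ robots or from the system evaluated in this article.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Behavior:
    name: str
    triggered: Callable[[dict], bool]   # sensor readings -> does this behavior fire?
    act: Callable[[dict], str]          # sensor readings -> action to take

@dataclass
class SubsumptionController:
    """Behaviors listed highest-priority first; the first behavior whose
    trigger fires 'subsumes' (suppresses) all those below it."""
    behaviors: List[Behavior] = field(default_factory=list)

    def step(self, sensors: dict) -> str:
        for b in self.behaviors:
            if b.triggered(sensors):
                return b.act(sensors)
        return "idle"

# Hypothetical behavior stack for a search UAV (names and thresholds invented).
controller = SubsumptionController([
    Behavior("avoid",    lambda s: s["obstacle_m"] < 2.0, lambda s: "climb"),
    Behavior("recharge", lambda s: s["battery"] < 0.2,    lambda s: "return_to_base"),
    Behavior("search",   lambda s: True,                  lambda s: "lawnmower_sweep"),
])
```

Only one behavior is active per step, exactly as in the subsumption architecture: obstacle avoidance overrides the need to recharge, which in turn overrides searching.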

The behavior-based approach was applied to research on swarm intelligence by Kennedy and Eberhart (Kennedy et al. 2001 ). Swarm intelligence is the resultant intelligent behavior of groups of independent heterogeneous entities behaving as a single system, such as a flock of birds, swarm of ants, or a hive of bees. Individually, the entities in the swarm may not have an understanding of the workings of the system as a whole. There may not be a single focal point of control over the swarm. However, in some way, the swarm still manages to work together as a single system to accomplish a goal. An ant swarm finds food sources, gathers food, and even builds complex structures at times. A flock of birds avoids predators and successfully migrates. Bees gather nectar for the hive over a wide range of conditions and environments. Theories of behavior-based, or reactive, intelligence apply to these swarms of entities. Swarms often function in an intelligent manner through the reactive behaviors implemented by their entities. Through the reactive behaviors of many individual entities, intelligence emerges (Kennedy et al. 2001 ).
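The emergence described above can be sketched with boids-style reactive rules, in which each agent responds only to its neighbors through separation, cohesion, and alignment. The weights, radii, and starting geometry below are illustrative assumptions, not parameters from Kennedy and Eberhart or from this article’s simulation.

```python
import numpy as np

def flock_step(pos, vel, dt=0.1, r_sep=1.0, w_sep=1.5, w_coh=0.05, w_ali=0.1):
    """One reactive update per agent: avoid crowding neighbors (separation),
    drift toward the local center (cohesion), and match headings (alignment)."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        others = np.delete(np.arange(len(pos)), i)
        offsets = pos[others] - pos[i]
        dists = np.linalg.norm(offsets, axis=1)
        close = dists < r_sep
        if close.any():
            new_vel[i] -= w_sep * offsets[close].sum(axis=0)       # separation
        new_vel[i] += w_coh * offsets.mean(axis=0)                 # cohesion
        new_vel[i] += w_ali * (vel[others].mean(axis=0) - vel[i])  # alignment
    return pos + dt * new_vel, new_vel

# Four agents start far apart and at rest; cohesion pulls the group together
# with no central controller issuing commands.
pos = np.array([[10.0, 0.0], [-10.0, 0.0], [0.0, 10.0], [0.0, -10.0]])
vel = np.zeros((4, 2))
for _ in range(5):
    pos, vel = flock_step(pos, vel)
```

No agent knows the group’s goal; the gathering behavior emerges purely from each agent’s local reactions, which is the core claim of swarm intelligence.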

Behavior-based formation control was applied to groups of robots by Balch and Arkin (Balch and Arkin 1998 ). They successfully integrated formation behaviors with navigation and hazard avoidance both in simulation and on a set of land-based ground vehicles. The robots’ speeds and turn directions were influenced through a system of votes based on sensory inputs and communication between robots in the group. Several other related papers on formation control for groups of robots were published around the same time frame (Balch and Arkin 1998 ).

Virágh and Vásárhelyi applied principles of flocking behavior to UAVs (Virágh et al. 2014 ) (Vásárhelyi et al. 2014 ). Virágh applied agent-based models to the control of flocks of UAVs, incorporating principles of time delay in communication as well as inaccuracy of onboard sensors. Two decentralized algorithms are proposed in their research: one based on the collective motion of a flock, the other based on collective target tracking. A principle of their research is to use a realistic simulation framework to study the group behavior of autonomous robots.

Swarm algorithms for controlling groups of UAVs are also under exploration for defense systems by the US Department of Defense (Frelinger et al. 1998). Their purposes range from combat search and rescue to ballistic missile defeat, in which many of the fundamental techniques used for targeting in defense systems are similar in principle to disaster search and rescue. In both scenarios, swarms of UAVs build upon cooperative behavior-based intelligence to efficiently locate one or more targets.

A team from the Naval Postgraduate School designed a swarm control framework called the Service Academy Swarm Challenge (SASC) architecture. The SASC architecture is used to control swarms of heterogeneous robots using the C++ and Python programming languages. SASC has undergone successful field tests deployed on swarms of fixed-wing and quadrotor UAVs.

Additionally, a programming language called Buzz has been specifically designed to facilitate heterogeneous swarm robotics (Pinciroli and Beltrame  2016 ). Buzz allows behaviors to be defined from the perspective of a single robot or from the overall swarm. This programming language is capable of running on top of other frameworks and can be extended to add new types of robots.

For the purpose of disaster search and rescue, behavior-based control of land-based robots was implemented in the HELIOS system (Guarnieri et al. 2009 ). The HELIOS system consists of five land-based, tracked robots used for urban search and rescue. Two of the robots are equipped with manipulators to perform physical tasks, and the other three are equipped with cameras and laser range finders and are utilized to create virtual maps of the environment. The robots can be used separately or as a team for more complex missions. The three robots equipped with laser range finders can move autonomously in unknown environments using a collaborative positioning system. The system as a whole requires control by a human operator.

The use of unmanned aerial systems in search and rescue is an area of high interest (Erdelj et al. 2017 ) (Molina et al. 2012 ) under consideration by a number of high profile organizations, including the American Red Cross, NASA, and the Japanese Ministry of Defense (American Red Cross 2015 ). Many efforts in this area have included the use of individually piloted UAVs, rather than autonomous swarms of robots (Erdelj et al. 2017 ). For example, the European CLOSE-SEARCH project includes the deployment of a single UAV with a ground-based control station to locate someone lost outdoors (Molina et al. 2012 ). The value of UAVs for information-gathering and situational awareness acquisition has been expressed by a number of sources (Erdelj et al. 2017 ) (Molina et al. 2012 ) (American Red Cross 2015 ). Researchers at Carnegie Mellon are investigating the use of swarms of tiny UAVs to map the interiors of buildings after disasters (Williams 2015 ). However, research into the use of swarms of autonomous UAVs to aid in locating survivors during exterior search and rescue appears to be minimal.

Although UAVs and Unmanned Ground Vehicles (UGVs) are already in use for disaster search and rescue (Erdelj et al. 2017 ) (Molina et al. 2012 ) (American Red Cross 2015 ), the use of swarms of UAVs optimized to autonomously cover a disaster area, streaming useful data to operators and each other while avoiding collisions, weaving over and around obstacles, and returning to charge batteries, has been largely absent. This absence seems to be due to a combination of air traffic regulations, laws restricting the use of UAVs, and technical limitations which, until recently, have been difficult to overcome.

Due to these challenges, the control of autonomous swarms of UAVs is a relatively new phenomenon. The Naval Postgraduate School in Monterey, California, flew a swarm of 50 UAVs controlled by a single operator in 2015 as part of their Zephyr system. At the time, this event was believed to have set the world record for the most UAVs under single-operator control (Hambling 2015). The use of swarms of UAVs to aid in post-disaster assessment was imagined in 2016, in a report describing a human-machine interface to control the UAV swarm.

The Orchid disaster response system under development by the UK appears to be the closest to the approach described in this article (Ramchurn et al. 2016 ). It uses decentralized control of a swarm of UAVs to enhance disaster rescue efforts. The Orchid system is designed to interpret crowd-sourced data, building a picture of a situation and providing recommendations for resource allocation. In contrast, this article describes behavior sets and algorithms used to control UAVs to maximize sensor coverage over areas of land and water. This article also presents the results of simulated time trials using swarms of UAVs. The UAVs are controlled by three different behavior sets to search a realistically designed post-disaster location. Data of this particular nature does not appear to be present in the literature.

Distributed coordination is key to enhancing the scope and level of detail of post-disaster assessment. By distributing the workload among many units, the amount of work and the time it takes to do the work is significantly reduced. This also allows scaling the system to larger or smaller areas by simply adding or subtracting units from the swarm. Controlling these individual units through behavior-based artificial intelligence allows them to react successfully to a variety of challenging, changing situations with minimal or no operator input. The behavior-based method of robot control has been a staple of robotics for the last several decades and has a proven track record of success.

Recent technological developments have made modern UAVs more capable and cost-effective, enabling the use of coordinated swarms at reasonable cost. UAVs can be equipped with built-in hover and maneuver capabilities as well as high-definition (HD) and/or infrared (IR) cameras, wireless capabilities to stream live data, and the ability to carry small payloads or additional sensors. This combination of traits has now enabled the practical use of swarms of small, cost-effective UAVs for post-disaster assessment. To propel these efforts forward, it is important to demonstrate the significant time savings that such swarms can produce in post-disaster situations. Furthermore, developing and assessing different algorithms to control the swarm as a single, distributed system, while also maintaining the individual capability of each separate unit, is key to the success of this type of system as a whole.

The research described in this article applies the concepts of behavior-based and swarm-based intelligence to control groups of UAVs to locate survivors in disaster search and rescue scenarios. By using data gathered from town records, in-person interviews, survey data, and site visits, several scenarios were built out that depict the post-tsunami environment in 2011 Sendai City, Japan, with a large degree of accuracy. The heights and placement of structures are accurate, and the locations and behaviors of survivors within the scenario are based on real accounts (Editorial Office of the Ishinomaki Kahoku 2014 ) (Municipal Development Policy Bureau 2017 ) (Post-Disaster Reconstruction Bureau 2015 ) (Sato 2015 ) (The Center for Remembering 3.11 2015 ) (Tohoku Regional Development Association n.d. ).

The algorithms used in this research allow the UAVs to dynamically respond to changes in the environment, as well as unknown scenarios and unforeseen circumstances. For example, sensors can malfunction and the UAVs will still retain some measure of utility. A building can be “dropped” in front of a UAV in the simulation, and the UAV will successfully navigate around or over the building, then continue its task.

A dynamically changing environment is a key part of a disaster scenario. Unless injured or safe, survivors do not often stay still. People move to higher floors in buildings. They move towards lights, sounds, higher ground, helicopters, and safety (Editorial Office of the Ishinomaki Kahoku 2014 ). The weather gets cold, it may start to snow or rain, and the sun may go down (Editorial Office of the Ishinomaki Kahoku 2014 ). Night falls, day breaks, visibility changes. Any rescue approach needs to have the flexibility to accommodate these dynamic changes and respond to unknown environments. Our approach demonstrates this flexibility.

A swarm of standard, commercially available autonomous UAVs controlled by behavior-based, cooperative artificial intelligence software may significantly improve the data set containing known victim locations during disaster search and rescue efforts with minimal operator input required. For the purposes of this research, several requirements are imposed on the algorithm sets used to achieve this hypothesis. The intent of these requirements is to provide a practical, flexible system:

Performance —Gather more data faster

Achieve a simulated standard sensor coverage (30 m range) of 90% across 2 km² within 24 h.

Achieve a simulated precise sensor coverage (15 m range) of 90% across 2 km² within 24 h using a simulated, miniaturized FINDER sensor.

Scalability —Support any number of robots

Supports an arbitrary number of UAVs in the swarm. Due to computational limits during simulation executions, a maximum of 20 UAVs was used in this research.

Heterogeneity —Support mixed groups of robots and sensor configurations

Different capabilities and sensor configurations supported within the same swarm.

Different UAV types and models supported within the same swarm.

Behavior-based artificial intelligence

Behavior-based artificial intelligence is the concept that intelligence can emerge through the interactions of simple, individual behaviors lacking centralized control. Combining several well-defined but separate behaviors can result in the emergence of intelligent systemic behavior. When used in software and robotics, this approach can provide a high level of robustness, as failed behaviors can be ignored while default behaviors are activated (Brooks 1999 ). The division of logic between behavior modules can allow the system to scale to a high level of complexity without imposing an unmanageable cognitive load on software developers.

Although there are many ways to design robust systems, systems designed with a behavior-based approach to AI are well-suited to reacting to environments dynamically based on sensor inputs without prior knowledge (Brooks 1999 ). These properties are highly desirable in a post-disaster assessment system operating in a volatile environment where the failure of individual parts of a system may be common due to hazardous external factors.

Proposed technique

To enhance post-disaster assessment, search and rescue, and information gathering, we propose using a technique that combines behavior-based artificial intelligence with cooperative swarm behavior. Individual units of a swarm equipped with behavior-based AI are well-suited to perform cooperative tasks (Kennedy et al. 2001 ), as the results of their own behaviors combine together to emerge as individual unit behaviors, and these unit behaviors combine together to emerge as collective swarm intelligence (systemic behavior).

We implement behavior-based AI and cooperative behavior in a simulated swarm of UAVs to search for disaster survivors in a post-disaster environment. We measure the effectiveness of the approach by recording the detection rates over time of the survivors by the swarm. Our goal is to reach a 90% detection rate in under 24 h in simulation.
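The detection-rate metric can be sketched in code as follows. This is an illustrative construction only, not the DroneLab implementation; the function names and the log format (a list of first-detection timestamps per survivor) are assumptions.

```python
# Illustrative sketch: compute a survivor detection-rate curve from
# simulation logs. Each log entry is (time_hours, survivor_id), recorded
# whenever any UAV's sensor covers that survivor.

def detection_rate_curve(detections, total_survivors, horizon_h=24.0, step_h=1.0):
    """Return [(t, fraction_detected)] sampled every step_h hours."""
    first_seen = {}
    for t, sid in detections:
        if sid not in first_seen or t < first_seen[sid]:
            first_seen[sid] = t          # keep earliest detection only
    curve = []
    t = 0.0
    while t <= horizon_h:
        found = sum(1 for ft in first_seen.values() if ft <= t)
        curve.append((t, found / total_survivors))
        t += step_h
    return curve

def goal_met(curve, target=0.9):
    """True if the target detection rate is reached within the horizon."""
    return any(rate >= target for _, rate in curve)
```

With this sketch, the 90%-in-24-h hypothesis reduces to checking `goal_met(curve)` on the curve produced by a simulation run.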

This approach can be applied to any sort of information gathering and is not limited to just search and rescue. However, using search and rescue gives a direct, tangible way to understand the benefits and effectiveness of the approach.

Proposed algorithms and control methods

To enhance survivor detection through the use of UAV swarms, several control methods are considered (Fig.  1 ). These methods are all implementations of behavior-based AI. Each control method, also referred to as a method or an algorithm , is simply a set of ordered behaviors conceived of and developed during the research. The order of the behaviors within each method is critical as it determines the priority level at which they are executed. As behaviors can be grouped and ordered in many different ways, it is important to figure out which set of behaviors, and in which order, is most effective. The three sets of behaviors (methods) were selected based on the anticipated effectiveness of each set of behaviors as determined by the researchers.

Standard method —UAVs all follow the same pattern.

Spiral method —Upon locating a “critical mass” concentration of survivors, a single UAV moves outward in a spiral pattern, then returns to previous search method.

Scatter method —Each UAV simultaneously moves to a different location in the search pattern.

figure 1

UAV search methods. Actual patterns are more complex; the patterns depicted here are simplified for clarity. Blue dots are UAVs, gray areas are destination targets, and red triangle is a concentration of survivors. From left to right: standard, spiral, and scatter

The behaviors in the behavior-based software architecture used in this research are all original and were conceived of and created by the researchers. They are implemented as separate, named, plug-and-play software modules. Each of the three control methods consists of some subset of the following 12 behavior modules. These modules are described in detail in the “ Method implementation ” section and briefly here:

Launch —Take off from a stationary position

Avoid —Avoid collisions with buildings and obstacles

Climb —Climb over obstacles

Recharge —Recharge batteries

Height —Maintain a certain height above the ground, buildings, or large objects

Spiral —Move out in an expanding spiral

Form —Maintain distance between other UAVs

Repel —Move away from other UAVs when too close

Seek —Move directly to a specified GPS location

Waypoint —Move towards a preset pattern of waypoints

Scatter —Move individually towards an unallocated waypoint among a set

Wander —Choose a random location and move towards it

These behaviors were conceived based on deductive reasoning, literature search (Brooks 1999 ) (Kennedy et al. 2001 ), and extensive trial and error in simulation. Each behavior is assigned a priority. The UAV control software arranges priorities by the order the behavior modules are loaded into the software. Earlier behaviors, when triggered, prevent later behaviors from occurring at the same time. That is to say, if the avoid behavior is active at a given time, no behaviors at a lower priority than avoid in the list will be activated (such as height or recharge ). A given time in this situation refers to a given tick in the software, which is approximately 15–16 ms. This measure is consistent with the duration of a tick used in personal computers running Microsoft Windows, Apple macOS, or Linux, and mobile operating systems used in UAVs such as the Google Android operating system and iOS.
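The priority scheme described above can be sketched as follows. This is a simplified illustration of subsumption-style arbitration, not the actual UAV control software; the class and function names are our own.

```python
# Illustrative sketch of load-order priority arbitration: behaviors are
# checked in the order they were loaded, and the first one whose
# activation condition fires suppresses all lower-priority behaviors
# for that tick (~16 ms).

class Behavior:
    def __init__(self, name, is_active, act):
        self.name = name
        self.is_active = is_active   # state -> bool (activation condition)
        self.act = act               # state -> command to execute

def tick(behaviors, state):
    """One control tick: run the highest-priority active behavior."""
    for b in behaviors:              # earlier in the list = higher priority
        if b.is_active(state):
            return b.name, b.act(state)
    return None, None                # no behavior active this tick
```

For example, loading `avoid` before `height` guarantees that collision avoidance suppresses altitude-keeping whenever both would otherwise activate:

```python
behaviors = [
    Behavior("avoid", lambda s: s["obstacle_near"], lambda s: "turn"),
    Behavior("height", lambda s: s["too_low"], lambda s: "ascend"),
]
tick(behaviors, {"obstacle_near": True, "too_low": True})   # avoid wins
```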

The UAV re-checks its sensor input at a rate of roughly 60 Hz (60 frames per second), or every 16 ms; thus, reactions that activate different behaviors occur quickly and, to an observer, often blend together into seemingly integrated motion. Perhaps this type of behavior is even at the core of evolved intelligence (Brooks 1999) (Kennedy et al. 2001).

The order of behaviors is critically important to the overall operation of the system. For example, if the height behavior was prioritized over the recharge behavior, the robot would never be able to charge its batteries. Every time it tried to land at the battery charging station, the height module would make it climb again! If the avoid behavior was ordered below seek, the robot would run into obstacles and likely crash while moving to its destination. Thus, the emergent intelligence of these robots is a product of the careful, simultaneous consideration of both wholes and parts (Arnold and Wade 2015 ). The desired result emerges from the determination of what each behavior should do in the context of the others and how the behaviors are correctly prioritized as a whole system.

A major advantage to this approach is flexibility in the software; in the software designed for this research, behavior modules can be coded and inserted by outside parties. A simple configuration file determines their load order (priority), and they can be added to the system by simply placing the compiled behavior module in the Behaviors folder on the host computer’s hard drive. In this way, the simulation system is extremely flexible in that it allows testing of all sorts of behaviors and orders without requiring any changes to the base system.

Method implementation

The details of each of the behaviors and control methods are explained in this section. It is important to note that UAVs are continuously broadcasting their own locations over a wireless network and receiving and processing the locations of other UAVs.

Launch— Take off from a stationary position

Activation: Robot is not flying, is within 10 m of deployment location, and has at least 99% battery life.

Begin ascending. Note that nothing more is needed; once the robot is flying, the height module will take over and bring it to the correct altitude.

Results: Robot will ascend from a previously landed position.

Avoid— Avoid collisions with buildings and obstacles

Activation: Potential collision detected based on speed, angle of movement, acceleration, and location of nearby objects as reported by sonar sensor.

If moving faster than acceleration rate, decelerate.

If moving slower than acceleration rate, accelerate full speed at a 200° angle from current heading. This essentially turns the robot in the opposite direction of the imminent collision, at a slight 20° angle difference. The 20° angle difference prevents the robot from moving straight backwards, and then forwards again into the same situation as the previously executed behavior takes over.

If, after 12 s, the robot is still within 2 m of the original location, change the deflection from 200° to 160° (a 20° angle on the other side of the opposite direction).

Results: Robots will “bounce around” objects in their way.
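The deflection arithmetic of the avoid behavior can be sketched as follows; the function name is illustrative, and angles are in degrees.

```python
# Sketch of the avoid behavior's heading deflection: turn roughly
# opposite the current heading, offset 20 degrees to one side, and flip
# the offset to the other side (160 instead of 200) when the robot
# appears stuck near its original location.

def avoid_heading(current_heading_deg, stuck=False):
    """Return the new heading after deflection, normalized to [0, 360)."""
    deflection = 160 if stuck else 200
    return (current_heading_deg + deflection) % 360
```

The modulo keeps headings well-formed when the deflection wraps past 360°, e.g. `avoid_heading(350)` yields 190.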

Climb— Climb over obstacles

Activation: An obstacle is closer than 5 m as detected by sonar sensor.

Accelerate upwards at maximum acceleration, until obstacle is not detected horizontally to robot.

Stabilize horizontal movement during upwards acceleration.

Results: As a robot nears an obstacle, it will ascend up over the obstacle, where the height module then takes over and brings the robot to the appropriate height above the obstacle.

Recharge— Recharge batteries

Activation: Less than 5 min of battery life left.

Move directly to deployment location at 75% of maximum speed.

If within 3 m of deployment location, reduce speed until stabilized, then land.

Results: When a robot’s battery becomes low, it flies directly back to the deployment location and lands.

Height— Maintain a certain height above the ground or large objects

Activation: Closest object below the robot is six meters or more away, or four meters or less away.

If the closest object is six meters or more away, descend at maximum acceleration.

If the closest object is four meters or less away, ascend at maximum acceleration.

Results: Robots tend to maintain the desired height above objects below them.
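This dead-band logic can be sketched directly; the function name and string commands are illustrative.

```python
# Sketch of the height behavior: hold roughly 5 m above whatever is
# directly below, acting only when outside the 4-6 m dead band.

def height_command(distance_below_m):
    """Return 'descend', 'ascend', or None (inside the dead band)."""
    if distance_below_m >= 6.0:
        return "descend"
    if distance_below_m <= 4.0:
        return "ascend"
    return None
```

Because no command is issued inside the band, the behavior deactivates there and lower-priority behaviors are free to run.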

Spiral— Move outwards in an expanding spiral

Activation: Four or more survivors detected within a 10-m radius of each other.

Move in an expanding spiral from the center point of the located survivors until reaching a 100-m radius.

Results: This behavior can be equated to the “expanding square” visual search pattern (Washington State Department of Transportation 1997 ) but is implemented as an expanding circle instead of a square. When the UAV detects a concentrated group of survivors, it begins to spiral outwards from the center location of the survivors. As survivors often congregate in larger groups and move towards groups, it is theorized that this behavior will lead to the discovery of additional survivors that may not have been able to reach the detected group.
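One way to realize the expanding circular pattern is an Archimedean spiral of waypoints; the sketch below is our own construction (parameter values are illustrative), not the DroneLab implementation.

```python
import math

# Sketch of an expanding spiral of waypoints around a detected survivor
# cluster, out to a maximum radius. The radius grows by `step` metres
# per full turn (an Archimedean spiral).

def spiral_waypoints(cx, cy, max_radius=100.0, step=5.0, points_per_turn=12):
    """Return (x, y) waypoints spiraling outwards from (cx, cy)."""
    waypoints = []
    theta = 0.0
    dtheta = 2 * math.pi / points_per_turn
    while True:
        r = step * theta / (2 * math.pi)   # radius grows linearly with angle
        if r > max_radius:
            break
        waypoints.append((cx + r * math.cos(theta), cy + r * math.sin(theta)))
        theta += dtheta
    return waypoints
```

Feeding these points to the seek behavior one at a time would trace the spiral; a tighter `step` trades search time for denser sensor coverage.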

Form— Maintain 50 m ± 5 m distance between other robots

Activation: Closest robot is either within 45 m or more than 55 m away.

If within 45 m, accelerate in opposite direction of closest robot at maximum acceleration rate.

If more than 55 m away, accelerate towards closest robot at maximum acceleration rate.

Results: This is a type of flocking behavior (Kennedy et al. 2001 ). Robots tend to group up together and stick together in large groups. Small groups can split off, but as they move near each other, they tend to re-engage the larger group.

Repel— Stay at least 10 m away from other robots

Activation: Closest robot is within 10 m.

Accelerate in opposite direction of closest robot at maximum acceleration rate.

Results: This behavior prevents robots from moving too close to each other in the absence of a flocking behavior such as form .
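The spacing rules behind form and repel can be sketched as follows; the signatures are our own, and the command is reduced to a sign along the line to the nearest neighbour.

```python
# Sketch of the form (hold 50 +/- 5 m from the nearest UAV) and repel
# (never closer than 10 m) spacing rules. Return value is the sign of
# the acceleration along the line to the nearest neighbour:
# -1 = away from it, +1 = towards it, 0 = no action.

def form_command(nearest_dist_m):
    if nearest_dist_m < 45.0:
        return -1            # too close: accelerate away
    if nearest_dist_m > 55.0:
        return +1            # too far: accelerate towards
    return 0                 # inside the 45-55 m band

def repel_command(nearest_dist_m):
    return -1 if nearest_dist_m < 10.0 else 0
```

Swapping form for repel, as the scatter method does, removes the attractive term entirely, which is what lets the swarm disperse.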

Seek— Move directly to specified GPS location

Activation: Seek location specified, and robot is more than 10 m away.

Accelerate towards specified location at maximum acceleration rate.

Results: Robots can be ordered to move directly to specific locations.

Waypoint— Move towards a preset pattern of waypoints

Activation: Set of search waypoints exists.

Accelerate at maximum rate towards current waypoint.

Once waypoint is within camera detection range, broadcast completion of waypoint over wireless network and set next waypoint as current waypoint.

Results: As the UAVs act as a single entity, they “compete” to reach the next waypoint. No single UAV is in charge, and there is no “leader” UAV. Any UAV that reaches the next waypoint will send a message to all other UAVs declaring that the waypoint has been reached. Upon receipt of this message, the UAVs will begin to move to the next waypoint. Thus, as a single system, the UAVs can be assigned one set of waypoints and they will effectively explore every waypoint as a swarm. In essence, waypoints tell the swarm to ensure that some part of your swarm, any part, covers this waypoint . In the simulations used, UAVs communicated their waypoint information via Wi-Fi. Thus, delays or long distances in Wi-Fi could have an effect on the swarm’s behavior as a whole.

The waypoint search used in this research resulted in a version of a search called “parallel track” or “parallel sweep” (Washington State Department of Transportation 1997 ) performed as a swarm. Also, when this behavior combines with avoid , the UAVs perform a variation of the “contour search” (Washington State Department of Transportation 1997 ) because they automatically avoid collisions. These are some of the interesting emergent properties of the interactions between simple behaviors.
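The leaderless waypoint protocol can be sketched as follows. The broadcast is modelled here as a shared index for brevity; in the system described above it travels over the ad hoc Wi-Fi network. Class and method names are illustrative.

```python
# Sketch of the leaderless waypoint protocol: every UAV chases the same
# current waypoint; whichever UAV reaches it first "broadcasts"
# completion, and the whole swarm advances to the next waypoint.

class SwarmWaypoints:
    def __init__(self, waypoints, detect_range_m=15.0):
        self.waypoints = waypoints
        self.detect_range = detect_range_m
        self.current = 0                 # index shared by the whole swarm

    def report_position(self, x, y):
        """Called by any UAV each tick; advances the swarm on arrival."""
        if self.current >= len(self.waypoints):
            return False                 # pattern complete
        wx, wy = self.waypoints[self.current]
        if ((x - wx) ** 2 + (y - wy) ** 2) ** 0.5 <= self.detect_range:
            self.current += 1            # broadcast: everyone moves on
            return True
        return False
```

Because any UAV can trigger the advance, the swarm covers each waypoint with "some part of the swarm, any part", exactly as described above.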

Scatter— Move towards a pre-defined search pattern waypoint which is not already allocated to another UAV

Once waypoint is within camera detection range, broadcast completion of waypoint over wireless network and set next waypoint as current waypoint. Next waypoint must not be the current waypoint of any other UAV in the system.

Results: The swarm of UAVs scatters across the disaster area, searching multiple different locations simultaneously.

Wander— Choose a random location and move towards it

Activation: Always. Note that this behavior is rarely activated in a fully functioning system because it is almost always subsumed by some other behavior.

If location sensor exists and is functioning, choose a random wander location 100 m away and accelerate towards it at half speed.

If within 10 m of current wander location, choose new location.

If a location sensor does not exist or is malfunctioning, set a random target heading and proceed at half speed.

After traveling for 1 min at current heading, change to a different heading.

Results: This behavior is included for robustness. Wander is a default behavior in case other behaviors crash or fail to execute for any reason. If all else fails, a UAV will try to wander to a new location which may have different sensory inputs and/or different terrain, facilitating a better result.

Table 1 shows the behaviors used by each control method. Although these methods may appear similar in that they share many of the same behaviors, most of those behaviors are a necessary foundation for the successful function of any higher-order robot behavior. A living being must eat, drink, and breathe before she can do more complex tasks. In the same way, our UAVs must launch, avoid obstacles, and maintain height before they search for disaster survivors. The essential, method-defining behaviors are the ones included in, or left out of, each method.

Standard method

A swarm of UAVs operating the standard method behavior set (Fig.  2 ) will launch, then proceed to the first waypoint in their search pattern (Fig.  3 ). Along the way, they will maintain appropriate distances between each other by continuously broadcasting their locations over a wireless network, avoid collisions with obstacles by maneuvering around or climbing over them, and maintain proper height. When the first UAV in the swarm reaches the current waypoint location, it broadcasts this data to the rest of the swarm. As the UAVs receive this data, they begin moving towards the next waypoint in the search pattern. In some cases, UAVs on the far side of the swarm may already be close to the new waypoint. The result is that a large swarm of UAVs may “zig-zag” between locations in a way that can be efficient, whereas a smaller swarm of just one, two, or three UAVs may actually fly back and forth between the waypoints. Both patterns maximize coverage area and follow the same behavior software, though an observer will notice significant differences in the actual flight paths of the UAVs and may conclude (incorrectly) that they are using different artificial intelligence software.

figure 2

Standard method behavior set

figure 3

Standard method showing the paths of three UAVs launched from the blue rectangle on the center left. Red, yellow, and green dots are survivors in different states of discovery. In this scenario, UAVs moved in a search pattern across the area starting in the northwest and ending in the southeast. Photograph by Geospatial Information Authority (GSI) of Japan (Geospatial Information Authority of Japan 2011)

Upon a low battery indication, a UAV will break from formation and return to its deployment location, land, and recharge its batteries. When the recharge is complete, the launch behavior will detect a full battery and automatically activate. The robot will then launch and proceed to the next waypoint, likely meeting up with the rest of the swarm along the way.

While following this method, it is possible and likely that robots will break into smaller groups as they recharge their batteries and return to the field. The design and architecture do not prevent or discourage this, and it is an emergent result of the complex interactions of simple behaviors.

Spiral method

The spiral method uses the standard method but implements an additional behavior: spiral, which is inserted after height and before form in the behavior priority list.

The spiral method behavior set (Fig.  4 ) operates similarly to the standard method, but differs in one significant way. While engaging in the standard method search, when a UAV’s spiral behavior is activated through detection of a concentration of survivors, the UAV “breaks away” from the group and performs a spiral maneuver out to a 100-m radius (Fig.  5 ). After completing this maneuver, the robot returns to its regular formation within the group. Within the software architecture, the only requirement to implement this method is the insertion of the spiral behavior module in the correct place in the behavior list. No other changes need to be made. That such a change can be made so simply is one of the advantages of the behavior-based artificial intelligence paradigm.

figure 4

Spiral method behavior set

figure 5

Spiral method showing the paths of three UAVs. As with standard method, UAVs launched from the blue rectangle. Note the circular pattern in the northeast corner as a UAV located the group of survivors (green dots) on top of the elevated building and performed the spiral behavior while the others continued the search. Photograph by GSI of Japan (Geospatial Information Authority of Japan 2011 )

The spiral method accounts for evidence gathered during disaster search and rescue (Editorial Office of the Ishinomaki Kahoku 2014 ) (A. E. S. M. Staff Member 2017 ) showing that survivors are likely to group together following a disaster. If a few people are found together, it is likely that more are present as well. Spiraling outwards from the locations of the first few people found is likely to result in the discovery of new survivors.

The distressed person density information could be used by rescue workers in many ways, such as determining where and when to send rescue vehicles such as helicopters or boats. Also, the spiral method may result in the discovery of distressed persons attempting to unite with the group, and coming close, but failing to cover the last bit of distance due to insurmountable obstacles, as happened during the 2011 tsunami (Editorial Office of the Ishinomaki Kahoku 2014 ).

Scatter method

The scatter method differs from standard and spiral methods significantly in that it sends each UAV to a different point in the search pattern. The waypoint behavior module is removed completely and replaced with a scatter module. Also, the form module is replaced with the repel module.

The scatter method (Fig.  6 ) represents a significant diversion from both the standard and spiral methods. This method is still cooperative, but rather than operating as a single flock in which all robots seek the same point and switch to the next when any one UAV reaches it, the scatter method gives each UAV its own destination point, different from all the others (Fig.  7 ). Theoretically, this allows the swarm to spread over a larger area in a shorter time.

figure 6

Scatter method behavior set

figure 7

Scatter method showing the paths of three UAVs. As with standard method, robots launched from the blue rectangle. However, each UAV proceeded to a different location in the search pattern, scattering them across the area. Photograph by GSI of Japan (Geospatial Information Authority of Japan 2011 )

Destinations are selected based on a staleness factor , that is, points that have not been reached yet by the swarm as a whole are highest priority, whereas points that have been visited further in the past are slightly lower, and points that have been recently visited are the lowest in priority. If one UAV is already seeking a point, a different point is chosen. If all points are already chosen, the UAV chooses an optimal point based on staleness factor. Using this method, the swarm of UAVs will effectively scatter across the disaster area, searching multiple different points simultaneously.
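The staleness-factor selection can be sketched as follows; the function name and data structures are illustrative, not the DroneLab implementation.

```python
# Sketch of scatter's staleness-factor selection: prefer never-visited
# waypoints, then the least recently visited one; skip points already
# claimed by another UAV unless every point is claimed.

def choose_destination(points, last_visit, claimed, now):
    """points: waypoint ids; last_visit: id -> time of last visit
    (absent = never visited); claimed: ids targeted by other UAVs."""
    def staleness(p):
        # never visited -> infinitely stale; otherwise time since visit
        return float("inf") if p not in last_visit else now - last_visit[p]
    free = [p for p in points if p not in claimed]
    candidates = free if free else list(points)   # fall back if all claimed
    return max(candidates, key=staleness)
```

Ranking by staleness rather than strict "visited / not visited" lets the swarm keep re-sweeping old areas once the pattern has been covered, which matters when survivors move.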

Although in theory the scatter method might appear to be a better option than standard or spiral methods given that different UAVs are able to explore different locations in parallel, in practice, a swarm of UAVs flocking together significantly increases the probability of survivor detection. Sensor range is limited, and a group of UAVs flocked together maintaining a certain distance from each other effectively forms a large, single system with a combined, redundant sensor range. Without flocking, a single UAV’s sensor range is limited; therefore, as locations are explored separately, the search pattern must necessarily be quite complex or contain a large number of waypoints to approach the same level of effectiveness as the other methods. In this case, a hybrid method between scatter and spiral could be more effective.

Performance analysis

Assumptions

While developing the simulation software used in this research, several assumptions were made about the UAVs:

Programmable —The UAVs are programmable in that they are controlled by modifiable software and can receive commands to change speed and direction.

Quadcopter —UAVs are standard multirotor helicopters lifted and propelled by four rotors.

Stability control —UAVs have built-in stability control that allows them to hover stably in one location or can be easily equipped with equivalent Commercial Off the Shelf (COTS) software to provide this effect.

Network unavailable —Due to loss of infrastructure and other inherently challenging circumstances during most disaster search and rescue situations, it is assumed that a commercial Internet network may not be available. The UAVs will set up their own ad hoc network to communicate with each other. This network is not dependent on existing network infrastructure.

The simulation software allows the selection of different commonly available off-the-shelf UAVs. It also allows UAV parameters to be customized. For the scenarios used in this research, Table  2 shows the parameters that were used in the simulation based on current commercially available data.

Sensors and equipment

In addition to the software behavior modules, UAVs are provided with simulated sensors and equipment values to be customized (Table  3 ). Collision avoidance depends on sonar sensors. One sonar sensor is mounted down-facing, while the others are outward-facing from the left, right, forward, and rear sides of the UAV. The sonar data is fused together to form a single sonar sensor picture. Formation and flocking behavior depends on both sonar sensors and the GPS. Communication between UAVs, and therefore cooperative swarm behavior, depends on the Wi-Fi HD communicator. UAVs determine their own locations, and, by extension, which direction to travel to reach a waypoint, by using the GPS sensor. The behavior modules are highly dependent on the input from these sensors.

These sensors can be turned off or on, or “broken” in the simulation to simulate how a UAV will behave in different practical situations. The range and effectiveness of the sensors can also be adjusted. This allows the designing of a robust system prior to actual deployment and hardware testing.

The UAV’s camera is mounted in a down-facing position on the bottom of the chassis. Although a camera radius of just 15 m may seem small, the intent of this range is to capture difficult environmental conditions such as fog, snow, rain, and debris, which may interfere with a camera’s range of vision. A 15-m radius provides a conservative estimate that likely falls within the effective parameters of a wide range of commercially available cameras and sensors.

Simulation scenario

The environment chosen for simulation in this research was the town of Arahama, in Wakabayashi, Sendai City, Miyagi Prefecture, Japan, one day after the 2011 Great East Japan Earthquake and Tsunami. This location was chosen because it was one of the hardest hit by the tsunami, and a great deal of data were available on the town, including satellite imagery, population, physical layout, timetable of the tsunami, search and rescue data, personal interviews, and locations of survivors. Within this environment, three different patterns were considered when setting the locations of distressed persons within the simulation (Fig.  8 ):

Random —Distressed persons were scattered at random across the search area.

Congregated —Distressed persons were concentrated at likely rescue locations according to data from a variety of sources. For example, schools, parking decks, and other tall buildings contained more survivors while low areas contained few, if any (Editorial Office of the Ishinomaki Kahoku 2014) (Municipal Development Policy Bureau 2017) (Post-Disaster Reconstruction Bureau 2015) (Tohoku Regional Development Association n.d.) (A. E. S. M. Staff Member 2017).

Mixed —Half of the distressed persons were congregated and the other half random.

figure 8

Survivor distribution patterns. Gray boxes are buildings, red dots are survivors. From left to right: random, congregated, mixed

The mixed pattern was selected and used for our research. Although the congregation pattern is based on real data acquired at Arahama (Editorial Office of the Ishinomaki Kahoku 2014) (Municipal Development Policy Bureau 2017) (Post-Disaster Reconstruction Bureau 2015) (The Center for Remembering 3.11 2015) (Tohoku Regional Development Association n.d.) (A. E. S. M. Staff Member 2017), randomly scattered survivors should not be discounted, since some may simply not have been found during rescue efforts. Therefore, the mixed pattern is the best fit for this research. Practical algorithms should show greater effectiveness at congregation-heavy patterns than at random patterns.
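A mixed survivor layout of this kind might be generated as in the sketch below. This is an illustrative construction only: the coordinates, spread, and area dimensions are assumptions, not values taken from the Arahama data.

```python
import random

# Sketch of a mixed survivor distribution: half of the survivors are
# placed near known congregation points (schools, parking decks, tall
# buildings), and the other half uniformly at random across the area.

def mixed_distribution(n, congregation_points, area=(2000.0, 2000.0),
                       spread_m=20.0, rng=None):
    """Return n (x, y) survivor positions, half congregated, half random."""
    rng = rng or random.Random(0)        # seeded for reproducible scenarios
    survivors = []
    for i in range(n // 2):              # congregated half: Gaussian clusters
        cx, cy = congregation_points[i % len(congregation_points)]
        survivors.append((rng.gauss(cx, spread_m), rng.gauss(cy, spread_m)))
    for _ in range(n - n // 2):          # random half: uniform over the area
        survivors.append((rng.uniform(0, area[0]), rng.uniform(0, area[1])))
    return survivors
```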

For the purpose of the simulation, satellite imagery of the actual location was acquired (Figs.  9 and 10 ). Ideally, a photo taken immediately after the tsunami strike would have been used; unfortunately, such imagery was not available. The image used was taken on March 12, 2011, the day after the tsunami strike. Building locations were placed according to the imagery and checked against height data as well as cross-referenced against actual photos and on-site interviews with local residents.

figure 9

Satellite photo of the town of Arahama taken on March 12, 2011 (Geospatial Information Authority of Japan 2011). Left is the full photo, and right is a 300 m² sub-section built in the DroneLab environment builder, showing red buildings and red-dot survivors. The large buildings in the upper left corner of the right photo are the ruins of the Sendai Arahama Elementary School, a primary evacuation site during the tsunami. Photograph by GSI of Japan (Geospatial Information Authority of Japan 2011)

figure 10

Left is northwest corner of satellite photo in Fig.  9 , and right is the same area after build-out using DroneLab environment builder. Buildings shown as red rectangles and survivors as red dots. Photograph by GSI of Japan (Geospatial Information Authority of Japan 2011 )

Validation of the disaster area model

The model of the disaster area was built by overlaying structural data on the satellite photos shown above, resulting in a high level of face validity. The heights of the buildings were determined by on-site survey and measurement. As time did not allow for all buildings to be measured and some have in fact been demolished since 2011, buildings that could not be directly measured were assigned height data based on their types, locations, and designs. For example, in a row of similar houses, the height of a single house may have been measured and then used for all similar houses.

To accurately represent survivor distribution in the simulation model, data from a variety of sources were used. These data can be collated to show a pattern in which groups of certain numbers of survivors gathered at certain places within the town (Editorial Office of the Ishinomaki Kahoku 2014) (Municipal Development Policy Bureau 2017) (Post-Disaster Reconstruction Bureau 2015) (The Center for Remembering 3.11 2015) (Tohoku Regional Development Association n.d.) (A. E. S. M. Staff Member 2017).

As no data is available on the locations of victims lost to the tsunami in Arahama, a random distribution pattern was chosen to represent the remainder of the town’s population. The mixed pattern using the real data combined with the use of random distribution for the remaining survivors, based on the total population of the town, is considered a reasonable way to represent the survivor locations in the simulation model based on available data.

The model was validated by comparing the locations and heights of buildings, numbers of survivors, and congregated groups of survivors to satellite photos, aerial photos, and records obtained in Arahama detailing the events during and immediately following the tsunami. The resultant simulation model was used as a base for the simulations performed during this research.

Results in simulation

Results were generated using the DroneLab Unmanned Aerial System (UAS) simulation software sponsored by the Japan Acquisition, Technology, and Logistics Agency's (ATLA) Air Systems Research Center (ASRC).

DroneLab runs on multiple platforms, including macOS, Unix-like operating systems, or Microsoft Windows machines, using the Java environment. The simulation environment is user-definable, displaying either an image as a background or a blank field of 2000 × 2000 m. A background image is typically a satellite photo of arbitrary size. The environment is three dimensional, displaying both a two-dimensional top-down view and a three-dimensional view. Various sizes, heights, and dimensions of square, circular, and rectangular objects can be placed on the field both before and during a simulation. Survivors can also be placed on the field at specific locations and/or distributed randomly. Deployment locations for rescue workers can be placed as rectangular areas on the field. The aerial robots are displayed as circles with spinning bars in their centers, whereas the obstacles are red objects in the two-dimensional view and yellow objects in the three-dimensional view. Survivors are shown as red dots on the field, turning yellow and finally green based on their states of discovery. Sensor range displays can be toggled on and off from the simulator’s user interface.

DroneLab allows the inclusion of one or many robots equipped with simulated sensors and equipment and supports the addition of pluggable behavior modules written in the Java programming language. It includes a physics engine that allows specification of speeds, acceleration rates, and various other physical properties, and provides collision checking and gravity. DroneLab allows the acceleration of time and the addition of obstacles “on-the-fly” to create a dynamic virtual environment.
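The pluggable-module design described above can be illustrated with a minimal sketch in Java, the language DroneLab's behavior modules use. The interface and class names below (`Behavior`, `FlyEastBehavior`, `BehaviorDemo`) are hypothetical illustrations, not the actual DroneLab API, which is not publicly documented.

```java
// Hypothetical sketch of a pluggable behavior module in the spirit of
// DroneLab's Java-based design; the names are illustrative, not the
// actual DroneLab API.
interface Behavior {
    // Returns the desired (vx, vy) velocity in m/s for this simulation tick.
    double[] step(double x, double y, double batteryFraction);
}

// A trivial module: fly east at 5 m/s until the battery drops below 30%,
// then hold position (a real module would hand off to a recharge behavior).
class FlyEastBehavior implements Behavior {
    public double[] step(double x, double y, double batteryFraction) {
        if (batteryFraction < 0.30) {
            return new double[] {0.0, 0.0};
        }
        return new double[] {5.0, 0.0};
    }
}

public class BehaviorDemo {
    public static void main(String[] args) {
        Behavior b = new FlyEastBehavior();
        double[] v = b.step(100.0, 200.0, 0.80);
        System.out.println(v[0] + ", " + v[1]); // prints "5.0, 0.0"
    }
}
```

Keeping each behavior behind a small interface like this is what allows modules to be swapped in "on-the-fly" and combined into emergent swarm behavior without changing the simulator core.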

Figures 11 and 12 show the results of the simulation when applied to swarms of UAVs using the parameters presented in Table 2. The percentage of survivors seen over time by the IR camera, referred to as camera coverage, was chosen as the measure to display, as its range can be generalized to many other sensors. Each UAV's simulated camera detection radius was limited to 15 m to account for environmental conditions such as darkness, fog, rain, snow, and debris. The camera coverage percentage shown on the vertical axis of the figures is the number of survivors detected by the camera of any UAV divided by the total number of survivors in the simulation, that is, the fraction of survivors detected by the swarm as a whole. Survivor distributions use the mixed method described previously in Fig. 8. Three hundred fifty survivors were congregated on and around likely evacuation sites (Editorial Office of the Ishinomaki Kahoku 2014) and 300 were scattered randomly across the disaster area, for a total of 650 survivors. According to sources from Arahama (Editorial Office of the Ishinomaki Kahoku 2014; Sato 2015; A. E. S. M. Staff Member 2017), the number 650 is roughly equal to the population of the local area at the time of the tsunami. The time axis shows the hours, minutes, and seconds since the UAV swarm was deployed.
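The camera coverage measure reduces to a simple ratio, which can be sketched as follows; the class and method names are ours, not DroneLab's.

```java
// Camera coverage as defined in the text: the number of survivors detected
// by the camera of any UAV, divided by the total number of survivors in the
// simulation. Class and method names are illustrative, not part of DroneLab.
public class CoverageMetric {
    static double coverage(int detectedByAnyUav, int totalSurvivors) {
        return (double) detectedByAnyUav / totalSurvivors;
    }

    public static void main(String[] args) {
        // With the paper's 650 simulated survivors, the 90% coverage goal
        // corresponds to 585 detections.
        System.out.println(coverage(585, 650)); // prints "0.9"
    }
}
```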

Figure 11

Average percentage of survivors found over time, referred to as camera coverage, by a swarm of five UAVs across six simulation runs. Scatter method was the slowest and spiral method the fastest to reach the goal of 90%. The jump in coverage just after 43 min occurs when the swarm encounters an evacuation center such as a school in which many survivors are co-located

Figure 12

Average camera coverage rates of swarms of five, 10, and 20 UAVs for all three methods shown in parallel. ST is standard, SP is spiral, and SC is scatter. The scatter method with five UAVs was the slowest, and the spiral method with 20 UAVs the fastest, to reach the 90% coverage goal

Figures 11 and 12 show that every scenario achieved 90% or greater camera coverage in under 2 h. Swarms of 10 or 20 UAVs using the standard or spiral method were consistently able to discover 90% or more of the simulated survivors in less than an hour. Left running for 4 h, swarms of 10 or more UAVs consistently achieved a 98% or 99% location rate as they re-ran their routes in flocking formation. Similar differences among the algorithms were observed when the UAVs were equipped with a simulated 5-m-range FINDER sensor instead of the camera, though discovery times generally increased by 30–50% with the shorter-ranged sensor.

These results are significant: they show the potential to spot 90% of the visible survivors of a disaster, even in hazardous, non-drivable, or inundated areas, in under an hour with little operator intervention using the proposed technique. This is well within the 24-h time limit suggested as optimal for disaster response (Editorial Office of the Ishinomaki Kahoku 2014; Bartels et al. n.d.), even when the potential multi-hour mobilization times for manned rescue teams are factored in. These results likely represent a significant improvement over existing methods. Actual data on the time it takes rescue workers to thoroughly search an equivalent area using existing methods without UAVs varies by situation and is difficult to quantify; however, available evidence suggests that it can take days to search the most significantly affected areas (Editorial Office of the Ishinomaki Kahoku 2014).

Additionally, there are many situations in which long-term search and rescue efforts are necessary and difficult to sustain with manned personnel (American Red Cross 2015). At times, survivors are discovered days or even weeks after the initial disaster strikes. In these situations, swarms of UAVs may continue operating and searching, with little human interaction needed, to achieve a high degree of sensor coverage over a short period of time. A swarm of 10 UAVs using the spiral method was able, on average, to achieve 98.9% camera coverage in under 90 min. This rate slowly grows over time due to the unpredictable nature of the swarm patterns: each time the pattern is re-flown, the position of each UAV differs due to responsive flocking behavior. This element of randomness improved long-term search results and could be leveraged to a higher degree in a non-simulated system.

Despite these results, it is important to acknowledge that for these data to be useful at present, a human rescue worker must still view and process them so that survivors can actually be rescued. The swarms of UAVs simulated in this research are not intended to perform actual rescues, although such efforts are possible (Erdelj et al. 2017; American Red Cross 2015). This research therefore acts as an initial step to demonstrate what lies within the realm of the possible using a behavior-based UAV swarming approach to disaster search and rescue, and it provides initial algorithms and search methods that have proven effective in simulation.

Areas of improvement

Despite the positive outcome evident in the simulation results, one persistent cause of delay across all scenarios was the battery recharge behavior. The flattening discovery rates in each simulation run often occur when UAVs run low on battery around the 20–25 min mark and return to the deployment location to recharge. This happens repeatedly as batteries discharge, sometimes requiring a UAV to fly across the entire area to return to the charging station. More intelligent recharging could further improve the algorithms' results. For example, future iterations of the search algorithms could integrate battery recharging into their designs, potentially triggering an auto-charge whenever a UAV flies within a certain distance of its recharge station while its battery life is below a certain threshold.
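The proposed trigger condition can be sketched as a simple predicate; the 100-m distance and 40% battery thresholds below are assumptions for illustration, not values from the study.

```java
// Sketch of the opportunistic recharge rule suggested above: recharge when a
// UAV passes near its station while its battery is already low. The 100-m
// and 40% thresholds are illustrative assumptions, not values from the study.
public class RechargePolicy {
    static final double NEAR_STATION_M = 100.0; // assumed trigger distance
    static final double LOW_BATTERY = 0.40;     // assumed battery fraction

    static boolean shouldRecharge(double uavX, double uavY,
                                  double stationX, double stationY,
                                  double batteryFraction) {
        double dist = Math.hypot(uavX - stationX, uavY - stationY);
        return dist <= NEAR_STATION_M && batteryFraction <= LOW_BATTERY;
    }

    public static void main(String[] args) {
        System.out.println(shouldRecharge(50, 50, 0, 0, 0.35));   // true: near and low
        System.out.println(shouldRecharge(900, 900, 0, 0, 0.35)); // false: low but far
    }
}
```

A rule of this shape would let a UAV top up opportunistically while passing its station mid-pattern, rather than flying across the entire area after full discharge.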

Applications

This section provides a sample of practical applications for which this research can be leveraged, as well as brief guidance on how to apply the research to actual situations. Although some aspects of this research are experimental, such as the simulated miniaturized FINDER sensor, other aspects, such as the use of a UAV swarm equipped with Wi-Fi and IR cameras, are readily usable today.

At present, the recommended deployment configuration is 5 or 10 commercially available UAVs with parameters as good as or better than those specified in Table 2. Each UAV should be equipped with an infrared (IR) camera and loaded with the software used in this research. Additional work would be necessary to pull data from real, rather than simulated, IR cameras and other sensors.

To accept the data feeds from the UAV swarm, a ground station and/or mobile application could be developed. This application could be designed in many different ways, but the basics could include a top-down graphical map, photo, or blank image of the search environment with a built-in customizable distance scale such as the one used in the DroneLab simulation software. As survivors are located, the operator or operators could tap the screen to indicate their locations. At present, the recognition of humans from camera feeds is a challenging research problem in and of itself. Thus, the rescue personnel could view the data feeds of the various UAVs and mark locations on a shared map. The combination of cooperative UAV swarm, mobile application, and input from rescue personnel would form a viable mode of operation using technology available today.

Types of disasters

The approach described in this research is well suited to earthquake and tsunami disasters, as well as any destructive natural or man-made disaster in which environmental or political conditions make the deployment of rescue vehicles or personnel difficult. These situations include the presence of significant or hazardous debris, inundated terrain, and/or dangerous or hostile conditions. Due to limits in UAV communication range and battery life, the cooperative behavior is optimized over a 2-km² area. Thus, the approach is particularly well suited to environments in which the presence of undiscovered survivors within a particular area is strongly suspected: for example, within cities, towns, villages, or other populated areas.

Civil/defense applications

In addition to civilian search and rescue, this research has a number of applications in both the civil and defense sectors. With much detail omitted, the following is a list of potential applications in which swarms of autonomous cooperating UAVs such as those simulated in this research could be highly valuable:

Intelligence gathering

Combat search and rescue

Smart object location acquisition

Incoming threat detection

Site assessment and map-building

Counter-UAS and counter-swarming

Ethical considerations

A number of ethical considerations surround the use of humanitarian robotics. One such consideration is the fact that swarming algorithms and autonomous robotic systems in general are inherently dual-use. These systems can often be used for civilian or military purposes. Although this research focuses on the use of UAV swarms for humanitarian disaster relief and the defense applications outlined above do not specifically recommend weaponization of this research or technology, such an outcome is possible.

The use of fully autonomous systems in weapons opens the potential for a new type of risk. When implemented on weapons platforms, autonomous systems can select and attack targets in ways that are faster than, and different from, those performed by humans. Due to the potential for unintended collateral damage caused by these systems, the United States Department of Defense does not permit lethal fully autonomous weapons systems at this time (Human Rights Watch 2013). All weapons that include artificial intelligence must also include a human supervisor, or “human-in-the-loop,” for decision-making (US Department of Defense 2012).

In addition to its dual-use nature, other ethical concerns are inherent to humanitarian robotics research. Any time a machine is empowered with the ability to make or influence decisions that affect peoples’ lives, ethics becomes an important factor in system development and deployment (Sandvik et al. 2014 ). When designing a system based on this research, these factors should be among those considered as part of a comprehensive systemic ethics policy.

If an autonomous robot swarm is used to detect and report the locations of survivors, what issues might cause bias in reporting? Computer algorithms are developed by humans and cannot be said to be entirely free of bias and politics (Sandvik et al. 2014 ). Different algorithms, test cases, or detection equipment could create bias in the detection and reporting process.

Certain people or types of people may be reported over others. For example, automated face recognition techniques tend to be more effective on certain ethnic groups (Sharkey 2018 ). If such techniques are used by the swarm system to detect survivors, there is likely to be detection bias.

As a behavior-based approach creating emergent intelligence, how might ethics be examined differently in the case of this research than it would be in a centrally controlled system?

Do behavior-based artificial intelligence systems fall under the same sets of considerations as centrally controlled systems?

Another valid ethical concern in humanitarian robotics is the issue of neutrality. Neutrality can be compromised if UAVs are perceived—even if incorrectly—to be linked to a military or political power that has a stake in a humanitarian crisis (Emery 2016 ). Engagement with the local community is one way to approach this concern. However, with regard to the research described in this article, the UAVs used for this research are commercial or custom quadcopters commonly used by drone hobbyists. These UAV models are not likely to cause tension or misperception as might repurposed military UAVs.

Perceptions of the ethical issues surrounding UAVs also differ in different parts of the world. For example, in Europe and North America, concerns about the use of UAVs tend to include invasion of privacy, misuse by government or law enforcement, and fears of an aviation disaster. However, concerns in the Tana Delta of Kenya, where humanitarian drones were field tested, revolved around practical concerns such as the strength of the UAV’s camera, how far the system could operate, how quickly the drones could be deployed in an emergency, and who would be in physical possession of the system (Boyd 2014 ). Given this knowledge, it is important to consider the concerns of the local communities with regard to humanitarian drones, rather than to superimpose the concerns of aid-providing nations in the mistaken assumption that the concerns are identical.

The results of the study appear to greatly improve the availability of situational awareness data in the first few hours after a major natural disaster, widely considered one of the most critical areas of SAR in need of improvement (Editorial Office of the Ishinomaki Kahoku 2014; Erdelj et al. 2017; Tait Communications 2012; Bartels et al. n.d.; Ochoa and Santos 2015; Shimanski 2005; Adams et al. 2007; Riley and Endsley 2004). Simulation data generated during the study show that a swarm of just five standard UAVs executing the spiral method of cooperative, behavior-based search and rescue developed in this research can consistently achieve 98.8% coverage with the 15-m radius sensor after 4 h, reaching the goal coverage rate of 90% in 90 min. The same swarm of five UAVs consistently achieves 92.5% coverage with the 5-m radius sensor in 4 h, reaching the 90% goal in 3 h. As more robots are added, the numbers improve further. A 10-UAV swarm averages 98.9% standard sensor coverage after 4 h and reaches 90% coverage in only 53 min. Equipped with the more precise 5-m radius sensor, 10 UAVs reach 96.9% coverage after 4 h, reaching the 90% goal in 108 min.

In many simulations, a swarm of 20 UAVs using the spiral method reached the 90% goal in less than 34 min: slightly over half an hour to discover 90% of all visible survivors within a 2-km² area littered with waterlogged fields, damaged structures, fallen trees, and piles of overturned cars.

The spiral method is likely the quickest because it reacts most effectively to groups of survivors. It discovers clusters of survivors more quickly than the other methods through its spiral behavior module, which spirals outward from any area in which more than a certain number of survivors are detected. If different types of data were sought, a different set of behaviors might prove more effective.
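One way to realize the outward-spiral reaction is an Archimedean spiral of waypoints around the detected cluster. This sketch is ours, not DroneLab's actual spiral behavior module, and the loop spacing (set to twice the simulated 15-m camera radius) is an assumption for illustration.

```java
// Illustrative waypoint generator for the outward-spiral reaction: once a
// cluster of survivors is detected, fly an Archimedean spiral out from its
// center. This is a sketch, not DroneLab's actual spiral behavior module.
public class SpiralSketch {
    // Waypoint i on an Archimedean spiral centered at (cx, cy); the radius
    // grows linearly with angle so successive loops stay ~spacing apart.
    static double[] waypoint(double cx, double cy, int i, double spacing) {
        double theta = 0.5 * i;                     // radians advanced per step
        double r = spacing * theta / (2 * Math.PI); // linear radius growth
        return new double[] {cx + r * Math.cos(theta), cy + r * Math.sin(theta)};
    }

    public static void main(String[] args) {
        // Spiral outward from a cluster at (500, 500) with 30 m between
        // loops, i.e., twice the simulated 15-m camera detection radius.
        for (int i = 0; i < 5; i++) {
            double[] p = waypoint(500.0, 500.0, i, 30.0);
            System.out.printf("%.1f, %.1f%n", p[0], p[1]);
        }
    }
}
```

Matching the loop spacing to twice the sensor radius is the design intuition here: adjacent loops of the spiral then tile the cluster area with minimal overlap or gaps.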

Given the strong results of the simulations performed as part of this research, this approach to post-disaster assessment appears promising. Of course, in a real-world situation, the usage and availability of the data discovered by the UAVs is key. Also, although these simulations were designed to model a real environment with some degree of accuracy, the performance will certainly differ in an actual situation. However, this research does show that the use of swarms of UAVs with these algorithms has the potential to make a large amount of critical data available for consumption by rescue workers or other systems of interest. This research demonstrates the potential for high value in the area of disaster data acquisition using swarms of autonomous UAVs.

Change history

01 June 2019

Following publication of the original article [1], the authors reported errors.

Standard parameters such as those of the commercially available DJI Phantom 4 quadcopter or similar model.

Standard sensor coverage for this research is considered to be a 15-m radius detection range.

The UK’s ORCHID Project seeks to create a disaster response system using a swarm of UAVs at a cost of around $2000 each (Kube and Zhang 1992 ).

A Finding Individuals for Disaster and Emergency Response (FINDER) sensor is a sensor developed by the US National Aeronautics and Space Administration (NASA) to aid in disaster search and rescue. A FINDER sensor uses low-power microwaves to detect the heartbeats of buried disaster survivors up to 9 m into a mound of rubble. It has been used to successfully locate survivors in Nepal. A FINDER sensor is currently the size of a carry-on bag and is thus not appropriate for carry by a standard quadcopter. However, simulating how a future miniaturized version of this sensor, or others like it, might perform alongside a standard visual or infrared camera provides an interesting comparison for the purposes of this research.

The DroneLab simulation software, as well as the UAV controlling software, may be available upon request to the (Institution omitted for blind paper submission) or through request to the paper’s author. At the time of this writing, the software is not public domain.

A. E. S. M. Staff Member (2017) Description of events at Arahama during the Tohoku tsunami [Interview].

Adams AL, Schmidt TA, Newgard CD, Federiuk CS, Christie M, Scorvo S, DeFreest M (2007) Search is a time-critical event: when search and rescue missions may become futile. Wilderness and Environmental Medicine 18(2):95–101.


Alley RE (1992) Problems of search and rescue in disasters. In: The management of mass burn casualties and fire disasters. Springer Netherlands, Dordrecht, pp 175–176. http://doi.org/10.1007/978-0-585-33973-3_2

American Red Cross (2015) Drones for disaster response and relief operations.


Arnold RD, Wade JP (2015) A definition of systems thinking: a systems approach. Procedia Computer Science 44:669–678.

Balch T, Arkin RC (1998) Behavior-based formation control for multi-robot teams. IEEE Transactions on Robotics and Automation 14(6):926–939. http://doi.org/10.1109/70.736776 .

Bartels R, Herskovic V, Monares A, Ochoa SF, Pino JA, Roges MR (2010) A simple and portable command post to coordinate search and rescue activities in disaster relief efforts. In: 16th international conference on collaboration and technology. CRIWG, Maastricht.

Boyd D (2014) Humanitarian drones: perceptions vs. reality in Kenya’s Tana Delta. The Sentinel Project.

Brooks RA (1999) Cambrian intelligence: the early history of the new AI. The MIT Press, Cambridge, MA.

Editorial Office of the Ishinomaki Kahoku (2014) Surviving the 2011 tsunami: 100 testimonies of Ishinomaki area survivors of the great East Japan earthquake. Junposha Co., Ltd, Mejirodai, Bunkyo-ku, Tokyo.

Emery JR (2016) The possibilities and pitfalls of humanitarian drones. Ethics and International Affairs 30(2):153–165.

Erdelj M, Natalizio E, Chowdhury KR, Akyildiz IF (2017) Help from the sky: leveraging UAVs for disaster management. IEEE Pervasive Computing, pp 24–32, January–March.

Frelinger D, Kvitky J, Stanley W (1998) Proliferated autonomous weapons: an example of cooperative behavior. RAND Corporation.

Geospatial Information Authority of Japan (2011) Arahama, Sendai on March 12, 2011.

Guarnieri M, Kurazume R, Masuda H, Inoh T, Takita K, Debenest P, Hodoshima R, Fukushima E, Hirose S (2009) HELIOS system: a team of tracked robots for special urban search and rescue operations. In: IEEE/RSJ international conference on intelligent robots and systems. IEEE, St. Louis.

Hambling D (2015) Watch 50 drones controlled at once in record-breaking swarm. New Scientist.

Human Rights Watch (2013) Review of the 2012 US policy on autonomy in weapons systems.

Kennedy J, Eberhart RC, Shi Y (2001) Swarm intelligence. Morgan Kaufmann Publishers, San Francisco, CA.

Koffka K (1922) Perception: an introduction to the Gestalt-Theorie. Psychol Bull 19:531–585.

Kube CR, Zhang H (1992) Collective robotic intelligence. In: Second international conference on simulation and adaptive behavior, Honolulu. MIT Press Cambridge, MA, USA.

Macintyre AG, Barbera JA, Smith ER (2006) Surviving collapsed structure entrapment after earthquakes: a time-to-rescue analysis. Prehospital and Disaster Medicine 21(1):4–19.

Molina P, Pares ME, Colomina I, Vitoria T, Silva PF, Skaloud J, Kornus W, Prades R, Aguilera C (2012) Drones to the rescue! Unmanned aerial search missions based on thermal imaging and reliable navigation. Inside GNSS, pp 38–47, July–August.

Municipal Development Policy Bureau (2017) Ruins of the great East Japan earthquake: Sendai Arahama elementary school. United Nations Office for Disaster Risk Reduction, Sendai City.

Ochoa S, Santos R (2015) Human-centric wireless sensor networks to improve information availability during urban search and rescue activities. Information Fusion, pp 71–84.

Pinciroli C, Beltrame G (2016) Buzz: an extensible programming language for heterogeneous swarm robotics. In: 2016 IEEE/RSJ international conference on intelligent robots and systems (IROS). IEEE, pp 3794–3800. https://doi.org/10.1109/IROS.2016.7759558

Post-Disaster Reconstruction Bureau (2015) Reconstruction of Sendai. In: Third UN conference on disaster risk reduction. United Nations Office for Disaster Risk Reduction, Sendai City.

Ramchurn SD, Wu F, Fischer JE, Reece S, Jiang W, Roberts SJ, Rodden T, Jennings NR (2016) Human-agent collaboration for disaster response. Journal of Autonomous Agents and Multi-Agent Systems 30(1):82–111. http://doi.org/10.1007/s10458-015-9286-4 .

Riley JM, Endsley MR (2004) The hunt for situational awareness: human-robot interaction in search and rescue. In: Proceedings of the human factors and ergonomics society annual meeting.

Sandvik KB et al. (2014) Humanitarian technology: a critical research agenda. International Review of the Red Cross 96(893):219–242.

Sato Y (2015) Museums and the great East Japan earthquake. Sendai Miyagi Museum Alliance, Sendai City.

Sharkey N (2018) The impact of gender and race bias in AI. Humanitarian Law and Policy.

Shimanski C (2005) Situational awareness in search and rescue operations. In: International technical rescue symposium.

Tait Communications (2012) Race against time: emergency response - preventing escalating chaos in a disaster. Tait Limited.

The Center for Remembering 3.11 (2015) Activity report of the center for remembering 3.11. In: Third UN world conference on disaster risk reduction, Sendai City.

Tohoku Regional Development Association (2015) Tohoku regional development association earthquake disaster response: march 11th, 2011 the great East Japan earthquake. In: Third UN world conference on disaster risk reduction. Sendai City, Japan.

US Department of Defense (2012) Autonomy in weapons systems. Directive Number 3000.09.

Vásárhelyi G, Virágh C, Somorjai G, Tarcai N, Szörényi T, Nepusz T, Vicsek T (2014) Outdoor flocking and formation flight with autonomous aerial robots. In: IEEE/RSJ international conference on intelligent robots and systems. IEEE, Chicago.

Virágh C, Vásárhelyi G, Tarcai N, Szörényi T, Somorjai G, Nepusz T, Vicsek T (2014) Flocking algorithm for autonomous flying robots. Bioinspiration & biomimetics 9(2):025012.

Washington State Department of Transportation (1997) Visual search patterns, pp 177–191.

Wiener N (1948) Cybernetics: or control and communication in the animal and machine. The MIT Press, Cambridge, MA.

Williams M (2015) Researchers envisage swarms of tiny drones for dangerous rescue missions. PCWorld.


Acknowledgements

Not applicable.

Funding was provided by the Japan Acquisition, Logistics and Technology Agency and the United States Department of Defense. The authors are employees of these agencies.

Availability of data and materials

Please contact the author for data requests.

Author information

Authors and affiliations

United States Army Armament, Research, Development, and Engineering Center (US Army ARDEC), United States Department of Defense, Picatinny, NJ, USA

Ross D. Arnold

Acquisition Logistics and Technology Agency (ATLA), Japan Ministry of Defense, Tokyo, Japan

Hiroyuki Yamaguchi & Toshiyuki Tanaka


Contributions

RA carried out the research, wrote the research software, and wrote the manuscript. HY conceived of the study and participated in the research reviews and coordination. TT advised the research, participated in the design of the study, provided the background materials for the research, and participated in the research reviews and coordination. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Ross D. Arnold.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

Cite this article.

Arnold, R.D., Yamaguchi, H. & Tanaka, T. Search and rescue with autonomous flying robots through behavior-based cooperative intelligence. Int J Humanitarian Action 3 , 18 (2018). https://doi.org/10.1186/s41018-018-0045-4


Received : 07 August 2018

Accepted : 10 October 2018

Published : 05 December 2018

DOI : https://doi.org/10.1186/s41018-018-0045-4


Keywords

  • Algorithm design
  • Behavior-based artificial intelligence
  • Disaster recovery
  • Drone swarms
  • Multi-robot systems
  • Post-disaster assessment
  • Rescue robots
  • Search and rescue
  • Swarm intelligence
  • Unmanned autonomous vehicles



The Application of Unmanned Aerial Systems in Search and Rescue Activities


A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Earth Observation for Emergency Management".

Deadline for manuscript submissions: closed (30 September 2023) | Viewed by 26802


Dear Colleagues,

Recent advances in search and rescue (SAR) activities include the operational use of unmanned aerial vehicles (UAVs), known as drones. Although UAVs are commonly used in SAR missions, there is a growing demand for the development of methods for analysing drone-acquired data in an unsupervised fashion. This Special Issue focuses both on methodical papers on how data analysis during an SAR mission can be automated and on all aspects of the use of drones in SAR activities. New findings and recommendations in the field of drone-based SAR missions may facilitate searches and increase the probability of saving lives.

We are pleased to invite you to submit manuscripts on the use of UAVs in SAR activities as well as on new methods or approaches that make the applicability of drones in SAR more effective. We welcome theoretical contributions as well as field reports. Manuscripts focusing on terrestrial and marine environments are invited.

This Special Issue aims to broaden the knowledge about the use of unmanned aerial technologies in SAR services and to report on the recent progress of methods and procedures developed in this field.

In this Special Issue, original research articles and reviews are welcome. Research areas may include (but are not limited to) the following:

  • Special unmanned aerial systems dedicated for search and rescue;
  • Use of consumer-grade drones in search and rescue;
  • Close-range photogrammetry in search and rescue;
  • Algorithms for person detection and tracking;
  • Special software for search and rescue with drones;
  • Field reports on the operational use of drones in search and rescue missions;
  • Reports from field experiments;
  • Drone payload use in search and rescue;
  • Terrain assessment with drones (e.g., snow/avalanche evaluation for rescuers);
  • Weather assessment with drones (e.g., wind evaluation for rescuers).

We look forward to receiving your contributions.

Prof. Dr. Tomasz Niedzielski and Dr. Daniele Giordan, Guest Editors


Keywords:

  • unmanned aerial vehicle
  • search and rescue
  • aerial monitoring

Published Papers (3 papers)



How Drones Are Revolutionizing Search and Rescue

In recent years, the field of search and rescue has undergone a remarkable transformation, driven by the integration of drone technology. These unmanned aerial vehicles have become indispensable tools for emergency responders, offering a new perspective on disaster scenes and significantly enhancing the efficiency of rescue operations. Drones have proven their worth in a variety of challenging environments, from dense forests to urban landscapes ravaged by natural disasters.

The ability of drones to access hard-to-reach areas quickly and safely has revolutionized how search and rescue teams operate. They can cover vast expanses of terrain in a fraction of the time it would take ground-based teams, providing real-time aerial imagery and data that is crucial for coordinating rescue efforts. This bird's-eye view allows rescuers to identify potential hazards, locate survivors, and plan the most effective routes for ground teams.

Moreover, drones equipped with thermal imaging cameras have become game-changers in locating missing persons, especially in low-visibility conditions or at night. These advanced sensors can detect heat signatures of survivors, even when they are hidden from plain sight, dramatically increasing the chances of successful rescues. The integration of artificial intelligence and machine learning algorithms has further enhanced the capabilities of these aerial assistants, enabling them to autonomously identify objects of interest and alert human operators to potential sightings of survivors or hazards.

As drone technology continues to advance, its impact on search and rescue operations is only expected to grow. From delivering essential supplies to victims in isolated areas to creating detailed 3D maps of disaster zones, drones are proving to be versatile and invaluable assets in emergency response scenarios. Their ability to operate in hazardous environments without risking human lives has made them an essential component of modern search and rescue strategies, ushering in a new era of more efficient, effective, and safer emergency response operations.

Understanding Drone Swarm Technology

Drone swarm technology represents a significant leap forward in the field of unmanned aerial systems. This innovative approach involves the coordinated operation of multiple drones working together as a cohesive unit, much like a swarm of insects in nature. The concept of drone swarms and swarm behavior has captured the imagination of researchers, military strategists, and emergency response planners alike, offering unprecedented capabilities in various applications, including search and rescue operations.

At its core, drone swarm technology is about creating a network of interconnected drones that can communicate with each other and work collaboratively towards a common goal. This system leverages the power of collective intelligence, allowing the swarm to perform tasks that would be impossible or impractical for a single drone. The swarm operates based on a set of predefined rules and algorithms that govern its behavior, enabling it to adapt to changing environments and make decisions autonomously.

The functioning of drone swarm technology relies on several key principles. First and foremost is the concept of decentralized control. Unlike traditional drone systems where a single operator controls each unit, swarm drones operate with a degree of autonomy. Each drone in the swarm is equipped with sensors and processing capabilities that allow it to perceive its environment, communicate with its neighbors, and make decisions based on the collective information gathered by the swarm.

Communication is another crucial aspect of how drone swarms work. The drones in a swarm constantly exchange information about their position, speed, and sensor data. This continuous flow of information enables the swarm to maintain formation, avoid collisions, and coordinate their actions effectively. Advanced communication protocols ensure that this data exchange happens in real-time, allowing the swarm to respond quickly to changing situations.
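The decentralized coordination described above is often modeled on classic flocking rules, where each drone steers using only the position and velocity its neighbors broadcast. The sketch below is a minimal, illustrative implementation (the state fields, weights, and separation radius are assumptions, not taken from any particular swarm system):

```python
from dataclasses import dataclass

@dataclass
class DroneState:
    x: float; y: float      # position reported by a neighboring drone
    vx: float; vy: float    # velocity reported by that drone

def steer(me, neighbors, sep_radius=5.0):
    """Classic decentralized flocking: separation, alignment, cohesion.
    Returns a steering vector computed purely from local information."""
    if not neighbors:
        return (0.0, 0.0)
    n = len(neighbors)
    # Cohesion: steer toward the neighbors' average position.
    cx = sum(d.x for d in neighbors) / n - me.x
    cy = sum(d.y for d in neighbors) / n - me.y
    # Alignment: match the neighbors' average velocity.
    ax = sum(d.vx for d in neighbors) / n - me.vx
    ay = sum(d.vy for d in neighbors) / n - me.vy
    # Separation: push away from any drone closer than sep_radius.
    sx = sy = 0.0
    for d in neighbors:
        dx, dy = me.x - d.x, me.y - d.y
        if (dx * dx + dy * dy) ** 0.5 < sep_radius:
            sx += dx; sy += dy
    # Separation is weighted highest so collision avoidance dominates.
    return (cx + ax + 2.0 * sx, cy + ay + 2.0 * sy)
```

With only these three local rules, a distant neighbor pulls the drone toward the group while a too-close neighbor pushes it away, which is the essence of collision-free formation keeping.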

The key components of drone swarm systems include the individual drones themselves, which are typically smaller and lighter than traditional unmanned aerial vehicles. These drones are equipped with a range of sensors, including cameras, GPS, and potentially specialized equipment like thermal imaging or chemical detection sensors. The onboard computers of each drone run complex algorithms that enable swarm behavior, decision-making, and task allocation.

Another critical component is the swarm intelligence software that coordinates the actions of the individual drones. This software implements the rules and algorithms that govern swarm behavior, allowing the drones to function as a cohesive unit. Additionally, many drone swarm systems include a ground control station that provides overall mission parameters and allows human operators to monitor and, if necessary, intervene in the swarm's operations.

Advancements in drone swarm technology have been rapid and diverse. One significant area of progress has been in the development of more sophisticated swarm algorithms. These algorithms have evolved to handle increasingly complex tasks and environments, allowing swarms to navigate obstacles, execute dynamic formations, and even self-heal by redistributing tasks if individual drones are lost or damaged.
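The "self-healing" behavior mentioned above can be sketched as a re-allocation step run whenever the swarm detects a lost member. This is a hypothetical, centralized illustration; real swarms typically negotiate the hand-off in a distributed fashion:

```python
def redistribute(sector_owner, drones, failed):
    """Reassign orphaned sectors to the surviving drones with the
    lightest workload. sector_owner maps sector id -> drone id."""
    survivors = [d for d in drones if d not in failed]
    workload = {d: [] for d in survivors}
    for sector, owner in sector_owner.items():
        if owner not in workload:  # owner was lost or damaged
            owner = min(workload, key=lambda d: len(workload[d]))
        workload[owner].append(sector)
    return workload
```

Every sector stays covered after a failure, at the cost of a heavier load on the survivors.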

Another notable advancement has been in the miniaturization of drone technology. As drones become smaller and lighter, it has become feasible to deploy larger swarms, increasing the overall capability and flexibility of the system. Improvements in battery technology and energy efficiency have also extended the operational range and duration of drone swarms, making them more suitable for prolonged missions.

The integration of artificial intelligence and machine learning has pushed the boundaries of what drone swarms can achieve. These technologies enable swarms to learn from their experiences, optimize their behavior over time, and even predict and preemptively respond to potential scenarios. This level of autonomy and adaptability makes drone swarms increasingly valuable in dynamic and unpredictable environments, such as disaster zones.

Furthermore, advancements in inter-drone communication have led to more robust and resilient swarm networks. New protocols allow for faster data exchange and better coordination, even in environments with limited or disrupted communication channels. This improved communication capability enhances the swarm's ability to operate in challenging conditions, such as urban environments or areas with electromagnetic interference.

As drone swarm technology continues to evolve, its potential applications in various fields, including search and rescue, environmental monitoring, and even space exploration, are expanding. The ability of these swarms to cover large areas quickly, adapt to changing conditions, and perform complex tasks collaboratively positions them as a transformative technology with far-reaching implications for the future of aerial operations.

Applications of Drone Swarms in Disaster Management

The application of drone technology, particularly in the form of autonomous drones operating in swarms, has revolutionized disaster management strategies. These advanced systems offer unprecedented capabilities in responding to natural disasters, conducting urban search and rescue operations, mapping affected areas, and assessing structural damage. The versatility and efficiency of drone swarms make them invaluable assets in the critical hours and days following a disaster.

In the context of natural disaster response, drone swarms provide a rapid and comprehensive overview of the affected area. When a hurricane, earthquake, or flood strikes, the first challenge responders face is understanding the scale and nature of the damage. Drone swarms can be deployed quickly to survey large areas, providing real-time imagery and data that help emergency managers make informed decisions. These autonomous drones can navigate through dangerous or inaccessible terrain, reaching areas that might be too risky for human responders to access immediately.

For instance, in the aftermath of a major earthquake, a swarm of drones can be dispatched to assess the extent of damage across an entire city. Each drone in the swarm can be assigned a specific sector to survey, with the collective data providing a comprehensive map of destroyed buildings, blocked roads, and potential hazards like gas leaks or fires. This information is crucial for prioritizing response efforts and allocating resources effectively. The ability of drone swarms to cover vast areas quickly can significantly reduce the time it takes to gather critical intelligence, potentially saving lives in the process.
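Sector assignment of this kind can be as simple as cutting the survey bounding box into a grid and dealing sectors out round-robin. The following sketch is illustrative only (the function names and the round-robin policy are assumptions):

```python
def grid_sectors(xmin, ymin, xmax, ymax, rows, cols):
    """Split a survey bounding box into rows*cols rectangular sectors,
    each given as (xmin, ymin, xmax, ymax)."""
    w, h = (xmax - xmin) / cols, (ymax - ymin) / rows
    return [(xmin + c * w, ymin + r * h, xmin + (c + 1) * w, ymin + (r + 1) * h)
            for r in range(rows) for c in range(cols)]

def assign_round_robin(sectors, drone_ids):
    """Deal sectors out like cards: drone i surveys every
    len(drone_ids)-th sector starting at index i."""
    return {d: sectors[i::len(drone_ids)] for i, d in enumerate(drone_ids)}
```

Real deployments would weight sectors by terrain difficulty or damage likelihood rather than assigning them uniformly, but the partition-then-assign structure is the same.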

Urban search and rescue operations benefit greatly from the capabilities of drone swarms. In densely populated areas affected by disasters, locating and reaching survivors can be extremely challenging. Drone swarms can systematically search through rubble and debris, using thermal cameras and other sensors to detect signs of life. The collective power of multiple drones working in coordination allows for a more thorough and rapid search than what could be achieved by individual drones or ground-based teams alone.

These swarms can navigate through narrow spaces between collapsed structures, fly into buildings through broken windows, and even enter partially submerged areas in flood scenarios. As they search, the drones can relay information about the location of survivors, the structural integrity of buildings, and potential rescue routes to ground teams. This real-time data enables rescuers to plan their approach more effectively and safely, increasing the chances of successful rescue operations.

Mapping disaster-affected areas is another critical application where drone swarms excel. In the chaotic aftermath of a disaster, traditional maps quickly become obsolete as the landscape changes dramatically. Drone swarms can rapidly create updated, high-resolution maps of the affected area. Using advanced imaging technologies and photogrammetry techniques, these drones can generate detailed 3D models of the terrain and structures.

The process involves each drone in the swarm capturing images from different angles and altitudes. These images are then combined and processed to create comprehensive, accurate maps. The speed at which drone swarms can accomplish this task is unparalleled – what might take weeks using traditional surveying methods can be achieved in hours or days with a well-coordinated drone swarm. These up-to-date maps are invaluable for emergency responders, providing crucial information for navigation, identifying safe routes for evacuation, and planning the distribution of aid.

Assessing structural damage is a vital aspect of post-disaster management, and here too, drone swarms prove their worth. In the wake of earthquakes, hurricanes, or other disasters that can compromise building integrity, it's essential to quickly determine which structures are safe and which pose imminent danger. Drone swarms equipped with high-resolution cameras and specialized sensors can perform rapid, detailed inspections of buildings and infrastructure.

These autonomous drones can fly around and even inside damaged structures, capturing images and data from multiple angles. Using advanced image processing and machine learning algorithms, the swarm can analyze this data to identify cracks, deformations, or other signs of structural weakness. This information is crucial for making decisions about which buildings need to be evacuated, which can be repaired, and which must be demolished.

Moreover, drone swarms can continually monitor at-risk structures over time, detecting any progressive deterioration that might not be immediately apparent. This ongoing assessment is particularly valuable in scenarios where aftershocks or continuing environmental stresses (like flooding) might further compromise already damaged buildings.

The application of drone technology, especially in the form of autonomous drone swarms, has significantly enhanced the capabilities of disaster management teams. From providing immediate situational awareness to conducting detailed structural assessments, these systems offer a level of speed, safety, and comprehensiveness that was previously unattainable. As the technology continues to evolve, the role of drone swarms in disaster management is likely to become even more central, further improving the ability to respond effectively to natural and man-made disasters.

Enhancing Search and Rescue Efficiency with Drone Swarms

The integration of drone technology into search and rescue operations has marked a significant leap forward in emergency response efforts. Particularly, the use of drone swarms has revolutionized the way search and rescue missions are conducted, offering unprecedented levels of efficiency, coverage, and effectiveness. These coordinated groups of autonomous flying machines bring a host of advantages that dramatically improve the chances of locating and rescuing survivors in various challenging scenarios.

Drone swarms significantly enhance search and rescue operations through their ability to cover vast areas quickly and thoroughly. Unlike single drones or traditional search methods, a swarm can divide a large search area into smaller sections, with each drone responsible for a specific zone. This parallel processing approach allows for a comprehensive sweep of the terrain in a fraction of the time it would take using conventional methods. The swarm's collective intelligence enables it to adapt its search pattern based on real-time data, focusing more resources on areas of higher probability or interest.

Moreover, the redundancy inherent in swarm systems adds a layer of reliability to search operations. If one drone malfunctions or loses power, the others can compensate, ensuring continuous coverage of the search area. This flexibility is particularly crucial in time-sensitive rescue scenarios where every minute counts.

The superiority of drone swarms over single drones in search and rescue lies in their collective capabilities. While a single drone can provide valuable aerial perspective, it is limited in its coverage area and the types of sensors it can carry. A swarm, on the other hand, can deploy a diverse array of sensors across multiple units. Some drones might be equipped with high-resolution cameras, others with thermal imaging sensors, and still others with more specialized equipment like chemical detectors or audio sensors. This multi-modal approach to data gathering provides a more comprehensive understanding of the search area and increases the likelihood of detecting survivors under various conditions.

The distributed nature of a swarm also allows for real-time data fusion and analysis. As each drone gathers information, it can be immediately shared and processed by the swarm's collective intelligence. This enables rapid identification of patterns or anomalies that might be missed by a single drone or human operator. For instance, if one drone detects a faint heat signature, nearby drones can quickly converge on the area to provide additional perspectives and confirmation, all without the need for direct human intervention.

One of the most critical factors in any rescue operation is response time, and this is an area where drone swarms excel. The ability to deploy multiple drones simultaneously allows for near-immediate coverage of large areas. In scenarios such as wilderness searches or urban disaster responses, where time is of the essence, this rapid deployment can make the difference between life and death.

The swarm's ability to quickly establish a comprehensive aerial view of the situation enables rescue coordinators to make informed decisions rapidly. They can identify the most promising areas for ground team deployment, spot potential hazards or obstacles, and create efficient search patterns based on real-time data. This synergy between aerial swarms and ground teams significantly reduces the time it takes to locate and reach survivors.

Moreover, drone swarms can operate continuously, with individual units returning to base for recharging while others maintain the search effort. This continuous operation ensures that there are no gaps in coverage, maintaining a constant vigil over the search area until the mission is complete.
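Continuous coverage through recharge rotation can be expressed as a simple battery-threshold policy. The threshold and data shapes below are illustrative assumptions:

```python
def rotate(battery, reserve=0.25):
    """Split the swarm into active searchers and drones sent home to
    recharge. battery maps drone id -> charge level in [0, 1]; any
    drone below the reserve threshold is rotated out."""
    to_recharge = sorted(d for d, b in battery.items() if b < reserve)
    active = sorted(d for d in battery if battery[d] >= reserve)
    return active, to_recharge
```

Run on each control cycle, this keeps a subset of the swarm on station while depleted units cycle through the charging base.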

The role of drone swarms in locating survivors is perhaps their most crucial function in search and rescue operations. These systems employ a variety of sophisticated technologies to detect signs of life in even the most challenging conditions. Thermal imaging cameras can spot body heat signatures, even when survivors are hidden under rubble or dense foliage. Advanced audio sensors can pick up faint sounds or cries for help that might be inaudible to human ears or ground-based equipment.

The swarm's collective intelligence comes into play here as well. By correlating data from multiple drones and sensor types, the system can differentiate between false positives and genuine signs of survivors. For example, if a thermal signature is detected, drones equipped with visual cameras can provide confirmation, while others might use specialized sensors to check for human-specific indicators like CO2 emissions.
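The cross-sensor confirmation described above amounts to fusing per-sensor confidences into a single decision. A deliberately naive weighted-sum sketch follows; the weights and threshold are illustrative, not field-tuned values:

```python
def confirm_survivor(detections, threshold=0.7):
    """detections maps sensor name -> confidence in [0, 1]. A candidate
    location is confirmed only when the combined weighted evidence
    from multiple sensor types crosses the threshold."""
    weights = {"thermal": 0.5, "visual": 0.3, "audio": 0.2}
    score = sum(weights[s] * c for s, c in detections.items() if s in weights)
    return score >= threshold
```

Because no single sensor can reach the threshold alone, a lone thermal hit triggers follow-up by camera- or audio-equipped drones rather than an immediate alert, which is exactly the false-positive filtering the swarm performs.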

In urban environments, where GPS signals might be unreliable due to interference from buildings, drone swarms can use advanced localization techniques to maintain accurate positioning. This precision is crucial for pinpointing the exact location of survivors and guiding rescue teams efficiently.

The adaptability of drone swarms also plays a significant role in their effectiveness. They can adjust their search patterns based on the type of terrain, weather conditions, or specific search parameters. In a forest fire scenario, for instance, the swarm can dynamically avoid dangerous areas while focusing on locations where survivors are most likely to seek refuge.

As drone technology continues to advance, the capabilities of these swarms in search and rescue operations are only expected to grow. Improvements in artificial intelligence, sensor technology, and swarm coordination algorithms will further enhance their ability to operate autonomously in complex environments, making them an increasingly indispensable tool in emergency response efforts.

The integration of drone swarms into search and rescue operations represents a paradigm shift in how these critical missions are conducted. By leveraging the power of multiple, coordinated drones, rescue teams can cover more ground, gather more detailed information, and respond more quickly to emergencies. This technology not only increases the efficiency of search and rescue operations but also significantly improves the chances of successful outcomes, ultimately saving more lives in the process.

Technological Advancements in Drone Swarm Systems

The field of swarm drones has seen remarkable technological advancements in recent years, driven by innovations in artificial intelligence, machine learning, materials science, and energy storage. These developments have significantly enhanced the capabilities of drone swarms, making them increasingly valuable in various applications, including search and rescue and even military operations.

One of the most crucial areas of advancement is in the AI algorithms that power drone swarm decision-making. These algorithms are the brain of the swarm, enabling individual drones to work together as a cohesive unit. Recent developments in swarm intelligence have led to more sophisticated decision-making processes that allow drones to adapt to complex and dynamic environments autonomously.

For instance, advanced path-planning algorithms now enable swarms to navigate through cluttered or obstacle-rich environments more efficiently. These algorithms consider not just the immediate surroundings of each drone but also the collective knowledge of the entire swarm. This allows for real-time optimization of flight paths, ensuring that the swarm can cover an area thoroughly while avoiding collisions and minimizing energy consumption.

Another significant development is in the area of task allocation algorithms. These AI systems can dynamically assign roles to individual drones within the swarm based on their capabilities, position, and the current mission requirements. For example, in a search and rescue scenario, drones equipped with thermal cameras might be automatically assigned to areas where survivors are more likely to be found, while drones with long-range communication capabilities might position themselves to relay information back to the command center.
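Capability-aware task allocation of this sort can be illustrated with a greedy nearest-capable-drone rule. This is a sketch under assumed data shapes; production allocators typically solve the assignment as an optimization problem:

```python
def allocate(tasks, drones):
    """tasks: id -> (required_sensor, (x, y)); drones: id -> (sensor, (x, y)).
    Greedily send each task to the nearest free drone carrying the
    required sensor, using Manhattan distance as the cost."""
    free = dict(drones)
    plan = {}
    for tid, (needed, (tx, ty)) in tasks.items():
        capable = [(abs(x - tx) + abs(y - ty), did)
                   for did, (sensor, (x, y)) in free.items() if sensor == needed]
        if capable:
            _, best = min(capable)  # closest capable drone wins
            plan[tid] = best
            del free[best]          # each drone takes at most one task
    return plan
```

A drone without the required sensor is never assigned, mirroring the text's example of routing thermal-camera units to likely survivor locations while communication relays hold position elsewhere.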

Machine learning has played a pivotal role in improving drone swarm capabilities. By analyzing vast amounts of data from previous missions and simulations, machine learning algorithms can identify patterns and strategies that lead to more effective swarm behavior. This has led to the development of adaptive swarm systems that can learn and improve their performance over time.

One area where machine learning has made significant strides is in image recognition and object detection. Swarm drones equipped with cameras can now process visual data in real-time, identifying objects of interest with high accuracy. In military operations, this could mean detecting and tracking potential threats, while in search and rescue missions, it could involve identifying signs of survivors or assessing structural damage.

Moreover, machine learning algorithms have enhanced the swarm's ability to operate in GPS-denied environments. By learning to recognize visual landmarks and correlate them with internal sensor data, drones can maintain accurate positioning even when GPS signals are unavailable or unreliable. This is particularly valuable in urban environments or indoor spaces where traditional navigation methods may fail.

The development of new materials has also played a crucial role in enhancing drone swarm performance. Lightweight yet durable composites have allowed for the creation of drones that are more resilient to impacts and environmental stresses while remaining highly maneuverable. These advanced materials contribute to longer flight times and increased payload capacities, both of which are critical factors in the effectiveness of swarm operations.

Nano-materials have been incorporated into drone designs to improve their structural integrity and reduce weight. For instance, carbon nanotubes and graphene-based composites are being used to create stronger, lighter airframes. These materials not only enhance the drones' performance but also increase their survivability in harsh conditions, making them more suitable for challenging environments often encountered in military operations and disaster response scenarios.

Additionally, advancements in smart materials have led to the development of adaptive structures that can change shape or properties in response to environmental stimuli. This technology allows drones to optimize their aerodynamics in real-time, improving efficiency and extending operational range.

Battery technologies are evolving rapidly to meet the demands of longer drone swarm operations. The limited flight time of drones has long been a significant constraint, but recent advancements are pushing the boundaries of what's possible. High-density lithium-ion batteries have become more efficient and lighter, providing increased energy storage without adding significant weight to the drones.

Researchers are also exploring alternative energy sources to complement or replace traditional batteries. Solar cells integrated into drone wings can extend flight times by harvesting energy during operation. For longer missions, hydrogen fuel cells are being considered as a potential power source, offering the possibility of multi-hour flight times.

Another promising development is in wireless charging technology. This allows drones to recharge their batteries without landing, potentially by hovering over charging stations strategically placed in the operation area. Such innovations could enable truly continuous swarm operations, with individual drones taking turns to recharge while the swarm as a whole remains active.

The integration of edge computing capabilities into swarm drones has significantly enhanced their ability to process data on-board. This reduces the need for constant communication with a central command center, allowing swarms to operate more autonomously and efficiently. Edge computing also enables faster decision-making, as drones can process sensor data and make tactical decisions in real-time without relying on external systems.

In the context of military operations, swarm drones have seen particularly rapid advancement. Militaries around the world are investing heavily in this technology, recognizing its potential to revolutionize warfare. Swarm drones can be used for reconnaissance, providing real-time intelligence over vast areas. They can also be employed in offensive operations, overwhelming enemy defenses through sheer numbers and coordinated tactics.

The development of swarming algorithms specifically tailored for military applications has led to more sophisticated evasion and attack patterns. These systems can adapt to enemy countermeasures, making them highly effective in contested environments. Additionally, the low cost of individual drones in a swarm compared to traditional military aircraft makes them an attractive option for large-scale deployments.

However, the use of swarm drones in military contexts raises significant ethical and strategic questions. The potential for autonomous weapon systems capable of making lethal decisions without human intervention is a topic of intense debate in international forums.

As swarm drone technology continues to advance, we can expect to see even more sophisticated systems emerge. Future developments may include improved inter-swarm communication, allowing multiple swarms to coordinate over vast distances, and the integration of quantum sensors for ultra-precise navigation and detection capabilities.

The rapid pace of technological advancements in drone swarm systems is transforming various fields, from disaster response to military operations. As these technologies mature, they promise to offer unprecedented capabilities in terms of area coverage, data gathering, and autonomous decision-making. However, their development also brings challenges, particularly in terms of ethical use and regulatory frameworks, which will need to evolve alongside the technology to ensure responsible deployment of these powerful systems.

Ethical and Legal Considerations

The rapid advancement and deployment of drone swarm technology, while offering immense potential benefits, also raises a host of ethical issues and legal challenges. As these systems become more autonomous and capable, it is crucial to address the ethical values and legal frameworks that should govern their use, particularly in sensitive areas such as disaster management and search and rescue operations.

One of the primary ethical concerns surrounding the use of drone swarms in disaster management is the potential invasion of privacy. During emergency situations, these swarms can collect vast amounts of data, including high-resolution imagery and video footage of affected areas. While this information is invaluable for coordinating rescue efforts, it also raises questions about the privacy rights of individuals caught in these sweeps. There's a delicate balance to strike between the need for comprehensive situational awareness and the protection of personal privacy, especially when footage might capture people in vulnerable or compromising situations.

Moreover, the storage, handling, and potential misuse of this collected data present additional ethical challenges. Questions arise about who has access to this information, how long it should be retained, and what safeguards are in place to prevent its misuse. There's a risk that data collected for emergency purposes could be repurposed for surveillance or other activities that infringe on civil liberties.

Another significant ethical consideration is the potential for autonomous decision-making by drone swarms. As these systems become more advanced, they may be tasked with making critical choices in emergency situations. For instance, in a search and rescue scenario, a drone swarm might need to prioritize which areas to search first or even decide which individuals to assist based on the likelihood of survival. These decisions, traditionally made by human responders, carry enormous ethical weight. The algorithms governing such choices must be carefully designed to align with human values and ethical principles, ensuring that life-and-death decisions are not made solely by machines without human oversight.
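As a concrete illustration of why such prioritization algorithms need explicit human oversight, a hypothetical triage policy might score candidate search areas automatically but leave the final ordering to an operator. The field names, weights, and 72-hour urgency window below are illustrative assumptions for this sketch, not taken from any deployed system:

```python
from dataclasses import dataclass

@dataclass
class SearchArea:
    name: str
    p_survivors: float         # estimated probability survivors are present (0-1)
    access_risk: float         # risk to responders/drones in this area (0-1)
    time_since_event_h: float  # hours elapsed since the disaster struck

def priority_score(area: SearchArea, w_p=0.6, w_urgency=0.3, w_risk=0.1) -> float:
    # Survival odds decay with time; a simple assumed linear urgency term
    # based on the commonly cited "72-hour window".
    urgency = max(0.0, 1.0 - area.time_since_event_h / 72.0)
    return w_p * area.p_survivors + w_urgency * urgency - w_risk * area.access_risk

def rank_areas(areas, approve=lambda ranking: ranking):
    # The swarm only *proposes* an ordering; a human operator's `approve`
    # callback can reorder or veto the list before tasks are dispatched.
    ranking = sorted(areas, key=priority_score, reverse=True)
    return approve(ranking)

areas = [
    SearchArea("collapsed school", 0.8, 0.4, 6.0),
    SearchArea("flooded underpass", 0.3, 0.7, 6.0),
    SearchArea("damaged apartment block", 0.6, 0.2, 6.0),
]
for a in rank_areas(areas):
    print(a.name, round(priority_score(a), 3))
```

The point of the `approve` hook is that the machine-generated ranking is advisory: the life-and-death choice of where to search first remains with a human.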

The use of drone swarms also raises questions of accountability and responsibility. In the event of a malfunction or an incorrect decision that leads to harm, it can be challenging to determine who is responsible – the operators, the manufacturers, or the programmers of the AI systems guiding the swarms. This ambiguity in accountability could lead to situations where no one takes responsibility for negative outcomes, potentially eroding public trust in these technologies.

From a legal standpoint, the deployment of drone swarms operates in a rapidly evolving and often ambiguous regulatory environment. Many existing laws and regulations were not designed with the capabilities of drone swarms in mind, leading to gaps and inconsistencies in legal frameworks. For instance, current aviation regulations in many countries are not adequately equipped to handle the complexities of coordinated multi-drone operations, especially in urban environments or during emergencies.

Privacy laws also struggle to keep pace with the capabilities of drone swarms. The ability of these systems to gather vast amounts of data quickly and efficiently challenges existing notions of reasonable expectations of privacy. Legal frameworks need to be updated to address issues such as incidental data collection, consent in emergency situations, and the appropriate use and retention of data gathered during disaster response operations.

International law presents another layer of complexity, particularly when drone swarms are used in cross-border disaster response efforts. Questions arise about jurisdiction, data sharing between countries, and the applicability of different national laws to the operation of these systems. There's a need for international cooperation and standardization to ensure that drone swarms can be effectively deployed in global humanitarian efforts without running afoul of varying national regulations.

Addressing these ethical and legal challenges requires a multi-faceted approach. First, there's a need for robust public discourse and stakeholder engagement to identify and prioritize the ethical values that should guide the development and deployment of drone swarm technology. This process should involve not just technologists and policymakers, but also ethicists, privacy advocates, and representatives from communities likely to be affected by these systems.

Secondly, the development of comprehensive and flexible legal frameworks is crucial. These frameworks should be designed to balance the benefits of drone swarm technology with the need to protect individual rights and societal values. They should address issues of privacy, data protection, accountability, and safety while remaining adaptable enough to accommodate rapid technological advancements.

Thirdly, there's a need for increased transparency and oversight in the development and deployment of drone swarms. This could involve the creation of independent review boards to assess the ethical implications of new drone swarm applications, particularly in sensitive areas like disaster response.

Furthermore, incorporating ethical considerations into the design process of drone swarm systems is essential. This approach, often referred to as "ethics by design," involves building ethical safeguards and decision-making processes into the core functionality of these systems. For example, drone swarms could be programmed with strict data minimization protocols, collecting only the information necessary for their mission and automatically deleting or anonymizing sensitive data.
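A minimal sketch of what such data-minimization and retention safeguards could look like in code follows. The retained field names, salted-hash pseudonymisation, and 72-hour retention window are assumptions chosen for illustration only:

```python
import hashlib
import time

RETENTION_SECONDS = 72 * 3600  # assumed mission-level retention window

def minimise(record: dict) -> dict:
    """Keep only mission-essential fields; pseudonymise direct identifiers."""
    keep = {"lat", "lon", "timestamp", "detection_type", "confidence"}
    out = {k: v for k, v in record.items() if k in keep}
    if "face_id" in record:
        # Replace a biometric identifier with a truncated salted hash so the
        # record can still be cross-referenced without storing the raw ID.
        out["subject_ref"] = hashlib.sha256(
            ("mission-salt:" + record["face_id"]).encode()).hexdigest()[:16]
    return out

def purge_expired(store: list, now=None) -> list:
    """Drop records older than the retention window."""
    now = time.time() if now is None else now
    return [r for r in store if now - r["timestamp"] <= RETENTION_SECONDS]
```

In an "ethics by design" system, `minimise` would run on-board before any data leaves the drone, and `purge_expired` would run automatically rather than relying on operators to remember to delete footage.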

Education and training for operators and decision-makers involved in deploying drone swarms is also crucial. This should include not just technical training but also education on the ethical implications and legal responsibilities associated with these systems.

Finally, international cooperation and standardization efforts are needed to create a cohesive global approach to the ethical and legal challenges posed by drone swarms. This could involve the development of international guidelines or conventions governing the use of these technologies in various contexts, including disaster response and humanitarian operations.

The advent of drone swarm technology marks a significant leap forward in the ability to respond to disasters, conduct search and rescue operations, and manage complex emergency scenarios. Throughout this exploration of drone swarms, we have seen how these systems are revolutionizing various aspects of disaster management and emergency response.

From providing rapid situational awareness in the immediate aftermath of a disaster to conducting thorough searches over vast areas, drone swarms offer capabilities that were previously unattainable. Their ability to work collaboratively, sharing information and adapting to changing conditions in real-time, represents a paradigm shift in how we approach emergency operations.

The technological advancements driving these systems are impressive and rapidly evolving. Improvements in AI algorithms, machine learning capabilities, materials science, and energy storage are continually expanding the potential applications and effectiveness of drone swarms. As these technologies mature, we can expect even more sophisticated and capable systems to emerge, further enhancing human ability to respond to crises and save lives.

However, as with any powerful technology, the development and deployment of drone swarms come with significant ethical and legal considerations. Balancing the immense potential benefits with concerns over privacy, autonomy, and accountability will be crucial in ensuring that these systems are used responsibly and effectively.

Cooperative Search and Rescue with Drone Swarm

  • Conference paper
  • First Online: 01 January 2024

  • Luiz Giacomossi,
  • Marcos R. O. A. Maximo,
  • Nils Sundelius,
  • Peter Funk,
  • José F. B. Brancalion &
  • Rickard Sohlberg

Part of the book series: Lecture Notes in Mechanical Engineering ((LNME))

Included in the following conference series:

  • International Congress and Workshop on Industrial AI

Unmanned Aerial Vehicle (UAV) swarms, also known as drone swarms, have been a subject of extensive research due to their potential to enhance monitoring, surveillance, and search missions. Coordinating several drones flying simultaneously presents a challenge in increasing their level of automation and intelligence to improve strategic organization. To address this challenge, we propose a solution that uses hill climbing, potential fields, and search strategies in conjunction with a probability map to coordinate a UAV swarm. The UAVs are autonomous and equipped with distributed intelligence to facilitate a cooperative search application. Our results demonstrate the effectiveness of the swarm, indicating that this is a promising approach to the problem.
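The coordination scheme the abstract describes (hill climbing over a probability map, with potential fields keeping UAVs spread apart) can be sketched in a few lines. This is a toy illustration under assumed scoring rules, not the paper's actual algorithm:

```python
def step(grid, pos, others, repulse=0.3):
    """Move one UAV to the neighbouring cell with the best score:
    the cell's target probability minus a potential-field repulsion
    penalty that grows as teammates get closer."""
    rows, cols = len(grid), len(grid[0])
    x, y = pos
    best, best_score = pos, -float("inf")
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            nx, ny = x + dx, y + dy
            if not (0 <= nx < rows and 0 <= ny < cols):
                continue
            penalty = sum(repulse / (1 + abs(nx - ox) + abs(ny - oy))
                          for ox, oy in others)
            score = grid[nx][ny] - penalty
            if score > best_score:
                best, best_score = (nx, ny), score
    return best

# One drone hill-climbing a 3x3 probability map with no teammates nearby:
grid = [[0.1, 0.2, 0.3],
        [0.2, 0.4, 0.6],
        [0.3, 0.6, 0.9]]
pos = (0, 0)
pos = step(grid, pos, others=[])
pos = step(grid, pos, others=[])
print(pos)  # after two greedy steps the drone reaches the peak at (2, 2)
```

In a full system the probability map would be updated as cells are searched, and each UAV would pass its position to the others so the repulsion term drives the swarm to cover different regions.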

Acknowledgements

Luiz Giacomossi acknowledges Embraer S.A. for his scholarship. Marcos Maximo is partially funded by CNPq (National Research Council of Brazil) through grant 307525/2022-8. The authors are also grateful to Embraer and to Vinnova, Sweden's innovation agency, for supporting and funding this research.

Authors and affiliations

Luiz Giacomossi and Marcos R. O. A. Maximo: Autonomous Computational Systems Lab (LAB-SCA), Aeronautics Institute of Technology (ITA), São José dos Campos, Brazil

Nils Sundelius, Peter Funk and Rickard Sohlberg: Mälardalen University (MDU), Västerås, Sweden

José F. B. Brancalion: Technological Development Department, EMBRAER S.A., São José dos Campos, Brazil

Corresponding author: Luiz Giacomossi.


Cite this paper

Giacomossi, L., Maximo, M.R.O.A., Sundelius, N., Funk, P., Brancalion, J.F.B., Sohlberg, R. (2024). Cooperative Search and Rescue with Drone Swarm. In: Kumar, U., Karim, R., Galar, D., Kour, R. (eds) International Congress and Workshop on Industrial AI and eMaintenance 2023. IAI 2023. Lecture Notes in Mechanical Engineering. Springer, Cham. https://doi.org/10.1007/978-3-031-39619-9_28

Print ISBN: 978-3-031-39618-2

Online ISBN: 978-3-031-39619-9



DOI: 10.14569/IJACSA.2023.0141145

IoT-based Autonomous Search and Rescue Drone for Precision Firefighting and Disaster Management

Shubeeksh Kumaran, V Aditya Raj, Sangeetha J and V R Monish Raman

International Journal of Advanced Computer Science and Applications (IJACSA), Volume 14, Issue 11, 2023.


Abstract: Disaster management is a line of work that deals with people's lives; such work requires utmost precision, accuracy, and tough decision-making in critical situations. Our research aims to utilize Internet of Things (IoT)-based autonomous drones to provide rescue personnel, firefighters, and police officers with detailed situational awareness and assessment of these dangerous areas. The research involves the integration of four systems with our drone, each capable of tackling a situation the drone may encounter. As recognizing and protecting civilians is a key aspect of disaster management, our first system, the Enhanced Human Identification System, detects trapped victims and provides rescue personnel with the identity of the located human; it also leverages Enhanced Deep Super-Resolution Network (EDSR) x4-based upscaling to improve the image of the located human. The second system, the Fire Extinguishing System, is equipped with an inbuilt fire extinguisher and a webcam to detect and put out fires at disaster sites, ensuring the safety of both trapped civilians and rescue personnel. The third system, the Active Obstacle Avoidance System, ensures the safety of the drone as well as any civilians it encounters by detecting obstacles around its pre-defined path and preventing collisions. The final system, the Air Quality and Temperature Monitoring System, provides situational awareness to rescue personnel by accurately analyzing the area's safety levels and informing the rescue force whether to take precautions, such as wearing a fire proximity suit in high temperatures, or to try a different approach to managing the disaster.
With these integrated systems, autonomous surveillance drones will greatly improve autonomous Search and Rescue (SAR) operations, as every aspect of our approach considers both rescuers and victims in a disaster region.

Shubeeksh Kumaran, V Aditya Raj, Sangeetha J and V R Monish Raman, "IoT-based Autonomous Search and Rescue Drone for Precision Firefighting and Disaster Management", International Journal of Advanced Computer Science and Applications (IJACSA), 14(11), 2023. http://dx.doi.org/10.14569/IJACSA.2023.0141145


Copyright Statement: This is an open access article licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.



Managing the drone revolution: A systematic literature review into the current use of airborne drones and future strategic directions for their effective control

Rico Merkert, James Bushell


Received 2020 Apr 30; Revised 2020 Jun 19; Accepted 2020 Aug 30; Issue date 2020 Oct.


Commercial and private deployment of airborne drones is revolutionising many ecosystems. Our systematic literature review, conducted to identify critical issues and research gaps, finds that historic issues such as privacy, acceptance and security are increasingly replaced by operational considerations, including interaction with and impacts on other airspace users. Recent incidents show that unrestricted drone use can inflict problems on other airspace users such as airports and emergency services. Our review of current regulatory approaches shows a need for further policy and management responses, both to manage rapid and efficient growth in drone usage and to facilitate innovation (e.g. intra-urban package delivery), with one promising strategic response being low altitude airspace management (LAAM) systems for all drone use cases.

Keywords: Future of drones, Unmanned aerial vehicles (UAVs), Low altitude airspace management (LAAM), Air traffic control, Strategy, Innovation, Systematic literature review

Highlights:

  • Historic issues such as privacy, acceptance and security are increasingly replaced by operational and strategy considerations.
  • The literature on drones is wide and not significantly concentrated in any particular source, author or institution.
  • Drone usage can be categorised into four uses: monitoring/inspection/data collection, photography, recreation and logistics.
  • Low altitude airspace management (LAAM) is a strategic response for all drone use cases.

1. Introduction

Remote technology and automation have been present for centuries, giving human operators safety from harm and enabling new task functionality (increasing the capability of individual operations and the capacity of the system). Early examples include fireships, an early maritime drone, which navies used to destroy other ships remotely. In World Wars I and II, airborne drones were used to disrupt airspace above cities, drop ordnance on enemy territory, and serve as target practice for pilots. Railways have for some time used drone (non-crewed) locomotives to support driver-occupied locomotives.

While drones have had a long history in military deployment, their increasingly widespread use in non-military roles requires consideration (e.g., Hodgkinson and Johnston, 2018). Though current usage is limited while the technology is in the development phase, drones possess significant versatility and may transform the way logistics services are provided. Their use will no doubt lead to the achievement of new business, social, environmental and other goals (Atwater, 2015). However, it also creates a potentially disruptive scenario if their usage expands out of control, causing problems for other parts of the economic system, as illustrated in the rapidly growing literature presented in this paper.

Interestingly, during the COVID-19 crisis drone potential has been further harnessed, using the people-free nature of the technology to modify service delivery and improve safety and capacity levels, including the delivery of face masks to remote islands in Korea and of prescription medicines from pharmacies to retirement villages in Florida. It could be argued that COVID-19 has accelerated technological advancement in many areas and that drones perhaps represent a revolution in how we transport goods and potentially even ourselves (though that is analysis for a future paper).

In that sense, it is important to note that the use of drones in larger commercial applications is also growing (see, e.g. Bartsch et al., 2016), with their deployment in remote work leading to significant cost reductions and capability enhancements (such as in mining, engineering and transport network management contexts and agricultural scanning). Their ability to view large areas at low cost from altitude provides new viewing aspects and new data acquisition ability (or existing data can be sourced at large scale at lower cost) to make decisions and manage operations more effectively. Similarly, airborne photography has entered a new stage of development, with operators both large and small able to give consumers new imagery that had previously been in the domain of birds only. In addition, the recent spurt in retail sales of drones for recreational and small-scale commercial purposes has pushed airborne drones into the entertainment space.

However, there is a range of other potential uses. Experience in delivering medical supplies in remote African areas gives a potential preview of drones' role in urban parcel/package delivery, radically changing the way small deliveries are made in urban areas. Commercial and policymaking efforts are turning to contemplate this future and how airborne drones may need control in such uses. This may have significant impacts, not only on delivery cost but on urban congestion and traffic management issues, should they replace land-based journeys. In urban areas, implementation issues will arise that require consideration, given the greater risks involved.

While there have been earlier reviews (e.g. a techno-ethical one, Luppicini and So, 2016), the commercial use of drones is yet to be written about in any significant volume in the management literature. Preliminary issues like privacy/security received the required attention, given the potential for drones to peer (visually or aurally, and intentionally or not) into areas that were previously easy to guard. With increased use, the focus has moved to the engineering literature, where a range of computer, materials and design issues are being discussed. Recently, the management literature has begun to present case studies of how drones are used in current commercial contexts and, more importantly, to consider the broader role that drones may play in the logistics industry. What is missing, in our view, is a clear understanding of where to go next, given that increased use cases and traffic volumes might not only significantly disturb other airspace users but also bring the drone ecosystem itself to a standstill (an uncontrolled chaos scenario). We aim to investigate whether the emerging body of literature can provide sufficient answers and solutions, or at least trending ideas, on how to provide drone use with a framework that allows this evolving industry to continue growing at a rapid pace while innovatively disrupting traditional business models in an economically ordered and safe manner.

This paper reviews the extant literature on the potential implementation of drones into the economic system and specifically how that implementation and ongoing use may be managed. Section 2 outlines our methodology for conducting the systematic review. Section 3 then presents our bibliometric results, discusses the issues being reported in the literature and highlights the four main use cases for drones (based on a content analysis of the reviewed papers). Section 4 examines current regulatory steps, and Section 5 concludes with some discussion and identification of future research avenues, including the need for greater regulation of the drone ecosystem at the macro level and the potential of low altitude airspace management (LAAM) systems.

2. Methodology

Originally developed in the medical literature, the systematic literature review (SLR) has been used as a methodology in a range of management papers. In the transport literature it has been deployed in areas such as supply chain (e.g. Perera et al., 2018) and aviation management (e.g. Ginieis et al., 2012; Spasojevic et al., 2018). Whilst not a strict laboratory-controlled study (Ginieis et al., 2012), SLRs give researchers and practitioners a flavour of the extent and coverage of the literature, and some vision as to where and by whom it is being generated and what it covers.

Drones have received literary attention for some time, primarily in the legal/ethical, engineering and computer science fields. For this paper, we have focussed on the management literature, given our interest in investigating drone management and related issues. Importantly, we ignore any military/defence use of drones to focus only on civilian applications. While ground-based and maritime drones are also present in the literature (Pathak et al., 2019), the term ‘drones’ is now widely understood to refer to airborne ones, upon which we focus.

For our search, we developed a search string in Scopus composed of a keyword search for ‘drone*’. We added synonyms like ‘unmanned aviation’, ‘unmanned aircraft*’, ‘unmanned aerial vehicle*’, ‘UAV’, or ‘remotely piloted aircraft*’, which yielded 65,953 documents. We then restricted the results to the Scopus allocated subject areas of ‘Business, Management and Accounting’ (which includes a variety of areas such as innovation, strategy or logistics and supply chain management) or ‘Economics, Econometrics and Finance’ (yielding 1567 documents). Further, we restricted results to articles (published and in press), conference papers or book chapters (1133 documents), and we restricted the search to articles published in the last five years only, since the beginning of 2015 (519 documents). Finally, we limited results to the English language (505 documents).
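The layered search described above can be reconstructed programmatically, which makes the filtering steps reproducible. The Scopus advanced-search field codes used below (TITLE-ABS-KEY, SUBJAREA, DOCTYPE, PUBYEAR, LANGUAGE) are indicative of that query language but should be checked against current Scopus documentation before reuse:

```python
# Keyword block: the review's search terms, quoted when they contain spaces.
terms = ["drone*", "unmanned aviation", "unmanned aircraft*",
         "unmanned aerial vehicle*", "UAV", "remotely piloted aircraft*"]
keyword_block = " OR ".join(f'"{t}"' if " " in t else t for t in terms)

# Field restrictions layered on top, mirroring the stepwise narrowing in the
# text: subject areas, document types, publication years, and language.
query = (
    f"TITLE-ABS-KEY({keyword_block}) "
    "AND (SUBJAREA(BUSI) OR SUBJAREA(ECON)) "
    "AND (DOCTYPE(ar) OR DOCTYPE(cp) OR DOCTYPE(ch)) "
    "AND PUBYEAR > 2014 AND LANGUAGE(english)"
)
print(query)
```

Building the query as data rather than typing it by hand also makes it easy to rerun the search later with updated year bounds.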

Using Covidence (an online tool that speeds up document review by structuring the screening workflow and supporting collaborative review), we analysed and filtered these articles, excluding papers for a variety of reasons. Initial screening showed that for a substantial portion of the papers, drones were not the core focus but merely an enabling device for the key topic, such as strategies for disseminating technology products into the construction sector (Sepasgozar et al., 2018). Where drones were more significant, some articles were operationally (e.g. Zhou et al., 2018) or engineering focussed (e.g. Chen et al., 2017), with no substantial management consideration. Other articles were excluded as irrelevant, including other uses of the word ‘drone/s’ (e.g. bees or employees) or UAVs (e.g. corporate finance terms). Articles without full text were also eliminated. Article content was further reviewed through Covidence, and the final sample of 133 articles was derived. Results were then analysed with Excel and Bibexcel (Persson et al., 2009).

The identified papers comprise a population of different paper types. Some represent operational use case studies. Others are engineering focussed but contemplate future management endeavours. There are papers written from other (non-drone) perspectives that provide useful insight into drone deployment more generally. In addition to the bibliographic results, we found the use cases of drones to be a worthy area for discussion, as well as the current issues being experienced, which have expanded past historic issues to cover new ones that had not previously been encountered.

3.1. Bibliographic results

The following are selected results of our review. As illustrated in Fig. 1 , publications related to drone management (including case studies of their use) have been increasing.

Fig. 1. Publication year.

Table 1 provides a summary of the publication sources of our 133 reviewed drone papers. What is evident is that a few sources account for a significant number of publications on drone management in the investigation period, followed by a very long tail of single-publication sources. Table 1 also demonstrates that drone management remains heavily rooted in the technology and engineering literature. However, other types of journals are present to cover specific drone issues (e.g. security and mining reclamation). As the management of drones appears to be very much about micro-level rather than macro-level management, it is perhaps natural that technology, engineering and related literature are the major publication areas for drones to date.

Table 1. Listed publication sources.

Publications  Sources
6  Journal of Advanced Transportation; Technology in Society
4  ENR (Engineering News Record); International Journal of Recent Technology and Engineering
3  Applied Geography; Computer Law & Security Review; ICTC 2019 - 10th International Conference on ICT Convergence: ICT Convergence Leading the Autonomous Future; International Journal of Intelligent Unmanned Systems; Journal of Humanitarian Logistics and Supply Chain Management; Knowledge-Based Systems; Science and Engineering Ethics; Studies in Systems, Decision and Control; Technology Analysis and Strategic Management
2  11 unique sources (including the Journal of Air Transport Management)
1  64 unique sources

In terms of author contribution and potential thought leadership, there are 408 unique authors of the analysed papers, representing a wide and varied set of contributors. Of these, one has produced five publications (Hwang, J), one has produced four (Liu, Y), three have produced three (Abaffy, L; Kim, H; Zhang, X) and 16 have produced two. Aside from a number of author pair or group combinations in clearly linked publications from the same research activity, there does not appear to be any significant grouping/clustering of authors, as is evident in systematic reviews of other topics.

As with authors, contributing institutions are wide and varied in range, with those making three or more contributions shown in Table 2. Again, a long tail of institutional contribution is present, with some institutions having more concentrated contributions. Note that for these institutions, contribution may be planned but is more often unplanned, with different faculties (e.g. engineering and health) making independent, uncoordinated contributions to the literature. Inspection of the contributing departments reveals substantial contribution from engineering and computer science disciplines or institutes of that nature.

Table 2. Institution contribution.

Affiliated publications  Institutions
6  Sejong University
4  Beihang University; Griffith University; Monash University
3  Kyung Hee University; University of Guelph; University of Massachusetts Amherst

Country contribution is shown in Table 3. The US, China, Australia and South Korea are significant contributors. Continentally (see Table 3), while Asia and North America are significant (to be expected based on the country results), the diverse efforts of European countries are also evident given Europe's substantial contribution.

Table 3. Top 10 contributing countries and regions.

Country  Affiliated publications  |  Region  Affiliated publications
USA 58 Europe/USA 58
China 25 East Asia 27
Australia 22 China 25
South Korea 21 Australasia 22
India 17 South Asia 19
Italy 11 Europe related 9
France 10 Middle East, North America, SE Asia 6
Germany 8 South America 4
United Kingdom 7 Africa 1
Canada 6

Our analysis of author keywords (543 in total) revealed similarly wide and varied results, reflecting the wide range of research contexts, as shown in Table 4. Making allowance for similar keywords (e.g. drone delivery and drone-delivery), 442 unique keywords were identified. After excluding keywords used only once (386 keywords) and 91 drone-referential keywords that are not descriptive of an issue (e.g. drone, drones, UAV, UAVs, unmanned aerial vehicles), the remaining keywords were identified multiple times. Key issues relating to privacy, security, acceptance and management are evident.

Table 4. Keyword analysis.

Count  Keyword
16  Drone delivery (including food, parcel or generic)
9  Path planning/routing; Privacy
5  Regulation
4  Routing
3  Behavioural intentions; Logistics; Ethics
2  Age; Attitude; Gender; Data protection; Humanitarian logistics; Policy; Lifecycle; Obstacle avoidance; Surveillance; Desire; Disaster management; Optimisation; Monitoring; Multi-UAVs
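The keyword consolidation described above (merging variants such as ‘drone delivery’ and ‘drone-delivery’ and excluding drone-referential terms) can be sketched as follows; the normalisation rules and sample keywords are illustrative assumptions, not the authors' exact procedure:

```python
from collections import Counter

# Hedged sketch of author-keyword consolidation: normalise spelling variants,
# drop drone-referential terms, and count the rest.
def normalise(kw: str) -> str:
    kw = kw.lower().replace("-", " ").strip()
    return " ".join(kw.split())

drone_referential = {"drone", "drones", "uav", "uavs", "unmanned aerial vehicles"}

raw_keywords = ["Drone delivery", "drone-delivery", "Privacy", "UAV", "privacy"]
counts = Counter(
    k for k in map(normalise, raw_keywords) if k not in drone_referential
)
print(counts.most_common())
```

With the sample input, both variants of ‘drone delivery’ collapse into one keyword counted twice, and ‘UAV’ is excluded as drone-referential.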

We note that papers from earlier in the period focus on conceptual issues such as privacy and security, standing as a warning for industry to ensure that these concerns are addressed and that policy makers will be alert to them. However, coinciding with greater usage and the opportunity to study it, papers later in the date range show a clear trend towards consideration of the more commercial aspects of drone adoption, including how drones are operated and used.

For example, the keyword ‘privacy’ appears in 2016 (four articles), 2017 (two articles) and 2019 (three articles). ‘Regulation’ appears in 2016 (two articles), 2018 (one article) and 2019 (two articles). The keyword ‘ethics’ appears in 2016 (one article) and 2019 (two articles). However, ‘drone delivery’ is top of mind in the research community, closely followed by how drones are going to navigate their way around. Of the drone delivery keywords, 13 (more than 80 percent) were published in 2019, indicating a rather recent focus in the literature, which is consistent with the drone use case discussion presented in section 3.3.

3.2. Present and emerging issues in civilian drone usage results

In this section, we discuss some of the content of these papers. Operating in new spaces, in a third (vertical) dimension and in proximity to other users, drone use is expected to have a significant impact on quality of life, health, and social and economic well-being (Kyrkou et al., 2019). However, this potential disruption will, being a technological development (Kwon et al., 2017), create issues and problems that require management to minimise negative impacts (as well as to maximise positive potential). Notably, however, our review indicates that these security, privacy and acceptance concerns, whilst significant and relevant, are not as dominant as they have been in previous periods, with the use of drones in various ecosystems providing an opportunity for researchers to examine their introduction and their impact on those with whom they interact.

Security management remains a critical issue. Invasion (intentional or not) of sensitive airspaces, such as airports (Boselli et al., 2017) and power stations (Solodov et al., 2018), can and does cause costly disruption (e.g. the near-total closure of Gatwick Airport and the disruption to fire and emergency services work in Tasmania in 2018). Safety is a perennial issue, though automation may support improved physical safety outcomes (Torens et al., 2018). Privacy issues remain a concern, particularly for drones that can capture imagery, especially those used close to private personal space such as homes and apartments (Daly, 2017; Aydin, 2019), or as drones are used in new ways, including research approaches (Resnik and Elliott, 2018). Drone users, particularly recreational ones, often lack an understanding of the privacy requirements to which they are subject (Finn and Wright, 2016); therefore, a regulatory response is likely to be required. Ethical issues around the use of drones for surveillance purposes are also present (West and Bowman, 2016). Other amenity issues, such as the impact of noise, are also under consideration (e.g. Chang and Li, 2018).

Acceptance of drones by the public therefore remains an issue, though some parts of the community are more accepting than others (Anania et al., 2019; Sakiyama et al., 2017; Rengarajan et al., 2017). Some literature (e.g. Boucher, 2016; Khan et al., 2019) notes that an outcome of this acceptance debate is that drones are being developed to be accepted, designed to earn public acceptance rather than enforce it, showing the role that ‘social license’ (Gunningham et al., 2004) plays in the acceptance debate. Drones require societal trust (Nelson and Gorichanaz, 2019). The demilitarisation of drones has facilitated trust (Boucher, 2015), and positive media attention to non-controversial use cases has been shown to have a positive impact on acceptance (Freeman and Freeland, 2016).

The first stages of research into specific consumer reaction to drones have begun to bear fruit. Studies have shown how media positioning frames consumer and public responses to drone technology (Tham et al., 2017). Recent work indicates that consumers may respond positively to drones. The technological aspects of drones have been identified to form a relationship with consumers through changing perceptions of risk, functional benefits and relational attributes (Ramadan et al., 2017). Drones provide a psychological benefit to consumers and generate positive intentions to use drones (Hwang et al., 2019a). Perceptions of environmental benefits suggest favourable consumer perceptions of drone use (Hwang et al., 2019b). A study of motivated consumer innovations suggests that dimensions of functional, hedonic and social motivatedness are key drivers of attitudes towards consumption using drones (Hwang et al., 2019c). Innovativeness is noted as an attraction of drone food delivery services for consumers, with younger and female consumers more likely to be attracted by drones (Hwang et al., 2019d). Managing perceived risks associated with drone deliveries is a necessary task for foodservice delivery operators (Hwang et al., 2019e). In marketing, aerial drone photography is being well received by target audiences, who respond positively to its inclusion in campaigns/advertisements given its cognitive stimulation (Royo-Vela and Black, 2018). Use of drone imagery in this manner is, therefore, expanding (Stankov et al., 2019).

Operational management issues have begun to come to the fore, with some studies beginning to examine drone maintenance regimes (Martinetti et al., 2018), battery life management/charging and efficient performance characteristics (Goss et al., 2017; Pinto et al., 2019). Importantly, with the move towards logistics, other questions are being raised, including how to optimise delivery strategies (e.g. El-Adle et al., 2019). Initial analysis indicates that combined truck and drone delivery systems are a more efficient method of logistics delivery than current approaches (Ferrandez et al., 2016; Chung, 2018; Carlsson and Song, 2017; Liu et al., 2018; Wang et al., 2019). However, serial delivery systems may be more efficient still (Sharvarani et al., 2019b), and overall delivery considerations need further analysis, such as preparation times, which differ between truck and drone delivery (Swanson, 2019). Further research in different urban contexts may yield different results (e.g. dense urban areas with higher density and shorter trip distances). Take-off and landing management processes (Gupta et al., 2019; Papa, 2018a, 2018b) and ground handling operations (Meincke et al., 2018) are also evident in the literature. Using longer-range drones for civilian purposes is beginning to be discussed (more so for remotely piloted drones than automated ones) (Tatham et al., 2017a), and the development of specific, commercial drone aviation parks for large drones has been completed (Abaffy, 2015a, 2015b).

Initial strategic impacts are receiving attention in the literature. Drones are driving entrepreneurial activity (Giones and Brem, 2017). Magistretti and Dell'Era (2019) show that operators use four main types of technology development strategies when using drones: focus (adding drones to current operations), depth (expanding current operations more fully), breadth (expanding operations across new offerings) and holistic (developing wholly new operations or approaches). Both Kim et al. (2016) and Meunier and Bellais (2019) note that drone technology leads to spillover effects in other sectors. Hypotheses about the societal impacts of future drone issues are also being advanced (Rao et al., 2016), and the use of drones in extra-terrestrial environments is also contemplated (Pergola and Cipolla, 2016; Roma, 2017).

In the next section, we analyse drone use through several revealed use cases.

3.3. Primary use cases

A valuable part of our review, and a key finding, is our contribution to understanding how drones are deployed. A large proportion of the reviewed articles are (usage) case studies rather than systematic analyses of an issue. Through these papers, we can highlight four primary categories of use: monitoring/inspection and data acquisition, photography, logistics (including passenger transport), and recreation. Even accounting for the lag between events and their academic publication, we consider the categories below reflective of unpublished but current use types.

3.3.1. Monitoring, inspection and data collection

With lower capital costs and greater capabilities, drones can capture existing data in new ways, or capture uncollected data for new analysis. Industrial users are taking advantage of the new opportunities being offered by the technology to do things in new ways, for the same or better outcome.

Network management businesses, e.g. pipelines or energy transmission (Li et al., 2018), road maintenance (Abaffy, 2015a, 2015b) and railway operation (Vong et al., 2018), have replaced costly inspection teams with drones. Some inspection drones have real-time analysis capability and quickly report issues and objects for investigation back to base rather than involving separate analysis stages. These users mainly deploy drones over their specific network geographies (within a set distance from the network line); however, in positioning to and from their inspection areas, they may traverse open airspace. These network geographies are often in public spaces, and given that powerlines (and sometimes rail/road networks) are placed over private properties via easements, management of drone airspace use is important.

Agricultural (and related) industries are inquisitive when it comes to learning more about the land they manage and have naturally looked to drone technology to capture new information (Weersink et al., 2018). Farming has a recent history of using satellite information to identify crop health issues, using the data collected to target the application of fertilisers and pesticides more efficiently. More recently, drones have acquired this information (Na et al., 2017). This has financial implications, but also environmental ones, as reduced inputs lead to reduced negative impacts for the same output. Similarly, mining operations have used drones to remotely manage and optimise different elements of their production process (Wendland and Boxnick, 2017), including monitoring stockpiles of ore and leaching pads for maintenance issues and analysing blast ore before its processing (Bamford et al., 2017), accessing waterbodies in hazardous/remote locations to facilitate sampling for environmental management (Banerjee et al., 2018; Langhammer et al., 2018) and imaging mines for rehabilitation (Moudry et al., 2019). The construction industry uses drones to survey construction sites more cheaply than other means (such as helicopters) and at lower risk to staff (Abaffy and Sawyer, 2016; Li and Liu, 2019), and hazardous industrial plants use drones to monitor gas production (Kovacs et al., 2019). Importantly for all of these industries, use of drones takes place largely in the airspace above the mining or farming areas and may have minimal impact on other users (notwithstanding that mining and farming areas are generally quite distant from urban areas).

Drones are also used by government and regulatory agencies for surveillance and compliance monitoring. The technology has, for instance, been used in New South Wales to monitor land clearing, both to ensure that permits are complied with and to check whether illegal land clearing has taken place. In hard-to-access areas, air pollution monitoring has been undertaken with drones (Alvear et al., 2017). Drones were used to assess urban damage in the aftermath of floods, hurricanes and even the 2011 Fukushima nuclear reactor disaster (Hultquist et al., 2017). Drones are also used to assess rehabilitation performance for compliance (Johansen et al., 2019) and have recently seen use in shark monitoring trials at beaches. Emergency services are making more use of drone technology. While some of this use overlaps with logistics (refer below), using drones in search and rescue is a logical move to increase the capability of rescue activities (Lygouras et al., 2017; Kamlofsky et al., 2018). Despite the disruptive potential noted above, the monitoring use of drones is useful to fire management (Athanasis et al., 2019) and surf lifesaving (Lygouras et al., 2017) teams. Drones also see use in humanitarian relief (Bravo et al., 2019; Carli et al., 2019). The use of drones for security monitoring is also increasing (Anania et al., 2018; Sakiyama et al., 2017). Sensitive but large-area enterprises such as forestry or solar cell farms can monitor and inspect remotely with drones (Xi et al., 2018; Saadat and Sharif, 2017). These uses are often performed over public and private property and therefore impact a range of other users. However, they are also supported by regulatory requirements and often undertaken for public purposes, and so might be more accepted by the general public.

3.3.2. Photography/image collection

Photography is another special form of data acquisition. While monitoring/inspection uses by industry might also employ photographic means, there the purpose is to convert visual imagery into data to support decision making. Using photos solely for their aesthetic value, however, has become an important use of drones in its own right, mainly for personal use (such as documenting a person's special event), but also increasingly for commercial use such as sporting events or marketing campaigns (e.g. Royo-Vela and Black, 2018; Stankov et al., 2019). Being able to fly has been a dream of (some) humans since time immemorial, and use of drones to capture imagery from a bird's-eye view is attracting substantial interest from some quarters.

Use of drones for this purpose is somewhat ad hoc and, in a large number of cases, involves the use of public space as users document their weddings, family events, naturescapes or other occasions (either themselves or through a commercial operator). However, some uses (e.g. farmers photographing their farm operations) take place entirely over the drone operator's own property, and some of the aforementioned events happen over public but remote land that is not intensively used, unlike urban public land. For sporting events, such as football matches, golf tournaments and car races, use is largely confined to the space above the event and closely managed by the event manager to maximise the photographic potential of the event and avoid disruption.

3.3.3. Recreation

Drones as recreation is a new use, though it mimics pastimes like remote-controlled cars, which have provided people with entertainment for many decades. The explosion of recreational use shows how popular the phenomenon is, as people take advantage of the third dimension for leisure, which for a long time was a luxury enjoyed only by those who could fly (in various forms) or partake in risky sports. Drones are being used, for example, in tourism activities (Song and Ko, 2017), and there are even competitive drone racing tournaments (Barin et al., 2017). Drones are also being used in three-dimensional art installations to generate linked visual structures with no purpose other than entertainment (China Global Television Network, 2019).

The expansion into recreational space is perhaps linked to the increasing acceptance of drone technology, as the public becomes more familiar with it and begins considering potential uses. Most recreational use is over public spaces such as parks, with some of it in non-urban areas conducted over farmland and naturescapes (whether or not owned by the drone operator), though it is limited by the low complexity of the drones available for this purpose.

3.3.4. Logistics

Perhaps most interesting, and most in need of management consideration, is the use of drones for logistics purposes. In its very early days, this use case has perhaps the most significant potential for disruption. Current discussion contemplates that their use will enhance supply chain efficiency and effectiveness (Druehl et al., 2018). Indeed, inside warehouses, logistics firms are already using drones to manage inventories (Xu et al., 2018). Externally, drones have so far been used for medical supply (Prasad et al., 2018; Tatham et al., 2017b) and organ deliveries (Balakrishnan et al., 2016) in different contexts, but with trials for aerial pesticide application (Zheng et al., 2019) and food deliveries currently underway, their use in broader delivery services (e.g. Drone-as-a-Service; Asma et al., 2017; Kang and Jeon, 2016; Shahzaad et al., 2019) may lead to substantial shifts in delivery service execution. Prospective applications also include postage/package delivery, with interest being shown by major logistics firms (Connolly, 2016), and the potential for other drone-facilitated household services (e.g. dry-cleaning collection/delivery). But we are sure that this is just the tip of the iceberg of opportunity for drones in the logistics space. Indeed, personal logistics (i.e. transporting humans) is also a goal of some operators (Lee et al., 2019), which would call for significant regulatory oversight (especially of safety). Large-scale industrial applications are also being investigated (Damiani et al., 2015). The list of potential uses is extensive, and the development of drones in this way is likely to be revolutionary; however, initial findings suggest that such services may only be feasible in congested urban areas (Yoo and Chankov, 2018).

The above use classes show the wide spectrum across which drones are used. Clearly, both the literature and observation of trends outside the literature show that these uses will expand. Several questions in many contexts are open for academic exploration at this time, and we present a few that are of interest to us here (the specific areas for further research on our paper's topic are discussed at the end of this paper). In the future logistics space, an important question we believe will arise is who owns drone fleets. Will drones be owned by individuals (as with mobile phones and private cars), or will they be owned by fleet management/delivery companies and used in an on-demand manner (as is common in traditional wet-leased air freight operations; e.g. Merkert et al., 2017)? A drone premium is likely to be chargeable given the convenience and time-saving factors, but who will ultimately pay this premium? Will it be added to the delivery cost of goods and services (as in the current postage cost model), or will goods providers use drones for competitive advantage and absorb the cost as part of their cost structure (offsetting delivery cost savings)?

But the key question on our minds for the remainder of this paper is the management of the significant volume of traffic that these movements will create. Increased and increasing use will be more invasive of airspace than current usage, which, if not managed appropriately and in line with community standards (within the license to operate), may lead to rejection of the technology and the benefits it is purported to bring.

4. Managing the drone revolution – current regulatory approaches

We have alluded to the specific issues that drones will present above. Solodov et al. (2018) describe a range of particular drone threats, in the forms of surveillance, smuggling, kinetic (i.e. collision), electronic and distraction threats. Solutions include both non-destructive means (such as software intervention, UAV vs UAV, ground-based capture/interference and bird-based methods) and destructive means (including electromagnetism, lasers, firearms and missiles) (Solodov et al., 2018). Some airports are working to manage drones in their airspace (e.g. Sichko, 2019; Mackie and Lawrence, 2019). Many of these methods are reactive or defensive; instead, more proactive and preventative methods of management would be warranted. Current regulatory approaches look to assign responsibility to the operator, which is, in reality, a concern for both consumer and operator (Liu and Chen, 2019).

Further management of lower airspace is a growing area of policy consideration. Across the globe, laws and regulations will need to be created to manage drone impacts, and jurisdictions are examining drone use and building regulatory environments around it. Chen (2016) identified that the legal and regulatory framework in the US needs reform to facilitate commercial purposes. Integration of drones into the presently regulated airspace (particularly in urban areas and areas of higher sensitivity) is seen by industry as a likely policy outcome (Torens et al., 2018). Broadly consistent jurisdictional approaches to this regulation are under development, some of which appear consistent with that envisaged by Clarke (2016), and the European approach is said to focus on the operation of the flight rather than the aircraft itself (Hirling and Holzapfel, 2017). This might be described as an approach that softly regulates the industry as it presently stands to allow for safe participation. These regulatory measures significantly increase the requirements on operators to build cultures of safety into their operations. This approach bears a resemblance to other transport sectors (i.e. non-drone aviation, railways and road vehicle operations), which require pilot/driver licensing and firm accreditation. Regulators worldwide are looking to manage the drone itself (weight and size), who flies the drone (both organisationally and personally), how they fly it (height, day/night, speed, visual line of sight), where they fly it (restricted areas, near people, near private space) and other factors (such as the number of drones operated simultaneously) (Civil Aviation Safety Authority, 2019).

The approach taken by regulators in most jurisdictions so far, to grow regulation with the industry instead of trying to foresee the future and regulate for that, is one that may be (and is indeed intended to be) designed to support entrepreneurship, innovation and economic growth (Chisholm, 2018).

However, despite the above, it is clear that even in jurisdictions with well-advanced drone governance frameworks, more regulation will no doubt be required. The above framework does not cover the full regulatory gap between current drone use and non-drone airspace. Operators seeking to operate outside the limits of the above regulation will arise and require further management. Drone automation will mean that pilot intervention to manage the drone in the event of abnormal operations will be impossible. However, there will remain human-controlled drones (including remotely piloted ones), such as for recreation or ad hoc, customised usage. Autonomous and human-controlled drones will have to operate together, and both modes will involve new levels of complexity, particularly as drone numbers increase. Questions will arise about how to manage drones across the industry, where individual adoption by firms will more than likely require harmonised regulation to support supply chain efficiency (Druehl et al., 2018; Foina et al., 2015), and different operators will run subnetworks with different path optimisation plans (Liu et al., 2019; Jeong et al., 2019). With the substantial increase in flying, particularly in both time and frequency terms, drones are going to have a far more significant impact than current regulatory frameworks can manage.

5. Managing the drone revolution – where to from here?

Given the relatively low level of consideration in the literature, the opportunities for interesting research into the control and macro-management of drones are significant, wide and varied. In the context of this paper, however, the primary area for further research that we see as relevant is how the new drone ecosystem is to be managed in the macro sense. There is still a raft of challenges to be overcome (Zhou et al., 2018); however, with the prospect that drone flights will become as normal as car trips and that drones will play a role in ‘smart’ cities (Mohamed et al., 2018), ensuring that this new system is not only safe but also productive is essential.

An Internet-of-Drones (Edwin et al., 2019) is a very real potential future. Research into the use of flying ad hoc networks to monitor and manage deviant drone behaviour (Bahloul et al., 2017; Barka et al., 2018; Karthikeyan and Vadivel, 2019) is in progress, as is research into geofencing (Boselli et al., 2017) and signal jamming (Chowdhury et al., 2017) that act on the navigation systems within drones to prevent incursion into restricted areas. To implement some of these preventative technologies, however, the relevant drones must have navigation technologies installed for the countermeasures to act upon, which for a substantial number of retail drones is not the case. For drones that do have navigation technology, research efforts into algorithms and programs to facilitate orderly inter-drone coordination are quite extensive, covering network registration processes (Agron et al., 2019), obstacle detection (Zheng et al., 2016; Zhu et al., 2017; Choutri et al., 2019; Abdullah et al., 2019), separation processes and collision avoidance (Tan et al., 2017; Nysetvold and Salmon, 2019), the impact of weather on drone performance (Vural et al., 2019), completion of common tasks (Zhuravska et al., 2018; Abraham et al., 2019; Fesenko et al., 2019; Zhu and Wen, 2019), inter-drone information security (Abughalwa and Hasna, 2019) and operation in GPS-poor areas (Siva and Poellabauer, 2019), though many of these are conceptual or theoretical deployments (e.g. Kim and Kang, 2019). Connecting the independent networks of drones that are expected to exist in the future is yet to appear in the literature, though some elements of this are developing, such as using drones as nodes of a multi-drone communication network (Kuleshov et al., 2018; Smith et al., 2018; Xiao and Guo, 2019). Note, though, that these methods address only local coordination between a drone and static obstacles (e.g. buildings), or among a few connected drones; that is, drone micro-management: systems and processes developed to affect the drone from the drone's perspective. More thinking about drone macro-management and drones' broader interaction with the environment needs progression, particularly how to manage drones and their collective impact on the remainder of society so that this impact is positive.
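As a minimal illustration of the geofencing idea referenced above, a drone's navigation system could test its position against a restricted boundary before proceeding. The polygon and coordinates below are invented for illustration:

```python
# Hedged sketch of a geofence check: a ray-casting point-in-polygon test
# against a restricted area (e.g. an airport boundary). Coordinates are
# illustrative planar values, not real geography.
def inside_polygon(x: float, y: float, poly: list[tuple[float, float]]) -> bool:
    """Return True if point (x, y) lies inside the polygon's vertices."""
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        # Toggle on each polygon edge the horizontal ray from (x, y) crosses.
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

restricted = [(0, 0), (10, 0), (10, 10), (0, 10)]  # hypothetical no-fly zone
print(inside_polygon(5, 5, restricted))   # True: inside the geofence
print(inside_polygon(15, 5, restricted))  # False: outside
```

A countermeasure of the kind discussed above can only act on this check if the drone's navigation stack actually runs it, which is the gap the paragraph notes for many retail drones.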

Industry is turning towards this question, with operators looking to develop more complex management systems. It is likely that (as in aviation generally) each operator will look to develop a customised way of managing drones to suit its operations, such as for search and rescue systems (Mohsin et al., 2016; Mondal et al., 2018), complex distribution networks (Shavarani, 2019) or routings with ad hoc targets (Suteris et al., 2018), which will no doubt be complex given the use of the third dimension (Pandey et al., 2018). The concept of an overarching coordinating network is gaining traction in industry and government: NASA, for instance, is looking to integrate UAS into the national airspace system (Luxhøj et al., 2017; Matus and Hedblom, 2018; He et al., 2019). Parts of industry, however, take a different view. Logistics and technology firms such as Amazon and Google are looking at using drones in their parcel delivery systems, and firms such as Uber are looking to introduce point-to-point passenger drone services. Small-scale trials are underway in various locations globally, where industry is developing its own navigation systems to manage drone delivery. Industry argues that it would be able to self-regulate its drones with these systems, designing them to communicate between drones of different operators and centralised processors. These systems would simultaneously program the most efficient routes for deliveries, while taking into account, mitigating and avoiding collisions and incursions that may cause damage not only to other drones but also to uninvolved third parties.
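The route-programming task described above can be illustrated with a deliberately simple sketch (our own assumption, not any operator's actual system): a greedy nearest-neighbour ordering of delivery stops from a depot. Real multi-operator systems would layer deconfliction, no-fly zones and three-dimensional routing on top of this.

```python
# Illustrative delivery route programming (simplified sketch): order the
# stops greedily by proximity, starting from the depot. Coordinates are
# planar (x, y) for clarity.

import math

def plan_route(depot, stops):
    """Return the stops ordered greedily by nearest-neighbour from depot."""
    route, current, remaining = [], depot, list(stops)
    while remaining:
        nearest = min(remaining, key=lambda s: math.dist(current, s))
        remaining.remove(nearest)
        route.append(nearest)
        current = nearest
    return route

depot = (0.0, 0.0)
stops = [(5.0, 5.0), (1.0, 0.0), (1.0, 1.0)]
print(plan_route(depot, stops))  # [(1.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
```

Nearest-neighbour is a heuristic, not an optimum; it serves here only to show the kind of decision a centralised or self-regulated routing processor would automate.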

A competing view considers whether drones should be integrated into the overall air transport management system (Zhang et al., 2018) and managed using many of the same tools and mechanisms deployed by regular aircraft, such as identification and collision avoidance systems (Lin, 2019). There is a view that far more oversight of the sector will be required to ensure that safety conditions can be met, and that airborne drones cannot operate separately from the large aircraft with which they will share airspace. A system through which this control could be exercised is being called, by airspace management technology developers, 'low altitude airspace management systems (LAAM)'. LAAM as currently envisaged may replicate the control mechanisms used for general and civil aviation flights; importantly, though, each of these different types of flights, drone and non-drone, will know about all other flights when making flight planning and execution decisions. Such systems will be able to communicate with drones and record their position and use within the network. Other features might also be incorporated into LAAM, including the ability to issue instructions to drones (for, say, crash avoidance) or to enforce geofencing boundaries to prevent drone incursion into specified areas. They may aid in congestion management, ensuring that all drones can achieve their missions within reasonable parameters, and may include mechanisms to facilitate flight planning and operations, consistent with current air and rail traffic control systems. Real-time management of issues would be an essential feature of LAAM (Zheng et al., 2016). To us, the debate over centralised versus distributed airspace management is quite interesting, not only for the impact it may have on airspace management for drones but also for the precedents it may set for other sectors.
The impact of such coordination systems on public drone acceptance would also be of interest for researchers to address, given that government involvement in such regulation may be trusted more than regulation by the private sector alone.
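One LAAM feature named above, real-time separation management, can be sketched as follows (a simplified illustration under our own assumptions; the 50 m separation standard and flight identifiers are hypothetical). The check flags any pair of flights whose positions at the same timestamp fall within the minimum separation distance:

```python
# Illustrative LAAM-style separation check (simplified assumption): each
# flight reports a planned (x, y, z) position in metres at time t; the
# manager flags any pair closer than a minimum separation distance.

import math
from itertools import combinations

MIN_SEPARATION_M = 50.0  # hypothetical separation standard

def conflicts(positions):
    """positions: dict of flight_id -> (x, y, z) at one timestamp.
    Returns the list of flight-id pairs violating separation."""
    clashes = []
    for (a, pa), (b, pb) in combinations(positions.items(), 2):
        if math.dist(pa, pb) < MIN_SEPARATION_M:
            clashes.append((a, b))
    return clashes

snapshot = {
    "parcel-drone-1": (100.0, 200.0, 60.0),
    "parcel-drone-2": (120.0, 210.0, 65.0),   # ~23 m from drone 1
    "survey-drone-9": (900.0, 900.0, 120.0),  # well clear of both
}
print(conflicts(snapshot))  # [('parcel-drone-1', 'parcel-drone-2')]
```

A real LAAM would run such checks predictively over planned trajectories rather than single snapshots, and would issue avoidance instructions to the conflicting flights rather than merely listing them.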

From an engineering and technical perspective, the areas of research required are almost endless, as new systems are scoped, designed and developed to integrate within the current regulatory environment and aviation control systems. But from our perspective, that of management, there are a few key areas of research that stem from the question of LAAM implementation. Firstly, the need for LAAM, and what it is to do, requires better articulation from those who would be impacted by it.

As noted, key potential future users of such systems are discussing their need, but further consultation is required to detail precisely what is needed. There are significant policy and commercial/regulatory discussions to be had, but from an academic perspective this discussion will provide useful insight into a range of issues. An immediate area to investigate is the perspective of current recreational and commercial users: their reaction to a possible integration into LAAM, and what they would like to see for themselves if LAAM is implemented. Current regulations enforce rules on operators which may not be required under a LAAM. In addition, research is needed into prospective users and the preliminary strategies, pricing and other decisions that firms, such as logistics providers, will make when using the network. Consideration should also be given to overall supply chains and the changes that drones may bring in the context of LAAM, which would not only enable but also cheapen the use of drones and affect a range of upstream and downstream elements. Retail precincts may be impacted by yet more package delivery. Warehouses may look quite different from how they do now. Drones may replace hydrocarbon fuel consumption with electricity consumption. They may also remove trucks from roads, particularly urban delivery trucks. And individual supply chains and travel patterns may change as drones become part of everyday life.

Other transport-management-specific questions remain to be answered, as highlighted in the literature. Delivery substitution decisions will also be of interest to academia. Cost will be a driver of these changes, but other factors such as service quality and the types of services offered will become a focus area. Optimal drone network designs will be an interesting avenue of discussion (e.g. Pulver and Wei, 2018), and these will vary depending on the purpose of the drones employed. Optimising how truck and drone fleets interact may be a useful transitional measure to help improve delivery time and efficiency (Freitas and Penna, 2018). Other delivery mechanisms are also worth researching, such as replacing the truck with a parent drone (Kim and Awwad, 2017). Medical deliveries will need higher prioritisation on the network to ensure their rapid movement from donors to the operating theatres where they are needed, or through the transit points they must traverse (Balakrishnan et al., 2016). Some form of prioritisation matrix will therefore be required.
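The prioritisation matrix mentioned above could be operationalised as a dispatch queue that orders pending missions by priority class first and requested time second, so that medical deliveries always pre-empt routine parcels. The sketch below is our own simplified assumption (the priority classes and mission names are hypothetical), not a scheme from any cited paper:

```python
# Sketch of a priority-based drone mission dispatcher (assumed priority
# classes, not from any cited system): lower class number = more urgent.

import heapq

PRIORITY = {"medical": 0, "emergency-supply": 1, "parcel": 2}

class Dispatcher:
    def __init__(self):
        self._queue = []
        self._counter = 0  # tie-breaker keeps insertion order stable

    def submit(self, mission_id, kind, requested_at):
        heapq.heappush(
            self._queue,
            (PRIORITY[kind], requested_at, self._counter, mission_id),
        )
        self._counter += 1

    def next_mission(self):
        """Pop the most urgent pending mission id (None if empty)."""
        if not self._queue:
            return None
        return heapq.heappop(self._queue)[-1]

d = Dispatcher()
d.submit("parcel-17", "parcel", requested_at=100)
d.submit("blood-run-3", "medical", requested_at=250)  # later, but urgent
d.submit("parcel-18", "parcel", requested_at=90)

print(d.next_mission())  # blood-run-3 (medical pre-empts parcels)
print(d.next_mission())  # parcel-18 (earlier request wins within class)
```

A fuller matrix would weigh further dimensions (payload criticality, battery margin, airspace congestion), but the two-level ordering captures the core idea of network-level prioritisation.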

A key limitation of our approach, and of any literature review more generally, is the lack of full comprehensiveness, as literature in the relevant subject area is a) proliferating (past the cut-off date and the publication of this paper) and b) not confined to academic outputs (i.e. those indexed in SCOPUS). During our grey literature review, we noticed a recent surge of consultancy reports on drone use cases in the context of urban air mobility (UAM) as a new mode of transportation (e.g. Baur et al., 2018 (Roland Berger); Booz Allen Hamilton, 2018; Grandl et al., 2018 (Porsche Consulting); Thomsen, M., 2017 (Airbus)), which suggests that academic papers covering this topic will follow. Indeed, Fu et al. (2019) is the first in a potential series of such papers and has been included in our review.

In summary, our literature review results suggest that security, privacy and acceptance concerns, whilst significant and relevant, are not as dominant as they have been in previous periods, with the use of drones in various ecosystems providing an opportunity for researchers to examine their introduction and impact on those with whom they interact. We conclude that further work is needed to understand the potential impacts of drone usage (e.g. fatalities due to accidents), the subsequent potential risk trade-offs and the adjustment/formulation of new regulation (Hirling and Holzapfel, 2017). The safety/cost trade-off will be an important one in setting appropriate safety rules that facilitate the industry without constraining it unnecessarily, including the development of low altitude airspace management systems to support increased deployment.

Acknowledgements

We acknowledge the contribution and comments received from participants at the 2019 Air Transport Research Society 23rd World Conference. The comments from two anonymous reviewers have helped us to further improve the paper for which we are thankful. We are grateful for the comments and financial support received from Thales Australia and the University of Sydney Business School through an Industry Partnership Grant.

  • Abaffy L. Drones used to conduct bridge inspections in Minnesota. Eng. News Rec. 2015;274(41) [ Google Scholar ]
  • Abaffy L. Construction begins on drone aviation park in North Dakota. Eng. News Rec. 2015;274(31) [ Google Scholar ]
  • Abaffy L., Sawyer T. How drones are reformatting photography. Eng. News Rec. 2016;275(2) [ Google Scholar ]
  • Abdullah A.A., Sahib B.B., Abu N.A. 2019 2nd International Conference of Computer and Informatics Engineering (IC2IE) IEEE; 2019. Investigating connection algorithms among drones in the DRANET system; pp. 175–180. [ Google Scholar ]
  • Abraham L., Biju S., Biju F., Jose J., Kalantri R., Rajguru S. 2019 International Conference on Innovative Sustainable Computational Technologies (CISCT) IEEE; 2019. Swarm robotics in disaster management; pp. 1–5. [ Google Scholar ]
  • Abughalwa M., Hasna M.O. ICTC 2019 - 10th International Conference On ICT Convergence: ICT Convergence Leading The Autonomous Future. 2019. A comparative secrecy study of flying and ground eavesdropping in UAV based communication systems. [ Google Scholar ]
  • Agron D.J.S., Ramli M.R., Lee J.M., Kim D.S. 2019 International Conference on Information and Communication Technology Convergence (ICTC) IEEE; 2019. Secure ground control station-based routing protocol for UAV networks; pp. 794–798. [ Google Scholar ]
  • Alvear O., Zema N.R., Natalizio E., Calafate C.T. Using UAV-based systems to monitor air pollution in areas with poor accessibility. J. Adv. Transport. 2017;17 [ Google Scholar ]
  • Anania E.C., Rice S., Pierce M., Winter S.R., Capps J., Walters N.W., Milner M.N. Public support for police drone missions depends on political affiliation and neighborhood demographics. Technol. Soc. 2019;57:95–103. [ Google Scholar ]
  • Asma T., Addouche S.A., Dellagi S., El Mhamedi A. 2017 IEEE International Conference on Service Operations and Logistics, and Informatics (SOLI) IEEE; 2017. Post-production analysis approach for drone delivery fleet; pp. 150–155. [ Google Scholar ]
  • Athanasis N., Themistocleous M., Kalabokidis K., Chatzitheodorou C. European, Mediterranean, and Middle Eastern Conference on Information Systems. Springer; Cham: 2018. Big data analysis in UAV surveillance for wildfire prevention and management; pp. 47–58. [ Google Scholar ]
  • Atwater D.M. The commercial global drone market: emerging opportunities for social and environmental uses of UAVs. Graziadio Business Report. 2015;18(2) [ Google Scholar ]
  • Aydin B. Public acceptance of drones: knowledge, attitudes, and practice. Technol. Soc. 2019;59:101180. [ Google Scholar ]
  • Bahloul N.E.H., Boudjit S., Abdennebi M., Boubiche D.E. 2017 26th International Conference on Computer Communication and Networks (ICCCN) IEEE; 2017. Bio-inspired on demand routing protocol for unmanned aerial vehicles; pp. 1–6. [ Google Scholar ]
  • Balakrishnan N., Devaraj K., Rajan S., Seshadri G. Transportation of organs using UAV. Proc. Int. Conf. Ind. Eng. Oper. Management. 2016;3090 [ Google Scholar ]
  • Bamford T., Esmaeili K., Schoellig A.P. A real-time analysis of post-blast rock fragmentation using UAV technology. Int. J. Min. Reclamat. Environ. 2017;31(6):439–456. [ Google Scholar ]
  • Banerjee B.P., Raval S., Maslin T.J., Timms W. Development of a UAV-mounted system for remotely collecting mine water samples. Int. J. Min. Reclamat. Environ. 2018:1–12. [ Google Scholar ]
  • Barin A., Dolgov I., Toups Z.O. Proceedings of the Annual Symposium on Computer-Human Interaction in Play. 2017. Understanding dangerous play: a grounded theory analysis of high-performance drone racing crashes; pp. 485–496. [ Google Scholar ]
  • Barka E., Kerrache C.A., Lagraa N., Lakas A., Calafate C.T., Cano J.C. UNION: a trust model distinguishing intentional and Unintentional misbehavior in inter-UAV communication. J. Adv. Transport. 2018 doi: 10.1155/2018/7475357. [ DOI ] [ Google Scholar ]
  • Bartsch R., Coyne J., Gray K. Routledge; 2016. Drones in Society – Exploring the Strange New World of Unmanned Aircraft. [ Google Scholar ]
  • Baur S., Schickram S., Homulenko A., Martinez N., Dyski A. Urban air mobility: the rise of a new mode of transportation; passenger drones ready for take-off. Roland Berger; 2018. https://www.rolandberger.com/fr/Publications/Passenger-drones-ready-for-take-off.html available at.
  • Boselli C., Danis J., McQueen S., Breger A., Jiang T., Looze D., Ni D. Geo-fencing to secure airport perimeter against sUAS. International Journal of Intelligent Unmanned Systems. 2017;5(4):102–116. [ Google Scholar ]
  • Boucher P. Domesticating the drone: the demilitarisation of unmanned aircraft for civil markets. Sci. Eng. Ethics. 2015;21(6):1393–1412. doi: 10.1007/s11948-014-9603-3. [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Boucher P. ‘You wouldn't have your granny using them’: drawing boundaries between acceptable and unacceptable Applications of civil drones. Sci. Eng. Ethics. 2016;22(5):1391–1418. doi: 10.1007/s11948-015-9720-7. [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Bravo R.Z.B., Leiras A., Cyrino Oliveira F.L. The use of UAVs in humanitarian relief: an application of POMDP‐based methodology for finding victims. Prod. Oper. Manag. 2019;28(2):421–440. [ Google Scholar ]
  • Carli F., Manzotti M.E., Savoini H. ICT for a Better Life and a Better World. Springer; Cham: 2019. New market creation for technological breakthroughs: commercial drones and the disruption of the emergency market; pp. 335–345. [ Google Scholar ]
  • Carlsson J.G., Song S. Coordinated logistics with a truck and a drone. Manag. Sci. 2017;64(9):4052–4069. [ Google Scholar ]
  • Chang S.J., Li K.W. 2018 5th International Conference on Industrial Engineering and Applications (ICIEA) IEEE; April 2018. Visual and hearing detection capabilities to discriminate whether a UAV invade a campus airspace; pp. 146–149. [ Google Scholar ]
  • Chen G.Y. Reforming the current regulatory framework for commercial drones: retaining American businesses' competitive advantage in the global economy. Northwest Journal of International Law and Business. 2016;37:513. [ Google Scholar ]
  • Chen P., Zeng W., Yu G., Wang Y. Surrogate safety analysis of pedestrian-vehicle conflict at intersections using unmanned aerial vehicle videos. J. Adv. Transport. 2017;17 [ Google Scholar ]
  • China Global Television Network . 2019. 500 Drones Create Stunning Light Show on AI-Driven Future. https://www.youtube.com/watch?v=LvYNHSf7FbI [ Google Scholar ]
  • Chisholm J.D. Drones, dangerous animals and peeping Toms: impact of imposed vs. organic regulation on entrepreneurship, innovation and economic growth. Int. J. Enterpren. Small Bus. 2018;35(3):428–451. [ Google Scholar ]
  • Choutri K., Lagha M., Dala L. Distributed obstacles avoidance for UAVs formation using consensus-based switching topology. International Journal of Computing and Digital Systems. 2019;8:167–178. [ Google Scholar ]
  • Chowdhury D., Sarkar M., Haider M.Z.A. Cyber-vigilance system for anti-terrorist drives based on an unmanned aerial vehicular networking signal jammer for specific territorial security. Adv. Sci. Technol. Eng. Syst. J. 2017;3(3):43–50. [ Google Scholar ]
  • Chung J. Heuristic method for collaborative parcel delivery with drone. J. Distrib. Sci. 2018;16(2):19–24. [ Google Scholar ]
  • Civil Aviation Safety Authority 2019. www.droneflyer.gov.au
  • Clarke R. Appropriate regulatory responses to the drone epidemic. Comput. Law Secur. Rep. 2016;32(1):152–155. [ Google Scholar ]
  • Connolly K.B. Eyes on the skies: the dream of drone delivery starts to take flight. Packag. Digest. 2016;53(3):18–25. [ Google Scholar ]
  • Daly A. Privacy in automation: an appraisal of the emerging Australian approach. Comput. Law Secur. Rep. 2017;33(6):836–846. [ Google Scholar ]
  • Damiani L., Revetria R., Giribone P., Guizzi G. Concomitant 14th International Conference on SoMeT. 2015. Simulative comparison between ship and airship for the transport of waste natural gas from oil wells. [ Google Scholar ]
  • Druehl C., Carrillo J., Hsuan J. Collaboration and Strategies; 2018. Technological Innovations: Impacts on Supply Chains. Innovation And Supply Chain Management: Relationship; pp. 259–281. [ Google Scholar ]
  • Edwin E.B., RoshniThanka M., Deula S. An internet of drone (IoD) based data analytics in cloud for emergency services. Int. J. Recent Technol. Eng. 2019;7(5S2):263–367. [ Google Scholar ]
  • El-Adle A.M., Ghoniem A., Haouari M. Parcel delivery by vehicle and drone. Journal of the Operational Research Society. 2019:1–19. [ Google Scholar ]
  • Ferrandez S.M., Harbison T., Weber T., Sturges R., Rich R. Optimization of a truck-drone in tandem delivery network using k-means and genetic algorithm. J. Ind. Eng. Manag. 2016;9(2):374–388. [ Google Scholar ]
  • Fesenko H., Kharchenko V., Zaitseva E. 2019 International Conference on Information and Digital Technologies (IDT) IEEE; June 2019. Evaluating reliability of a multi-fleet with a reserve drone fleet: an approach and basic model; pp. 128–132. [ Google Scholar ]
  • Foina A.G., Krainer C., Sengupta R. Unmanned Aircraft Systems (ICUAS), 2015 International Conference on. June 2015. An unmanned aerial traffic management solution for cities using an air parcel model; pp. 1295–1300. [ Google Scholar ]
  • Freeman P.K., Freeland R.S. Media framing the reception of unmanned aerial vehicles in the United States of America. Technol. Soc. 2016;44:23–29. [ Google Scholar ]
  • Freitas J.C., Penna P.H.V. A variable neighborhood search for flying sidekick traveling salesman problem. Int. Trans. Oper. Res. 2018:1–24. [ Google Scholar ]
  • Fu M., Rothfeld R., Antoniou C. Exploring preferences for transportation modes in an urban air mobility environment: Munich case study. Transportation Research Record. 2019;2673(10):427–442. [ Google Scholar ]
  • Ginieis M., Sánchez-Rebull M.V., Campa-Planas F. The academic journal literature on air transport: analysis using systematic literature review methodology. J. Air Transport. Manag. 2012;19:31–35. [ Google Scholar ]
  • Giones F., Brem A. From toys to tools: the co-evolution of technological and entrepreneurial developments in the drone industry. Bus. Horiz. 2017;60(6):875–884. [ Google Scholar ]
  • Goss K., Musmeci R., Silvestri S. 2017 26th International Conference on Computer Communication and Networks (ICCCN) IEEE; 2017. Realistic models for characterizing the performance of unmanned aerial vehicles; pp. 1–9. [ Google Scholar ]
  • Grandl G., Ostgathe M., Cachay J., Doppler S., Salib J., Ross H. Porsche Consulting; 2018. The future of vertical mobility. https://www.porsche-consulting.com/fileadmin/docs/04_Medien/Publikationen/TT1371_The_Future_of_Vertical_Mobility/The_Future_of_Vertical_Mobility_A_Porsche_Consulting_study__C_2018.pdf available at. [ Google Scholar ]
  • Gunningham N., Kagan R.A., Thornton D. Social license and environmental protection: why businesses go beyond compliance. Law Soc. Inq. 2004;29(2):307–341. [ Google Scholar ]
  • Booz Allen Hamilton. Urban air mobility (UAM) market study, McLean. 2018. https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20190001472.pdf available at.
  • He D., Liu H., Chan S., Guizani M. IEEE Network; 2019. How to Govern the Non-cooperative Amateur Drones? [ Google Scholar ]
  • Hirling O., Holzapfel F. O.R.C.U.S. risk assessment tool for operations of light UAS above Germany. International Journal of Intelligent Unmanned Systems. 2017;5(1):2–17. [ Google Scholar ]
  • Hodgkinson D., Johnston R. Routledge; 2018. Aviation Law and Drones – Unmanned Aircraft and the Future of Aviation. [ Google Scholar ]
  • Hultquist C., Sava E., Cervone G., Waters N. Damage assessment of the urban environment during disasters using volunteered geographic information. Big Data for Regional Science. 2017:214–228. [ Google Scholar ]
  • Hwang J., Choe J.Y.J. Exploring perceived risk in building successful drone food delivery services. Int. J. Contemp. Hospit. Manag. 2019;31(8):3249–3269. doi: 10.1108/IJCHM-07-2018-0558. [ DOI ] [ Google Scholar ]
  • Hwang J., Kim H. Consequences of a green image of drone food delivery services: the moderating role of gender and age. Bus. Strat. Environ. 2019;28:872–884. [ Google Scholar ]
  • Hwang J., Cho S.B., Kim W. Consequences of psychological benefits of using eco-friendly services in the context of drone food delivery services. J. Trav. Tourism Market. 2019:1–12. [ Google Scholar ]
  • Hwang J., Kim H., Kim W. Investigating motivated consumer innovativeness in the context of drone food delivery services. J. Hospit. Tourism Manag. 2019;38:102–110. [ Google Scholar ]
  • Hwang J., Lee J.S., Kim H. Perceived innovativeness of drone food delivery services and its impacts on attitude and behavioral intentions: the moderating role of gender and age. Int. J. Hospit. Manag. 2019;81:94–103. [ Google Scholar ]
  • Jeong H.Y., Song B.D., Lee S. Truck-drone hybrid delivery routing: payload-energy dependency and No-Fly zones. Int. J. Prod. Econ. 2019;214:220–233. [ Google Scholar ]
  • Johansen K., Erskine P.D., McCabe M.F. Using Unmanned Aerial Vehicles to assess the rehabilitation performance of open cut coal mines. J. Clean. Prod. 2019;209:819–833. [ Google Scholar ]
  • Kamlofsky J.A., Naidoo N., Bright G., Bergamini M.L., Zelasco J., Ansaldo F., Stopforth R. 2018. Semi-Autonomous Robot Control System with an Improved 3D Vision Scheme for Search and Rescue Missions. A Joint Research Collaboration between South Africa and Argentina. [ Google Scholar ]
  • Kang K., Jeon I. Study on utilization drones in domestic logistics service in Korea. J. Distrib. Sci. 2016;14:51–57. [ Google Scholar ]
  • Khan R., Tausif S., Javed Malik A. Consumer acceptance of delivery drones in urban areas. Int. J. Consum. Stud. 2019;43(1):87–101. [ Google Scholar ]
  • Kim K., Awwad M. International Annual Conference of the American Society for Engineering Management. 2017. Modeling Effective Deployment of Airborne Fulfilment Centres. [ Google Scholar ]
  • Kim K., Kang Y. 2019 International Conference on Information and Communication Technology Convergence (ICTC) IEEE; 2019. Implementation of UAS identification and authentication on oneM2M IoT platform; pp. 948–950. [ Google Scholar ]
  • Kim D.H., Lee B.K., Sohn S.Y. Quantifying technology–industry spillover effects based on patent citation network analysis of unmanned aerial vehicle (UAV) Technol. Forecast. Soc. Change. 2016;105:140–157. [ Google Scholar ]
  • Kovacs M., Călămar A.N., Toth L., Simion S., Simion A., Kovacs I. Opportunity of using drones equipped with sensors for measurement of combustion gases. Calitatea. 2019;20(S1):207. [ Google Scholar ]
  • Kuleshov S.V., Zaytseva A.A., Aksenov A.Y. The conceptual view of unmanned aerial vehicle implementation as a mobile communication node of active data transmission network. International Journal of Intelligent Unmanned Systems. 2018;6(4):174–183. [ Google Scholar ]
  • Kwon H., Kim J., Park Y. Applying LSA text mining technique in envisioning social impacts of emerging technologies: the case of drone technology. Technovation. 2017;60:15–28. [ Google Scholar ]
  • Kyrkou C., Timotheou S., Kolios P., Theocharides T., Panayiotou C. Drones: augmenting our quality of life. IEEE Potentials. 2019;38(1):30–36. [ Google Scholar ]
  • Langhammer J., Janský B., Kocum J., Minařík R. 3-D reconstruction of an abandoned montane reservoir using UAV photogrammetry, aerial LiDAR and field survey. Appl. Geogr. 2018;98:9–21. [ Google Scholar ]
  • Lee J.K., Kim S.H., Sim G.R. Mode choice behavior analysis of air transport on the introduction of remotely piloted passenger aircraft. J. Air Transport. Manag. 2019;76:48–55. [ Google Scholar ]
  • Li Y., Liu C. Applications of multirotor drone technologies in construction management. International Journal of Construction Management. 2019;19(5):401–412. [ Google Scholar ]
  • Li Y., Sun Z., Qin R. Proceedings of the International Annual Conference of the American Society for Engineering Management. American Society for Engineering Management (ASEM); 2018. Routing algorithm and cost analysis for using hydrogen fuel cell powered unmanned aerial vehicle in high voltage transmission line inspection; pp. 1–11. [ Google Scholar ]
  • Lin L. The design of UAV collision avoidance system based on ADS-B IN. Paper Asia. 2019;2:141–144. [ Google Scholar ]
  • Liu C.C., Chen J.J. Analysis of the weights of service quality indicators for drone filming and photography by the fuzzy analytic network process. Appl. Sci. 2019;9(6):1236. [ Google Scholar ]
  • Liu J., Guan Z., Xie X. 2018 8th International Conference on Logistics, Informatics and Service Sciences (LISS) IEEE; 2018. Truck and Drone in Tandem Route Scheduling under Sparse Demand Distribution; pp. 1–6. [ Google Scholar ]
  • Liu Y., Liu Z., Shi J., Wu G., Chen C. Optimization of base location and patrol routes for unmanned aerial vehicles in border intelligence, surveillance, and reconnaissance. Journal of Advanced Transportation. 2019. [ Google Scholar ]
  • Luppicini R., So A. A technoethical review of commercial drone use in the context of governance, ethics, and privacy. Technol. Soc. 2016;46:109–119. [ Google Scholar ]
  • Luxhøj J.T., Joyce W., Luxhøj C. A ConOps derived UAS safety risk model. J. Risk Res. 2017:1–23. [ Google Scholar ]
  • Lygouras E., Dokas I.M., Andritsos K., Tarchanidis K., Gasteratos A. International Conference on Information Systems for Crisis Response and Management in Mediterranean Countries. Springer; Cham: 2017. Identifying hazardous emerging behaviors in search and rescue missions with drones: a proposed methodology; pp. 70–76. [ Google Scholar ]
  • Lygouras E., Gasteratos A., Tarchanidis K. International Conference on Information Systems for Crisis Response and Management in Mediterranean Countries. Springer; Cham: 2017. ROLFER: an innovative proactive platform to reserve swimmer's safety; pp. 57–69. [ Google Scholar ]
  • Mackie T., Lawrence A. Integrating unmanned aircraft systems into airport operations: from buy-in to public safety. J. Airpt. Manag. 2019;13(4):380–390. [ Google Scholar ]
  • Magistretti S., Dell'Era C. Unveiling opportunities afforded by emerging technologies: evidences from the drone industry. Technol. Anal. Strat. Manag. 2019;31(5):606–623. [ Google Scholar ]
  • Martinetti A., Schakel E.J., van Dongen L.A. Flying asset: framework for developing scalable maintenance program for Unmanned Aircraft Systems (UAS) J. Qual. Mainten. Eng. 2018;24(2):152–169. [ Google Scholar ]
  • Matus F., Hedblom B. 2018 Integrated Communications, Navigation, Surveillance Conference (ICNS) IEEE; 2018. Addressing the Low-Altitude Airspace Integration Challenge—USS or UTM Core? 2F1-1. [ Google Scholar ]
  • Meincke P., Asmer L., Geike L., Wiarda H. 2018 8th International Conference on Logistics, Informatics and Service Sciences (LISS) IEEE; 2018. Concepts for cargo ground handling of unmanned cargo aircrafts and their influence on the supply chain; pp. 1–10. [ Google Scholar ]
  • Merkert R., Van de Voorde E., de Wit J. Making or breaking - key success factors in the air cargo market. J. Air Transport. Manag. 2017;61:1–5. [ Google Scholar ]
  • Meunier F.X., Bellais R. Technical systems and cross-sector knowledge diffusion: an illustration with drones. Technol. Anal. Strat. Manag. 2019;31(4):433–446. [ Google Scholar ]
  • Mohamed N., Al-Jaroodi J., Jawhar I., Idries A., Mohammed F. Unmanned aerial vehicles applications in future smart cities. Technol. Forecast. Soc. Change. 2018 (in press) [ Google Scholar ]
  • Mohsin B., Steinhäusler F., Madl P., Kiefel M. An innovative system to enhance situational awareness in disaster response. J. Homel. Secur. Emerg. Manag. 2016;13(3):301–327. [ Google Scholar ]
  • Mondal T., Bhattacharya I., Pramanik P., Boral N., Roy J., Saha S., Saha S. A multi-criteria evaluation approach in navigation technique for micro-jet for damage & need assessment in disaster response scenarios. Knowl. Base Syst. 2018;162:220–237. [ Google Scholar ]
  • Moudrý V., Gdulová K., Fogl M., Klápště P., Urban R., Komárek J., Moudrá L., Štroner M., Barták V., Solský M. Comparison of leaf-off and leaf-on combined UAV imagery and airborne LiDAR for assessment of a post-mining site terrain and vegetation structure: prospects for monitoring hazards and restoration success. Appl. Geogr. 2019;104:32–41. [ Google Scholar ]
  • Na S., Park C., So K., Park J., Lee K. 2017 6th International Conference on Agro-Geoinformatics. IEEE; 2017. Mapping the spatial distribution of barley growth based on unmanned aerial vehicle; pp. 1–5. [ Google Scholar ]
  • Nelson J., Gorichanaz T. Technology in Society; 2019. Trust as an Ethical Value in Emerging Technology Governance: the Case of Drone Regulation. (in press) [ Google Scholar ]
  • Nysetvold T.B., Salmon J.L. Deconfliction in high-density unmanned aerial vehicle systems. J. Air Transport. 2019;27(2):61–69. [ Google Scholar ]
Drones reduce the treatment-free interval in search and rescue operations with telemedical support - A randomized controlled trial

Affiliations

  • 1 Institute of Mountain Emergency Medicine, Eurac Research, Bolzano, Italy; Department of Sport Science, Medical Section, University of Innsbruck, Innsbruck, Austria.
  • 2 Institute of Mountain Emergency Medicine, Eurac Research, Bolzano, Italy.
  • 3 Institute of Mountain Emergency Medicine, Eurac Research, Bolzano, Italy; Department of Cellular, Computational and Integrative Biology, University of Trento, Trento, Italy.
  • 4 Institute of Mountain Emergency Medicine, Eurac Research, Bolzano, Italy; Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy; Department of Neurology/Stroke Unit, General Hospital of Bolzano, Italy.
  • 5 Institute of Mountain Emergency Medicine, Eurac Research, Bolzano, Italy; Department of Internal and Emergency Medicine, Buergerspital, Solothurn, Switzerland.
  • 6 Center for Sensing Solutions, Eurac Research, Bolzano, Italy.
  • 7 NOI Techpark, Bolzano, Italy; Corpo Nazionale Soccorso Alpino e Speleologico - CNSAS, Milano, Italy.
  • 8 Institute of Mountain Emergency Medicine, Eurac Research, Bolzano, Italy; Corpo Nazionale Soccorso Alpino e Speleologico - CNSAS, Milano, Italy. Electronic address: [email protected].
  • PMID: 36680868
  • DOI: 10.1016/j.ajem.2023.01.020

Introduction: Response to medical incidents in mountainous areas is delayed due to the remote and challenging terrain. Drones could assist in a quicker search for patients and can facilitate earlier treatment through delivery of medical equipment. We aim to assess the effects of using drones in search and rescue (SAR) operations in challenging terrain. We hypothesize that drones can reduce the search time and treatment-free interval of patients by delivering an emergency kit and telemedical support.

Methods: In this randomized controlled trial with a cross-over design two methods of searching for and initiating treatment of a patient were compared. The primary outcome was a comparison of the times for locating a patient through visual contact and starting treatment on-site between the drone-assisted intervention arm and the conventional ground-rescue control arm. A linear mixed model (LMM) was used to evaluate the effect of using a drone on search and start of treatment times.

Results: Twenty-four SAR missions, performed by six SAR teams each with four team members, were analyzed. The mean time to locate the patient was 14.6 min (95% CI 11.3-17.9) in the drone-assisted intervention arm and 20.6 min (95% CI 17.3-23.9) in the control arm. The mean time to start treatment was 15.7 min (95% CI 12.4-19.0) in the drone-assisted arm and 22.4 min (95% CI 19.1-25.7) in the control arm (p < 0.01 for both comparisons).
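As an illustration of the kind of summary statistics reported in the results, here is a minimal sketch that computes a mean search time with a normal-approximation 95% confidence interval from per-mission times. The data below are hypothetical values for illustration only (the abstract does not report raw times), and the trial itself used a linear mixed model rather than this simple approximation:

```python
from statistics import NormalDist, mean, stdev

# Hypothetical per-mission search times in minutes -- illustrative only,
# NOT the trial's raw data, which the abstract does not report.
drone_times = [12.1, 15.3, 9.8, 17.2, 14.0, 13.6,
               16.4, 11.9, 18.0, 13.3, 15.8, 12.7]

def mean_ci95(times):
    """Mean with a normal-approximation 95% confidence interval.

    Uses z = 1.96 (0.975 quantile of the standard normal); with small
    samples a t-quantile or a mixed model, as in the trial, is preferable.
    """
    m = mean(times)
    half_width = NormalDist().inv_cdf(0.975) * stdev(times) / len(times) ** 0.5
    return m, (m - half_width, m + half_width)

m, (lo, hi) = mean_ci95(drone_times)
print(f"mean = {m:.1f} min, 95% CI ({lo:.1f}-{hi:.1f})")
```

This reproduces the form of the reported figures (e.g., "14.6 min, 95% CI 11.3-17.9"), though the trial's intervals come from the fitted linear mixed model, which additionally accounts for repeated measures within SAR teams.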

Conclusion: The successful use of drones in SAR operations leads to a reduction in search time and treatment-free interval of patients in challenging terrain, which could improve outcomes in patients suffering from traumatic injuries, the most commonly occurring incident requiring mountain rescue operations.

Keywords: Drone; Emergency medical services; Mountain rescue; Search and rescue; Unmanned aircraft.

Copyright © 2023 The Authors. Published by Elsevier Inc. All rights reserved.




COMMENTS

  1. (PDF) Drones for Search and Rescue

    In this work, we discuss how drones can effectively assist rescue crews in their mission to save human life. This paper is published under the Creative Commons Attribution 4.0 International (CC-BY ...

  2. Applications of drone in disaster management: A scoping review

    A comprehensive scoping review of drone applications in disaster management. Drones are a great tool in mapping, search and rescue, transportation and training. There is a lack of data on the use of drones in disaster victim identification; the review recommends focusing more on drone assistance in victim identification to prevent delays in DVI.

  3. An autonomous drone for search and rescue in forests using ...

    Autonomous drones will play an essential role in human-machine teaming in future search and rescue (SAR) missions. ... This research was funded by the Austrian Science Fund (FWF) under grant number P 32185-NBL and by the State of Upper Austria and the Austrian Federal Ministry of Education, Science, and Research via the LIT—Linz Institute of ...

  4. AI-based Drone Assisted Human Rescue in Disaster Environments

    Abstract. In this survey we are focusing on utilizing drone-based systems for the detection of individuals, particularly by identifying human screams and other distress signals. This study has significant relevance in post-disaster scenarios, including events such as earthquakes, hurricanes, military conflicts, wildfires, and more.

  5. Unmanned Aerial Vehicles for Search and Rescue: A Survey

    In recent years, unmanned aerial vehicles (UAVs) have gained popularity due to their flexibility, mobility, and accessibility in various fields, including search and rescue (SAR) operations. The use of UAVs in SAR can greatly enhance the task success rates in reaching inaccessible or dangerous areas, performing challenging operations, and providing real-time monitoring and modeling of the ...

  6. Search and rescue operation using UAVs: A case study

    Many people go missing in the wild every year. In this paper, the Search and Rescue (SAR) mission is conducted using a novel system comprising an Unmanned Aerial Vehicle (UAV) coupled with real-time machine-learning-based object detection system embedded on a smartphone. Human detection from UAV in the wilderness is a challenging task, because ...

  7. (PDF) Drone Swarms to Support Search and Rescue Operations

    In this section, we present the key topics that came up in our review of the literature and the interviews as five key research challenges for drone swarms in search and rescue operations ...

  8. Autonomous Aerial Robots for Search and Rescue Missions

    Autonomous aerial robots, often referred to as drones, have demonstrated immense potential in enhancing the effectiveness and safety of search and rescue (SAR) operations. This paper presents a comprehensive study of the latest advancements in drone technology tailored for SAR missions. It examines the integration of sophisticated algorithms that empower drones with autonomous navigation ...

  9. PDF IoT-based Autonomous Search and Rescue Drone for Precision Firefighting

    Our research aims to utilize Internet of Things (IoT)-based autonomous drones to provide detailed situational awareness and assessment of these dangerous areas to rescue personnel, firefighters, and police officers. The research involves the integration of four systems with our drone, each capable of tackling situations the drone can be in.

  10. The role of drones in disaster response: A literature review of

    Several potential applications of drones in the context of response operations can be listed as monitoring, enhancing situational awareness, enabling search and rescue (SAR) operations, conducting damage assessment, providing a standalone mobile communication network, and delivering first aid supplies (FSD, 2017).

  11. Search and rescue with autonomous flying robots through behavior-based

    This research combines behavior-based artificial intelligence, swarm intelligence, pattern search theory, and existing disaster data into a theory of improved search and rescue through the use of autonomous flying robots, also called drones, Unmanned Aerial Vehicles (UAV), or Unmanned Aerial Systems (UAS).

  12. The Application of Unmanned Aerial Systems in Search and Rescue ...

    Research areas may include (but are not limited to) the following: Special unmanned aerial systems dedicated for search and rescue; Use of consumer-grade drones in search and rescue; Close-range photogrammetry in search and rescue; Algorithms for person detection and tracking; Special software for search and rescue with drones;

  13. How Drones Are Revolutionizing Search and Rescue

    The ability of drones to access hard-to-reach areas quickly and safely has revolutionized how search and rescue teams operate. They can cover vast expanses of terrain in a fraction of the time it would take ground-based teams, providing real-time aerial imagery and data that is crucial for coordinating rescue efforts.

  14. PDF Challenges Arising in a Multi-Drone System for Search and Rescue

    Fig. 1. User interface of the drone swarm search and rescue prototype. In the HERD1 research project, we seek to create a prototype for semi-autonomous multi-drone systems. This prototype will enable end-users to interact with drone swarms and provide control of the swarm without overwhelming the operator with moment-by-moment decision-making.

  15. Drones to the rescue? Exploring rescue workers behavioral intention to

    This paper examines the determinants that drive the behavioral intention of mountain rescuers to adopt drones in rescue missions. Design/methodology/approach: This is a behavioral study that builds upon an extended model of the unified theory of acceptance and use of technology (UTAUT) and investigates the relationship between individual ...

  16. Full article: Global perspectives on unmanned aerial vehicles

    The number of research papers published in each social science field. ... (Citation 2023) call for more research into how drone usage affects data accuracy, ethical considerations, and the development of new analytical methods. Similarly, ... Van C. 2017. First report of using portable unmanned aircraft systems (drones) for search and rescue.

  17. Cooperative Search and Rescue with Drone Swarm

    Our research focuses on using UAVs in a swarm to perform SAR missions with the objective of rescuing missing individuals in distress. The main contribution of this paper is the development of a new approach that enables the utilization of drone swarms in search and rescue operations when a probability map of the region is provided in advance.
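The probability-map idea in this entry can be illustrated with a toy greedy planner in which each drone in the swarm repeatedly claims the highest-probability unvisited cell of a prior probability grid. This is a hedged sketch under assumed data structures (a list-of-lists grid, one path per drone), not the cited paper's method, and it ignores travel distance, which a real planner would weigh against cell probability:

```python
from typing import Dict, List, Tuple

def greedy_swarm_search(prob_map: List[List[float]], n_drones: int,
                        steps: int) -> List[List[Tuple[int, int]]]:
    """Assign each drone a sequence of grid cells to visit.

    At every step, each drone in turn claims the unvisited cell with the
    highest prior probability of containing the missing person.
    Returns one path (list of (row, col) cells) per drone.
    """
    # Remaining unvisited cells, mapped to their prior probability.
    remaining: Dict[Tuple[int, int], float] = {
        (r, c): p for r, row in enumerate(prob_map) for c, p in enumerate(row)
    }
    paths: List[List[Tuple[int, int]]] = [[] for _ in range(n_drones)]
    for _ in range(steps):
        for path in paths:
            if not remaining:          # whole map covered
                return paths
            cell = max(remaining, key=remaining.get)
            path.append(cell)
            del remaining[cell]        # no two drones search the same cell
    return paths

# Example: two drones, one step each, on a 2x2 probability map.
print(greedy_swarm_search([[0.1, 0.9], [0.5, 0.2]], n_drones=2, steps=1))
```

With the example map, the first drone takes the 0.9 cell and the second the 0.5 cell, so high-probability regions are cleared first — the basic benefit a probability prior gives a swarm over uniform sweeping.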

  18. IoT-based Autonomous Search and Rescue Drone for Precision Firefighting

    The research involves the integration of four systems with our drone, each capable of tackling situations the drone can be in. As the recognition of civilians and protecting them is a key aspect of disaster management, our first system (i.e., Enhanced Human Identification System) to detect trapped victims and provide rescue personnel the ...

  19. Managing the drone revolution: A systematic literature review into the

    Research into the use of flying ad-hoc networks to monitor and manage deviant drone behaviour (Bahloul et al., 2017; Barka et al., 2018, Karthikeyan and Vadivel, 2019) are in progress, as are geofencing (Boselli et al., 2017) and signal jamming (Chowdhury et al., 2017) that act on the navigation systems within drones to prevent drone incursion ...

  20. (PDF) Drones to the rescue? Exploring rescue workers' behavioral

    This paper examines the determinants that drive the behavioral intention of mountain rescuers to adopt drones in rescue missions. Design/methodology/approach: This is a behavioral study that builds ...

  21. Managing the drone revolution: A systematic literature review into the

    Initial screening results showed that for a substantial portion of the papers, drones are not the core focus of the paper and are merely an enabling device ... Stopforth R. 2018. Semi-Autonomous Robot Control System with an Improved 3D Vision Scheme for Search and Rescue Missions. A Joint Research Collaboration between South Africa and ...

  22. Lifeguards in the sky: Examining the public acceptance of beach-rescue

    The research review by the authors suggests that the public acceptance of drones will depend on the usage context, with most acceptance for drones expressed in relation to security-related applications -such as crime detection and investigation, national security defense, and emergency search and rescue [[36], [37], [38]].

  23. Drones reduce the treatment-free interval in search and rescue

    1 Institute of Mountain Emergency Medicine, Eurac Research, Bolzano, Italy; Department of Sport Science, Medical Section, University of Innsbruck, Innsbruck, Austria. ... We aim to assess the effects of using drones in search and rescue (SAR) operations in challenging terrain. We hypothesize that drones can reduce the search time and treatment ...