Autonomous aerial robots supervise mobile robots on the ground

Research article by Netcetera expert Nithin Mathews

Self-assembling robots have the potential to undergo autonomous morphological adaptation. However, due to the simplicity of their hardware and their limited perspective of the environment, they are often unable to reach their full potential. Without external cues or prior information, they may not be able to adapt their collective robot structures (morphologies) to tasks and environments. Together with other researchers, our colleague Nithin Mathews published a research article in the journal “Robotics and Autonomous Systems”. They present a novel control methodology named “supervised morphogenesis” for heterogeneous robot groups composed of both ground-based and aerial robots. This methodology enables aerial robots to exploit their elevated position and better view of the environment to initiate and control (hence supervise) the formation of self-assembling robots on the ground. That is, aerial robots use input from onboard cameras and other dedicated sensors to build two- or three-dimensional models of the environment. These models are then used to perform onboard simulations that determine the most suitable task-dependent (i.e., target) morphologies to be formed on the ground.
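To make the idea concrete, the supervision loop can be sketched as: build a model, simulate each candidate morphology against it, and pick the best one. The sketch below is purely illustrative; the function names, the morphology representation, and the toy scoring rule are assumptions for this example, not the paper's actual algorithm.

```python
# Hypothetical sketch of the supervision loop; all names and the
# scoring function are illustrative, not taken from the paper.

def choose_target_morphology(environment_model, candidate_morphologies, simulate):
    """Pick the morphology whose simulated performance is best.

    environment_model: any model built from the aerial robot's sensors
    candidate_morphologies: iterable of morphology descriptions
    simulate: callable(model, morphology) -> score (higher is better)
    """
    return max(candidate_morphologies,
               key=lambda m: simulate(environment_model, m))

# Toy scoring rule: a hill of a given height must be crossed; a chain
# shorter than the hill fails, and unused modules are penalized.
def toy_simulate(model, morphology):
    hill = model["hill_height"]
    size = morphology["size"]
    if size < hill:          # chain too short to span the obstacle
        return -1
    return -(size - hill)    # prefer the smallest sufficient chain

model = {"hill_height": 3}
candidates = [{"size": s} for s in range(1, 7)]
best = choose_target_morphology(model, candidates, toy_simulate)
print(best)  # → {'size': 3}
```

The point of the sketch is only the structure of the decision: the aerial robot evaluates morphologies against its own environment model before committing the ground robots to one.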

In the research article “Supervised morphogenesis: exploiting morphological flexibility of self-assembling multirobot systems through cooperation with aerial robots”, Nithin and his colleagues from multiple research labs and the Fraunhofer Institute present results of two case studies using two different autonomous aerial platforms and up to six self-assembling autonomous robots. The research is a significant step towards realizing the true potential of self-assembling robots by enabling autonomous morphological adaptation to unknown tasks and environments.

Existing self-assembling robots are often pre-programmed by human operators who precisely define the scale and shape of target morphologies to be formed before deployment. Alternatively, robots rely on specific environmental cues to infer target morphologies. This is primarily because self-assembling robots tend to be relatively simple robotic units. They lack the sensory apparatus to characterize the environment with sufficient accuracy to find a suitable target morphology for a given situation.

Example morphologies that can be formed by ground-based self-assembling robots.
The aerial robots used for supervision: The commercially available AR.Drone (left) and the eye-bot (right) developed by the EPFL Laboratory of Intelligent Systems.

Supervised morphogenesis is a novel approach that enables aerial robots to assist ground-based self-assembling robots. The aerial robots have decision-making authority and extend the functionality of a group of self-assembling robots. That is, self-assembling robots rely on the aerial robot to act as an “eye-in-the-sky” and to provide the guidance required to form new morphologies fit for the task and/or the environment. A key feature of supervised morphogenesis is its high portability to other systems: it does not depend on proprietary hardware and can be implemented using standard cameras, LEDs, and wireless Ethernet-based communication available on most robotic platforms.

The robot team considered in case study no. 1, composed of one AR.Drone and six foot-bots.
The experimental setup considered in case study no. 2: Five foot-bots are initially placed in the deployment area. The light source represents the destination area. A hill obstacle that cannot be crossed by an individual foot-bot is placed between the two areas. The task requires all foot-bots to reach the destination without toppling over. Also visualized are the two positions above the hill obstacle that the eye-bot uses to build a three-dimensional model of the environment with its monocular vision system.

The paper also confirms that the presented control methodology can provide performance benefits to heterogeneous robot groups in terms of task completion times. The methodology enables aerial robots to allocate the precise number of robots needed for a target morphology by recruiting them based on their location on the ground and their mutual proximity, while freeing up the rest of the group to pursue other tasks.
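Recruitment by location and mutual proximity can be illustrated with a simple greedy selection: starting from a seed point, repeatedly recruit the free robot closest to any already-recruited position. This is a minimal sketch under assumed names and a made-up coordinate layout, not the recruitment protocol from the article.

```python
# Illustrative sketch (not the paper's algorithm): recruit exactly k
# ground robots for a target morphology, preferring robots close to a
# seed point and to each other; the rest stay free for other tasks.
import math

def recruit(robots, seed, k):
    """robots: dict name -> (x, y); returns the k names to recruit."""
    recruited = []
    anchors = [seed]          # positions already part of the morphology
    remaining = dict(robots)  # free robots still available
    for _ in range(k):
        # pick the free robot closest to any already-recruited position
        name = min(remaining, key=lambda n: min(
            math.dist(remaining[n], a) for a in anchors))
        anchors.append(remaining.pop(name))
        recruited.append(name)
    return recruited

robots = {"a": (0, 0), "b": (1, 0), "c": (5, 5), "d": (0.5, 0.5)}
print(recruit(robots, seed=(0, 0), k=2))  # → ['a', 'd']
```

Greedy selection of this kind keeps the recruited group spatially compact, which is the property the paper exploits: only as many robots as the target morphology needs are tied up, and they are the ones cheapest to assemble.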

Before joining Netcetera, Nithin was a researcher at IRIDIA – the artificial intelligence research laboratory of the Université Libre de Bruxelles in Belgium. To finalize the research presented in the article, Nithin received support from Netcetera in the form of educational days and from Wallonia-Brussels-International (WBI) through a Scholarship for Excellence grant. The article is a continuation of Nithin’s previous research on robots with “mergeable nervous systems”.
