Research Activities

Urban Search & Rescue Robotics: Through my experience in the NIFTI and TRADR projects, I gained a deep understanding of the main problems that arise when robots operate alongside humans in rescue missions. Working closely with firefighters, I learned which autonomous capabilities a robot needs in order to support rescue responders, and I used this experience to increase the level of autonomy of a rescue robot. In July 2012, after the earthquake in Mirandola, in the Emilia-Romagna region of Northern Italy, I also helped deploy a team of humans and robots to assess damage to historical buildings and to the cultural artifacts located therein.

Cognitive Robotics: I worked on a new approach to robot cognitive control based on modeling robot stimuli, stimulus-response associations, and the resulting task switching or stimulus inhibition. The proposed framework contributes to the state of the art in robot planning and high-level control, as it provides a novel view of robot behaviors together with a complete implementation on an advanced robotic platform.
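The stimulus-driven control loop described above can be illustrated with a minimal sketch. The class and threshold below are hypothetical, not the actual framework implementation: an incoming stimulus is either inhibited, triggers a task switch, or is ignored in favor of the current task.

```python
# Minimal sketch of stimulus-driven task switching (hypothetical names,
# not the actual framework's implementation).

class CognitiveController:
    def __init__(self, inhibition_threshold):
        # Stimuli below this priority are inhibited rather than acted on.
        self.inhibition_threshold = inhibition_threshold
        self.current_task = None
        self.current_priority = -1.0

    def on_stimulus(self, task, priority):
        """Return the action taken: 'switch', 'inhibit', or 'ignore'."""
        if priority < self.inhibition_threshold:
            return "inhibit"          # stimulus suppressed
        if priority > self.current_priority:
            self.current_task = task  # task switching
            self.current_priority = priority
            return "switch"
        return "ignore"               # keep executing the current task
```

A higher-priority stimulus (e.g., detecting a victim) preempts the current task, while low-priority noise is suppressed before it reaches the task level.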

High-Level Robot Control: I developed a robotic system that employs high-level control to operate in a real-world setting whose main task is human-assisted exploration of an environment. In this system, I integrate multi-modal perception from vision and mapping with model-based executive control. Action planning is performed on a high-level representation of the environment, obtained through topological segmentation of the metric map together with object detection and 3D localization in the map. This work contributes an effective method to build a logical representation of the robot's low-level perception, compile that perception into knowledge, and integrate this knowledge with model-based executive control. The overall system enables the robot to infer strategies, generating parametric plans that are instantiated directly from perception.
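The step of compiling perception into knowledge can be sketched as grounding 3D object detections into the topological segments that contain them, yielding symbolic facts a planner can consume. All names below (Detection, Area, compile_facts) are illustrative assumptions, not the system's actual API.

```python
# Hedged sketch: compiling low-level perception into symbolic facts for a
# model-based planner. Names and the fact syntax are illustrative.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # e.g. "door"
    x: float          # position in the metric map (z omitted for brevity)
    y: float

@dataclass
class Area:
    name: str         # topological segment, e.g. "corridor_1"
    x_min: float
    x_max: float
    y_min: float
    y_max: float

    def contains(self, d: Detection) -> bool:
        return (self.x_min <= d.x <= self.x_max
                and self.y_min <= d.y <= self.y_max)

def compile_facts(areas, detections):
    """Ground each detection into the topological area containing it."""
    facts = []
    for d in detections:
        for a in areas:
            if a.contains(d):
                facts.append(f"(in {d.label} {a.name})")
    return facts
```

Facts of this form parameterize plan templates, so plans are instantiated directly from what the robot currently perceives.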

Human-Robot Interaction and Collaboration: I described a novel framework to learn the actions, intentions, and plans of a firefighter executing a rescue task in a training car-accident scenario. The framework is based on a model of human-robot collaboration that interprets the collaboration as a learning process: the firefighter prompts behaviors, and the apprentice collects information from him to learn a plan. A novel element is the use of the Gaze Machine, a wearable device that gathers and conveys visual and audio input from the firefighter while he executes a task. In this work I described the process through which the information delivered by the Gaze Machine is transformed into plans.

Mapping and 3D Path Planning: I developed a framework for real-time 3D motion planning and control of tracked robots, enabling autonomous navigation in harsh environments. The framework is based on a semantic representation of the environment, which is used to plan feasible 3D paths toward a target position. The physical execution of the path is delegated to a decoupled controller, responsible for both generating velocity commands and adapting the robot morphology during the tracking task. I extended the controller with a statistical model assessing the contact of the robot's articulated mechanical components with the traversed surfaces. This model is used both to correct the estimation of the robot morphology and to ensure better traction on harsh terrain. I also worked on enriching the semantic representation of the environment with traversability information, and I developed a path planning algorithm that also takes dynamic obstacles into account.
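Planning over a traversability-annotated representation can be sketched, under simplifying assumptions, as a graph search on a grid whose cell costs encode terrain difficulty. This toy version uses Dijkstra on a 2D grid rather than the framework's semantic 3D representation; `None` marks non-traversable cells.

```python
# Illustrative sketch: path planning over a traversability grid.
# grid[r][c] is a traversal cost (float), or None if impassable.
import heapq

def plan_path(grid, start, goal):
    """Return the minimum-cost list of cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    parent = {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            path = [cell]
            while cell in parent:          # walk back to the start
                cell = parent[cell]
                path.append(cell)
            return path[::-1]
        if d > dist[cell]:
            continue                        # stale queue entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] is not None:
                nd = d + grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    parent[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    return None  # no traversable path exists
```

Dynamic obstacles can be handled in this scheme by marking the affected cells as non-traversable and replanning.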

3D Motion Planning and Control: I proposed a new approach to designing an adaptive, robust 3D trajectory tracking controller. The controller is modeled for actively articulated tracked vehicles (AATVs). These vehicles have active sub-tracks, called flippers, linked to the ends of the main tracks, which extend their locomotion capabilities in hazardous environments such as rescue scenarios. The proposed controller adapts the flipper configuration and simultaneously generates the track velocities, allowing the vehicle to autonomously follow a given feasible 3D path. The approach develops both a direct and a differential kinematic model of the AATV for traversal task execution, relating the motion of the robot body to the motion of the flippers. The differential kinematic model incorporates a differential-drive robot model that compensates for slippage between the vehicle tracks and the traversed terrain. The underlying feedback control law dynamically accounts for the kinematic singularities of the vehicle's mechanical structure. The designed controller also integrates a strategy selector, whose role is to locally modify the rail path of the flipper end points.
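The slip-compensated differential-drive component can be illustrated with a minimal planar sketch. Parameter names and the simple per-track slip factors are assumptions for illustration; the actual AATV model in the work is considerably richer (3D, flipper kinematics, singularity handling).

```python
# Hedged sketch of a slip-compensated differential-drive kinematic model.
import math

def track_to_body_velocity(v_left, v_right, track_width,
                           slip_left=0.0, slip_right=0.0):
    """Map track speeds to body (linear, angular) velocity, scaling each
    track's effective speed by (1 - slip) to compensate for slippage."""
    v_l = v_left * (1.0 - slip_left)
    v_r = v_right * (1.0 - slip_right)
    v = (v_r + v_l) / 2.0               # linear velocity
    omega = (v_r - v_l) / track_width   # angular velocity
    return v, omega

def integrate_pose(x, y, theta, v, omega, dt):
    """First-order integration of the planar unicycle kinematics."""
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)
```

With unequal slip the same commanded track speeds yield a net rotation, which is exactly the effect a feedback tracking controller must cancel to keep the vehicle on the planned path.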

Augmented Reality and Simulation: I developed an AR-based simulation framework that allows robot developers to build, online, an Augmented Reality Environment (ARE) for real robots, integrated into the visualization interface of the Robot Operating System (ROS). The proposed system goes beyond an interface for drawing objects: its design exploits a stochastic model that activates the behaviors of the introduced objects. The framework also builds a world model representation that serves as ground truth for training and validating algorithms for vision, motion planning, and control.
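The stochastic activation of object behaviors can be sketched as each virtual object firing its behavior with some probability per simulation tick. The class and field names below are hypothetical, not the framework's API.

```python
# Minimal sketch of stochastic behavior activation for simulated AR objects
# (class and field names are hypothetical).
import random

class ARObject:
    def __init__(self, name, activation_prob, behavior):
        self.name = name
        self.activation_prob = activation_prob  # chance of firing per tick
        self.behavior = behavior                # callable side effect

    def tick(self, rng):
        """Activate the object's behavior with the configured probability."""
        if rng.random() < self.activation_prob:
            self.behavior(self)
            return True
        return False
```

Because activations are driven by a seeded random source, the same augmented scene can be replayed deterministically when validating vision or planning algorithms against the generated ground truth.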

Heterogeneous Multi-Robot Coverage: We developed a framework that allows a team of heterogeneous robots to cooperate in order to accomplish a coverage task. The framework comprises three main components. The first component is responsible for generating, in real time, the next best position in the environment to be reached by each robot. These positions are generated according to a search strategy based on the well-known Real-time A* algorithm, extended to robots with different capabilities (e.g., UAVs and UGVs). The second component manages both the robots' local perception and its merging: perception involves 3D point cloud acquisition, registration, segmentation, and, finally, traversability analysis of the robot's surroundings. The third component handles the motions of the robots. The framework embeds the dynamics of the robots, modeled by the physics engine of V-REP, together with the perceptual and motion functionalities of the robots, developed under ROS. Finally, the framework includes a strategy for locally solving conflicts among robots, as well as a functionality for robot rendezvous.
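The next-best-position step can be illustrated with the classic Real-time A* (LRTA*-style) move: a one-step lookahead on f = cost + h, updating the heuristic of the current cell before committing. The function below is a generic sketch; the capability-aware costs for UAVs vs. UGVs would enter through the `cost` callable.

```python
# Sketch of a Real-time A* step for picking a robot's next best position.
# `neighbors` and `cost` are problem-specific callables; `h` is a mutable
# heuristic table updated as the robot moves (learning real-time A*).
def rta_star_step(current, neighbors, cost, h):
    """Pick the neighbor minimizing cost + h; update h[current]."""
    scored = [(cost(current, n) + h.get(n, 0.0), n) for n in neighbors(current)]
    scored.sort()
    best_f, best = scored[0]
    # Learning update: reaching the goal through `current` costs at least
    # as much as the best one-step estimate, so raise its heuristic.
    h[current] = max(h.get(current, 0.0), best_f)
    return best
```

Raising h on visited cells is what prevents the robot from oscillating between already-explored positions, which matters when several robots share and update the same coverage frontier.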
