
CUAS Videos

The following videos demonstrate specific aspects of the research performed within the CUAS project.

The video shows experimentation with our delegation and planning framework. A ground operator interacts with two micro-aerial vehicles to surveil an area by delegating survey cells to the individual vehicles. A distributed temporal planner then determines the task and motion plans required by the vehicles, in interaction with the ongoing delegation process. The mission is autonomous from take-off to landing and uses a Vicon real-time motion tracking system for localization. The video is related to CUAS topics 1-4.
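As a rough illustration of the delegation step, the sketch below splits survey cells between two vehicles using a simple proximity heuristic. The data structures and the greedy assignment are purely illustrative; the actual framework negotiates delegations through the distributed temporal planner.

```python
# Illustrative only: greedy assignment of survey cells to the nearest vehicle.
from math import hypot

def delegate_cells(cells, vehicles):
    """Assign each survey cell (x, y) to the closest vehicle."""
    assignment = {v["id"]: [] for v in vehicles}
    for cx, cy in cells:
        nearest = min(vehicles, key=lambda v: hypot(cx - v["x"], cy - v["y"]))
        assignment[nearest["id"]].append((cx, cy))
    return assignment

vehicles = [{"id": "mav1", "x": 0.0, "y": 0.0},
            {"id": "mav2", "x": 10.0, "y": 0.0}]
cells = [(1, 1), (2, 3), (8, 2), (9, 4)]
print(delegate_cells(cells, vehicles))  # {'mav1': [(1, 1), (2, 3)], 'mav2': [(8, 2), (9, 4)]}
```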

The video shows experimentation with simultaneous motion planning and obstacle avoidance. It depicts real-time plan repair based on the dynamic creation of no-fly zones, such as when a person walks into the operational area. The system continually replans motion paths using a 3D map built from depth-camera input. The flights are autonomous and use a Vicon real-time motion tracking system for localization. The video is related to CUAS topic 5.
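The following sketch illustrates the general idea of plan repair rather than the project's actual planner: when a newly created no-fly zone conflicts with the current path, a replacement path is searched on an occupancy grid. The real system replans in 3D using the map built from the depth camera.

```python
# Illustrative plan-repair sketch on a 2D grid (1 marks blocked cells).
from collections import deque

def path_blocked(path, no_fly):
    """True if any waypoint falls inside a circular no-fly zone (cx, cy, r)."""
    cx, cy, r = no_fly
    return any((x - cx) ** 2 + (y - cy) ** 2 <= r ** 2 for x, y in path)

def replan(grid, start, goal):
    """Breadth-first search for a new collision-free path."""
    rows, cols = len(grid), len(grid[0])
    queue, parents = deque([start]), {start: None}
    while queue:
        cur = queue.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = parents[cur]
            return path[::-1]
        x, y = cur
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < rows and 0 <= ny < cols and grid[nx][ny] == 0 \
                    and (nx, ny) not in parents:
                parents[(nx, ny)] = cur
                queue.append((nx, ny))
    return None  # no feasible path found
```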

The video shows experimentation with virtual leashing of a micro-aerial vehicle to a person. The idea is that the micro-aerial vehicle passively follows an emergency rescuer around until it is needed to actively execute mission tasks required by the rescuer. Initial experimentation has been done indoors. The flight is autonomous and uses a Vicon real-time motion tracking system for localization. The video is related to CUAS topics 5 and 7.
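A minimal sketch of the leashing behaviour, assuming person poses from the motion capture system, is to keep the vehicle at a fixed stand-off distance behind the person; the function and parameter names below are hypothetical.

```python
# Illustrative leashing setpoint: hover a fixed distance behind the person.
import math

def leash_setpoint(person_xy, person_heading, standoff=3.0, altitude=2.0):
    """Return a hover setpoint `standoff` metres behind the tracked person."""
    px, py = person_xy
    x = px - standoff * math.cos(person_heading)
    y = py - standoff * math.sin(person_heading)
    return (x, y, altitude)

print(leash_setpoint((5.0, 2.0), math.radians(90)))
```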

The video shows experimentation with interface functionality using video see-through technology. Micro-aerial vehicles are difficult to see when flying at a distance. The idea here is to use augmented reality with iPad-like devices to enhance a ground operator's view of the mission environment. The Vicon real-time motion tracking system is used for localizing the tablet, the operator's head, and the UAV. The video is related to CUAS topic 6.
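The core of the augmentation step can be sketched as projecting the UAV's tracked world position into the tablet camera image so that an overlay marker can be drawn on the live video. The intrinsics and pose conventions below are illustrative assumptions, not the system's actual calibration.

```python
# Illustrative pinhole projection of the UAV's world position into the tablet image.
import numpy as np

def project_uav(uav_world, cam_R, cam_t, fx=800, fy=800, cx=640, cy=360):
    """Project a 3D world point into pixel coordinates (u, v)."""
    p_cam = cam_R @ (np.asarray(uav_world) - np.asarray(cam_t))
    if p_cam[2] <= 0:          # behind the camera, nothing to draw
        return None
    u = fx * p_cam[0] / p_cam[2] + cx
    v = fy * p_cam[1] / p_cam[2] + cy
    return (u, v)
```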

The video shows an example mission using the video see-through technology to control a small-scale UAV platform.

The video shows experimentation with visual detection and tracking both indoors and outdoors. A combination of image tracking algorithms augmented with probabilistic modelling of kinematics with PHD filters is used to make the task of target re-identification more robust. The video is related to CUAS topic 7.
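As a much-simplified illustration of the kinematic modelling, the sketch below shows a constant-velocity prediction step for a single target; the project itself uses multi-target PHD filters rather than this single-target Kalman-style prediction.

```python
# Simplified constant-velocity prediction for a tracked target (x, y, vx, vy).
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # constant-velocity motion model
Q = 0.01 * np.eye(4)                        # process noise covariance

def predict(state, cov):
    """Predict the target state and covariance one time step ahead."""
    return F @ state, F @ cov @ F.T + Q

state, cov = predict(np.array([0.0, 0.0, 1.0, 0.5]), np.eye(4))
```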

The video shows a mission in which the RMAX helicopter equipped with a laser range finder is used to generate pointcloud information about a selected region.
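In simplified form, generating the pointcloud amounts to transforming each laser range reading into the world frame using the helicopter's pose; the sketch below reduces this to a planar scan at a known altitude and is illustrative only.

```python
# Illustrative conversion of laser range readings to world-frame points.
import math

def scan_to_points(ranges, angles, pose):
    """pose = (x, y, z, yaw); returns world-frame (x, y, z) points."""
    x0, y0, z0, yaw = pose
    points = []
    for r, a in zip(ranges, angles):
        points.append((x0 + r * math.cos(yaw + a),
                       y0 + r * math.sin(yaw + a),
                       z0))
    return points
```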

The video shows the first part of a complex mission executed using two heterogeneous UAV platforms with different sensor capabilities, where the goal is to generate a 3D model of the environment, identify building structures, and use the acquired information to survey the buildings' facades. A ground operator selects a region to be scanned, and a mission is generated and executed. During mission execution, a 3D occupancy grid representation is built from pointcloud data. At the end of the mission, convex hulls around the building structures are automatically identified through a clustering algorithm. This information is then used in the second part of the mission shown below.
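A rough sketch of this post-processing step is shown below: occupied cells are clustered and a convex hull is computed per cluster. The specific library choices (DBSCAN and SciPy's ConvexHull) are assumptions for illustration, not necessarily what the system uses.

```python
# Illustrative clustering of occupied cells and per-cluster convex hulls.
import numpy as np
from sklearn.cluster import DBSCAN
from scipy.spatial import ConvexHull

def building_hulls(occupied_xy, eps=1.0, min_samples=10):
    """Cluster occupied 2D cells and return the hull vertices per cluster."""
    pts = np.asarray(occupied_xy)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
    hulls = []
    for label in set(labels) - {-1}:          # -1 marks noise points
        cluster = pts[labels == label]
        hulls.append(cluster[ConvexHull(cluster).vertices])
    return hulls
```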

The video shows the second leg of the mission, where a LinkQuad platform is deployed to collect information about the facades of the building structures previously identified by the RMAX system. The LinkQuad uses the 3D occupancy grid generated by the RMAX to generate its own motion plans around the structures, as well as to calculate optimal positions to fly to in order to cover the whole area of each facade with its camera sensor.
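The coverage computation can be sketched as placing viewpoints along each facade at a stand-off distance, spaced so that successive camera footprints overlap. The geometry and parameter values below are illustrative assumptions.

```python
# Illustrative facade coverage: evenly spaced stand-off viewpoints facing the wall.
import math

def facade_viewpoints(p0, p1, standoff=5.0, hfov=math.radians(60), overlap=0.2):
    """Return (x, y, heading) viewpoints covering the facade from p0 to p1."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    length = math.hypot(dx, dy)
    ux, uy = dx / length, dy / length          # along-facade direction
    nx, ny = -uy, ux                           # outward facade normal
    footprint = 2 * standoff * math.tan(hfov / 2) * (1 - overlap)
    n = max(1, math.ceil(length / footprint))
    views = []
    for i in range(n):
        s = (i + 0.5) * length / n
        x = p0[0] + s * ux + standoff * nx
        y = p0[1] + s * uy + standoff * ny
        views.append((x, y, math.atan2(-ny, -nx)))   # heading faces the facade
    return views
```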

This video shows a mission similar to the one described above. Here, the mission takes place in an indoor environment and takes advantage of a motion capture system. Two LinkQuad platforms with different sensor configurations are used: one is equipped with a laser range finder and the other with a depth camera. The video shows the generation of the pointcloud, the 3D occupancy grid, and the convex hull around the building structure, which is automatically identified through a clustering algorithm.

The video shows the second leg of the mission, where the other LinkQuad platform, equipped with a depth camera, is deployed to collect additional information about the facades of the identified building structure. The second LinkQuad uses the 3D occupancy grid and the identified building structure information generated by the first platform to calculate its own motion plans around the structure and to compute optimal positions to fly to in order to cover the whole area of each facade.

A summary of the above videos in table form is available here: table view

Page responsible: Patrick Doherty
Last updated: 2020-05-12