The following table contains videos from the CUAS project.

Note: Flights are autonomous. While some videos show human operators with remote controls, these are backup operators who stand ready to take over in case of problems during experimentation.

Topics

Video

Description

Topic 1-4

VID1

This video shows experimentation with our delegation and planning framework. A ground operator interacts with two micro-aerial vehicles to surveil an area, delegating individual cells of the area to each vehicle. A distributed temporal planner then determines the task and motion plans required by the vehicles, in interaction with the ongoing delegation process. The mission is autonomous from take-off to landing and uses a Vicon real-time motion tracking system for localization.
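
The sketch below is a minimal, hypothetical illustration of the cell-delegation step only, not of the distributed temporal planner: a rectangular survey area is divided into cells and each cell is greedily delegated to the nearest of two vehicles. All function names and parameter values are assumptions made for the example.

    # Hypothetical sketch: grid the survey area into cells and delegate each
    # cell to the closest of two micro-aerial vehicles (greedy assignment).
    from itertools import product

    def make_cells(width, height, cell_size):
        """Return the centre points of a grid of cells covering the area."""
        nx, ny = int(width // cell_size), int(height // cell_size)
        return [((i + 0.5) * cell_size, (j + 0.5) * cell_size)
                for i, j in product(range(nx), range(ny))]

    def delegate_cells(cells, vehicle_positions):
        """Greedy delegation: each cell goes to the currently closest vehicle."""
        assignment = {v: [] for v in range(len(vehicle_positions))}
        for cell in cells:
            dists = [(cell[0] - vx) ** 2 + (cell[1] - vy) ** 2
                     for (vx, vy) in vehicle_positions]
            assignment[dists.index(min(dists))].append(cell)
        return assignment

    cells = make_cells(20.0, 10.0, 5.0)                      # 4 x 2 cells
    plan = delegate_cells(cells, [(0.0, 0.0), (20.0, 10.0)])
    for mav, assigned in plan.items():
        print(f"MAV {mav} is delegated cells: {assigned}")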

Topic 5

VID2

This video shows experimentation with simultaneous motion planning and obstacle avoidance. It depicts real-time plan repair based on the dynamic creation of no-fly zones, for example when a person walks into the operational area. The system continually replans motion paths using a 3D map built from depth-camera input. The flights are autonomous and use a Vicon real-time motion tracking system for localization.
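
As a hedged illustration of the plan-repair trigger only (not the project's planner), the minimal sketch below checks whether a newly created circular no-fly zone intersects the remaining path and, if so, invokes a replanning callback; all names are hypothetical.

    # Hypothetical sketch: repair the current path when a dynamically created
    # no-fly zone (here a 2D circle) intersects one of its segments.
    import math

    def segment_intersects_circle(p, q, centre, radius):
        """True if the segment p->q passes within `radius` of `centre`."""
        px, py = p; qx, qy = q; cx, cy = centre
        dx, dy = qx - px, qy - py
        if dx == 0 and dy == 0:
            return math.hypot(px - cx, py - cy) <= radius
        t = ((cx - px) * dx + (cy - py) * dy) / (dx * dx + dy * dy)
        t = max(0.0, min(1.0, t))                  # clamp to the segment
        nearest = (px + t * dx, py + t * dy)
        return math.hypot(nearest[0] - cx, nearest[1] - cy) <= radius

    def repair_if_needed(path, zone_centre, zone_radius, replan):
        """Replan as soon as any remaining path segment enters the zone."""
        for a, b in zip(path, path[1:]):
            if segment_intersects_circle(a, b, zone_centre, zone_radius):
                return replan(path, zone_centre, zone_radius)
        return path                                # current plan is still valid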

Topic 5, Topic 7

VID3

This video shows experimentation with virtual leashing of a micro-aerial vehicle to a person. The idea is that the micro-aerial vehicle passively follows an emergency rescuer until it is needed to actively execute mission tasks for the rescuer. Initial experimentation has been done indoors. The flight is autonomous and uses a Vicon real-time motion tracking system for localization.
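
A possible reading of the leash behaviour, under assumed values and with hypothetical names, is sketched below: the vehicle holds position while the virtual leash is slack and only moves back onto the leash circle when the person walks far enough away.

    # Hypothetical sketch of a virtual-leash setpoint: stay put while the
    # leash is slack, otherwise move to the closest point at leash distance.
    import math

    LEASH_LENGTH = 3.0   # assumed standoff distance in metres

    def leash_setpoint(mav_xy, person_xy, leash=LEASH_LENGTH):
        """Return the position the MAV should move towards."""
        dx = mav_xy[0] - person_xy[0]
        dy = mav_xy[1] - person_xy[1]
        dist = math.hypot(dx, dy)
        if dist <= leash:
            return mav_xy                          # leash slack: hover in place
        scale = leash / dist                       # leash taut: close the gap
        return (person_xy[0] + dx * scale, person_xy[1] + dy * scale)

    print(leash_setpoint((10.0, 0.0), (0.0, 0.0)))  # -> (3.0, 0.0)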

Topic 6

VID4

This video shows experimentation with interface functionality based on video see-through technology. Micro-aerial vehicles are difficult to see when flying at a distance. The idea is to use augmented reality on iPad-like devices to enhance a ground operator's view of the mission environment. The Vicon real-time motion tracking system is used to localize the tablet, the operator's head, and the UAV; a minimal projection sketch based on these poses is given below.

VID4b

This video shows an example mission using the video see-through technology to control a small-scale UAV platform.
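
As a hedged illustration of the see-through overlay idea in the two videos above: given the Vicon pose of the tablet and the world position of the UAV, the UAV can be projected into the tablet image so a marker can be drawn over it. The pinhole camera model, the intrinsics, and all names below are assumptions, not the project's implementation.

    # Hypothetical sketch: project a world-frame UAV position into the tablet
    # camera image using an assumed pinhole model and Vicon-derived poses.
    import numpy as np

    def project_uav(uav_pos_w, cam_R_w, cam_t_w, fx, fy, cx, cy):
        """Return (u, v) pixel coordinates, or None if the UAV is behind the camera.

        cam_R_w, cam_t_w: rotation and translation taking world points into the
        tablet camera frame (derived from the tablet's Vicon pose).
        fx, fy, cx, cy: assumed intrinsics of the tablet camera.
        """
        p_cam = np.asarray(cam_R_w) @ np.asarray(uav_pos_w) + np.asarray(cam_t_w)
        if p_cam[2] <= 0:
            return None                            # behind the image plane
        u = fx * p_cam[0] / p_cam[2] + cx
        v = fy * p_cam[1] / p_cam[2] + cy
        return (u, v)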

Topic 7

VID5

This video shows experimentation with visual detection and tracking, both indoors and outdoors. Image tracking algorithms are combined with probabilistic modelling of target kinematics using PHD filters to make target re-identification more robust.
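
A full PHD filter is too involved for a short example, so the hypothetical sketch below only illustrates the kinematic part of the idea: each track is predicted forward with a constant-velocity model and new detections are gated against the prediction, which is what helps re-identify a target after the image tracker loses it.

    # Hypothetical sketch: constant-velocity prediction and distance gating,
    # a simplified stand-in for the probabilistic kinematic modelling.
    import math

    def predict(track, dt):
        """Predict a track {x, y, vx, vy} forward by dt seconds."""
        return {"x": track["x"] + track["vx"] * dt,
                "y": track["y"] + track["vy"] * dt,
                "vx": track["vx"], "vy": track["vy"]}

    def gate(predicted, detections, gate_radius):
        """Keep only detections close enough to plausibly be the same target."""
        return [d for d in detections
                if math.hypot(d[0] - predicted["x"],
                              d[1] - predicted["y"]) <= gate_radius]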

Example mission 1
(RMAX+LinkQuad)

VID6

This video shows a mission in which the RMAX helicopter, equipped with a laser range finder, is used to generate point-cloud information about a selected region.

VID7

VID7 (speed 3x)

This video shows the first part of a complex mission executed using two heterogeneous UAV platforms with different sensor capabilities. The goal is to generate a 3D model of the environment, identify building structures, and use the acquired information to survey the buildings' facades. A ground operator selects a region to be scanned, and a mission is generated and executed. During mission execution, a 3D occupancy grid representation is built from point-cloud data. At the end of the mission, convex hulls around the building structures are automatically identified through a clustering algorithm; a minimal sketch of this step is given below, after the VID8 description. This information is then used in the second part of the mission, shown in the next video.

VID8

VID8 (speed 3x)

This video shows the second leg of the mission, in which a LinkQuad platform is deployed to collect information about the facades of the building structures previously identified by the RMAX system. The LinkQuad uses the 3D occupancy grid generated by the RMAX to compute its own motion plans around the structures, as well as to calculate optimal positions to fly to in order to cover the whole area of each facade with its camera sensor.
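
The first sketch below is a hedged illustration of the clustering and convex-hull step mentioned for VID7: occupied cells from a 2D projection of the occupancy grid are clustered with an off-the-shelf DBSCAN, and a convex hull is computed around each cluster. Parameter values are assumptions, and this is not the project's implementation.

    # Hypothetical sketch: cluster occupied grid cells (N x 2 numpy array of
    # x, y coordinates) and return one convex hull per building-sized cluster.
    # Requires numpy, scikit-learn and scipy.
    import numpy as np
    from sklearn.cluster import DBSCAN
    from scipy.spatial import ConvexHull

    def building_hulls(occupied_xy, cell_size=0.5, min_cells=20):
        """Return a list of hull corner-point arrays, one per cluster."""
        labels = DBSCAN(eps=2 * cell_size,
                        min_samples=min_cells).fit_predict(occupied_xy)
        hulls = []
        for label in set(labels) - {-1}:            # -1 marks noise points
            cluster = occupied_xy[labels == label]
            hull = ConvexHull(cluster)
            hulls.append(cluster[hull.vertices])    # hull corners, in order
        return hulls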
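
The second sketch illustrates, under assumed camera parameters, how standoff viewpoints covering one facade might be computed from a facade segment taken from such a hull; again, all names and values are hypothetical.

    # Hypothetical sketch: place camera viewpoints at a fixed standoff in
    # front of a facade segment so the horizontal fields of view overlap.
    import math

    def facade_viewpoints(p0, p1, standoff, hfov_deg, overlap=0.2):
        """Return (x, y, yaw) viewpoints in front of the facade p0 -> p1."""
        dx, dy = p1[0] - p0[0], p1[1] - p0[1]
        length = math.hypot(dx, dy)
        ux, uy = dx / length, dy / length           # unit vector along the facade
        nx, ny = -uy, ux                            # facade normal (assumed side)
        footprint = 2 * standoff * math.tan(math.radians(hfov_deg) / 2)
        step = footprint * (1 - overlap)            # spacing that keeps overlap
        n_views = max(1, math.ceil(length / step))
        yaw = math.atan2(-ny, -nx)                  # face back towards the facade
        return [(p0[0] + ux * (i + 0.5) * length / n_views + nx * standoff,
                 p0[1] + uy * (i + 0.5) * length / n_views + ny * standoff,
                 yaw)
                for i in range(n_views)]

    print(facade_viewpoints((0.0, 0.0), (12.0, 0.0), standoff=4.0, hfov_deg=60.0))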

Example mission 2
(Two LinkQuads)

VID9

This video shows a mission similar to the one described above. Here, the mission takes place in an indoor environment and takes advantage of a motion capture system. Two LinkQuad platforms with different sensor configurations are used: one is equipped with a laser range finder, the other with a depth camera. The video shows the generation of the point cloud, the 3D occupancy grid, and the convex hull around the building structure, which is automatically identified through a clustering algorithm.

VID10

This video shows the second leg of the mission, in which the other LinkQuad platform, equipped with a depth camera, is deployed to collect additional information about the facades of the previously identified building structure. The second LinkQuad uses the 3D occupancy grid and the building structure information generated by the first platform to calculate its own motion plans around the structure and to compute optimal positions to fly to in order to cover the whole area of each facade.