The CUAS project is organized around seven major topic areas, each with goals and milestones framed from two perspectives: a systems research perspective and a more basic, disciplinary research perspective.
- Topic 1: Temporal and Spatial Reasoning Techniques
This topic area focuses on on-line, real-time temporal and spatial reasoning techniques for robotics. Such functionality is particularly important for developing, integrating and leveraging in real time higher-level deliberative functionalities such as planning, cooperation and situation awareness in operational UAS's.
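One classical building block of on-line temporal reasoning is the simple temporal network (STN), where constraints of the form t_j - t_i <= w are checked for consistency with an all-pairs shortest-path computation. The sketch below is purely illustrative of that general technique; the names and the mission scenario are hypothetical, not the project's actual formalism.

```python
# Illustrative sketch: a simple temporal network (STN) checked for
# consistency with Floyd-Warshall. Time points are integers 0..n-1 and
# each constraint (i, j, w) encodes t_j - t_i <= w.

INF = float("inf")

def stn_consistent(n, constraints):
    """Return (consistent, dist), where dist[i][j] is the tightest
    derivable bound on t_j - t_i."""
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for i, j, w in constraints:
        dist[i][j] = min(dist[i][j], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    # A negative entry on the diagonal means a negative cycle,
    # i.e. the constraints are mutually inconsistent.
    consistent = all(dist[i][i] >= 0 for i in range(n))
    return consistent, dist

# Hypothetical mission: takeoff (0), reach waypoint (1), land (2).
ok, d = stn_consistent(3, [
    (0, 1, 30),   # waypoint reached at most 30 s after takeoff
    (1, 0, -10),  # ...and at least 10 s after takeoff
    (1, 2, 60),   # landing at most 60 s after the waypoint
    (0, 2, 80),   # whole mission takes at most 80 s
])
```

Because consistency checking is a cheap polynomial-time operation, networks like this can be re-checked on-line as new temporal commitments arrive during a mission.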
- Topic 2: Delegation and Cooperation
This topic area focuses on formal and practical uses of the concept of delegation, which offers a powerful and concise abstraction for modeling collaboration among multiple robotic systems and human resources. The theory built up around delegation also permits a unified characterization of two essential concepts: mixed-initiative interaction, where humans request help from robotic systems and vice versa, and adjustable autonomy.
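The core of the delegation abstraction can be made concrete as a small protocol: a delegator may hand a task only to an agent that can commit to achieving it, and both sides record the resulting contract. The following sketch is hypothetical (class and method names are invented for illustration) and should not be read as the project's actual delegation framework.

```python
# Hypothetical sketch of delegation as a contract between agents:
# delegation succeeds only if some candidate agent can commit to the
# task, and both delegator and contractor record the commitment.

from dataclasses import dataclass, field

@dataclass
class Task:
    goal: str
    constraints: dict = field(default_factory=dict)

class Agent:
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = set(capabilities)
        self.commitments = []

    def can_achieve(self, task):
        return task.goal in self.capabilities

    def delegate(self, task, candidates):
        """Try each candidate; on acceptance, both sides record the contract."""
        for other in candidates:
            if other.can_achieve(task):
                other.commitments.append((task, self.name))  # contractor commits
                self.commitments.append((task, other.name))  # delegator tracks it
                return other
        return None  # delegation failed: no capable agent found

uav1 = Agent("uav1", {"scan_area"})
uav2 = Agent("uav2", {"deliver_supplies"})
operator = Agent("operator", set())
helper = operator.delegate(Task("scan_area"), [uav2, uav1])
```

Because humans and robots are both modeled as agents here, the same delegate call captures mixed-initiative interaction in either direction.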
- Topic 3: Distributed Automated Planning
Complex multi-agent missions often require planning. Appropriate planners should balance flexible local decision-making against the necessity to gain approval from a human operator for some decisions. This requires a unifying sharable task structure that agents can dynamically elaborate, yet is concise, formal, and amenable to analysis in terms of safety properties. We propose to develop new distributed multi-agent planning techniques based on the use of such task structures as both input and output. A novel hybrid sequential/partially ordered plan structure will enable the creation of rich state information, significantly improving planning performance. The need to take resources such as time, space and fuel into account leads to a distributed constraint problem that will be tackled both formally and pragmatically. Planners will use delegation to enlist the assistance of other agents in achieving mission goals and to dynamically request assistance or approval from human operators. This results in a distributed and mixed-initiative planning process with clearly defined authority and responsibilities.
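A sharable task structure of the kind described above can be pictured as a tree whose inner nodes compose their children either sequentially or in parallel. The sketch below is an invented, minimal illustration of that idea (the node kinds and method names are hypothetical), not the project's actual plan representation.

```python
# Illustrative sketch of a sharable task structure: a tree whose inner
# nodes compose children sequentially ("seq") or concurrently ("par").
# Leaf nodes carry primitive actions.

class TaskNode:
    def __init__(self, kind, children=None, action=None):
        assert kind in ("seq", "par", "action")
        self.kind = kind
        self.children = children or []
        self.action = action

    def flatten(self):
        """Return the plan as a list of steps, each step being a set of
        actions that may execute concurrently."""
        if self.kind == "action":
            return [{self.action}]
        if self.kind == "seq":
            steps = []
            for child in self.children:
                steps.extend(child.flatten())
            return steps
        # "par": merge the children's step lists position-wise
        merged = []
        for child in self.children:
            for i, step in enumerate(child.flatten()):
                if i < len(merged):
                    merged[i] |= step
                else:
                    merged.append(set(step))
        return merged

mission = TaskNode("seq", [
    TaskNode("action", action="takeoff"),
    TaskNode("par", [
        TaskNode("action", action="scan_north"),
        TaskNode("action", action="scan_south"),
    ]),
    TaskNode("action", action="land"),
])
```

A structure like this is easy to share between agents: each agent can elaborate a subtree it has been delegated, while the overall tree remains analyzable as a whole.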
- Topic 4: Distributed Situation Awareness
Successful missions require situation awareness: knowing what is happening in the environment and understanding its impact on mission objectives. This topic focuses on developing a formally specified situation awareness infrastructure for collaborative agents based on the stream-based knowledge processing middleware framework DyKnow. The infrastructure will support collection, processing and fusion of information on many abstraction levels, where high-level reasoning will be firmly grounded in sensing and conclusions drawn from reasoning will appropriately influence low-level processing. Processing will reconfigure itself to adapt to new circumstances, and quality-of-service guarantees will be provided. A major challenge is fusing semantically meaningful information from different sources, for which we propose an ontology-based approach. To support situation awareness among collaborative UAS's we will also develop infrastructure support for goal-directed sharing and fusing of semantic information.
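The essence of stream-based knowledge processing is that knowledge processes consume timestamped sample streams and produce new, more abstract streams. The generator pipeline below is only a toy illustration of that pattern, in the spirit of the description above; it is not the actual DyKnow API, and all names and values are invented.

```python
# Toy sketch of stream-based knowledge processing: a low-level sensor
# stream of (time, altitude) samples is lifted to a qualitative
# "flight phase" stream by a knowledge process.

def altitude_stream():
    # Hypothetical sensor samples: (timestamp in s, altitude in m).
    for sample in [(0, 0.0), (1, 4.8), (2, 10.2), (3, 10.1)]:
        yield sample

def abstraction_process(source, threshold=5.0):
    """Lift a numeric altitude stream to a qualitative phase stream."""
    for t, alt in source:
        yield t, "airborne" if alt >= threshold else "on_ground"

phases = list(abstraction_process(altitude_stream()))
```

In a full infrastructure, many such processes would be composed and reconfigured at runtime, with higher-level streams feeding symbolic reasoning and their conclusions steering low-level processing.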
- Topic 5: Personal Micro-Aerial Vehicles
In this topic, we envision a personal MAV assistant serving as an information facilitator for both individual and team emergency response personnel. The objective is to take an already existing MAV, the LinkQuad quadrotor system, and adapt it for personalized, on-the-fly usage in the field, with push-button mission setup, portability, plug-and-play sensors and a quality of service matching the criteria of emergency response and security applications. The PMAV will also be able to use on-line tracking techniques from Topic 7 to follow an emergency services assistant in the field, remaining available for commands at all times. The PMAV can be used stand-alone or as a participant in collaborative UAS missions with heterogeneous platforms, in which case it will contribute to the distributed generation of situation awareness models for human users and other UAS's.
- Topic 6: Multi-Modal Decision Support Interfaces and Visualization
Planning, monitoring and controlling the coordinated operation of flocks of UAS's is complex due to the four-dimensional nature of their actions: three spatial dimensions plus time. Such data is very difficult for users to interpret, and 2D 'radar' displays place a great cognitive load on the user, severely limiting the number of aircraft an operator can follow. This topic will address these issues through the creation of interactive information representations using 3D displays based on Virtual Reality (VR) as well as Augmented Reality (AR) technologies, in particular head-mounted displays (HMD). Such representations can allow the user to rapidly and clearly interpret the environment and monitor operations. HMDs will also be exploited for video feedback from the UAS's in a 'telepresence' mode: seeing in real time through video cameras mounted on the UAS and, if necessary, taking direct control of the aircraft to resolve problems beyond the scope of the UAS's on-board systems.
- Topic 7: Visual Target Tracking
A limitation of traditional target tracking is that tracking does not start until a target has been definitely detected. Recently this traditional approach has clearly begun to change: the hard detection problem is integrated into the tracking algorithm itself. In this topic, we will take this idea to the next level by incorporating novel ways of representing the targets and making use of information from several different sensors. The goal is to derive new and better target tracking algorithms based on the particle filter, in order to track an arbitrary number of people moving around in an area where UAS's are operating. We will fuse information from a combination of cameras and lasers to provide tracking results in world coordinates. This will involve machine learning algorithms to learn individual, adaptable descriptors for each target.
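For readers unfamiliar with the particle filter mentioned above, its predict / weight / resample cycle can be shown on a toy single-target, one-dimensional problem. This is only a sketch under strong simplifying assumptions (random-walk motion, a single Gaussian-noise sensor, known measurement association); the multi-target, multi-sensor trackers envisioned in this topic are far more involved.

```python
# Toy bootstrap particle filter for one 1-D target, illustrating the
# predict / weight / resample cycle at the heart of particle filtering.

import math
import random

def particle_filter(measurements, n=500, motion_std=1.0, meas_std=1.0, seed=0):
    rng = random.Random(seed)
    particles = [rng.uniform(-10.0, 10.0) for _ in range(n)]
    estimates = []
    for z in measurements:
        # Predict: propagate each particle through a random-walk motion model.
        particles = [p + rng.gauss(0.0, motion_std) for p in particles]
        # Weight: Gaussian likelihood of the measurement given each particle.
        weights = [math.exp(-0.5 * ((z - p) / meas_std) ** 2) for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # Estimate: weighted mean of the particle set.
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # Resample: draw a new particle set proportional to the weights.
        particles = rng.choices(particles, weights=weights, k=n)
    return estimates

est = particle_filter([0.0, 0.5, 1.0, 1.5, 2.0])
```

The attraction of the particle representation for this topic is that it imposes no Gaussian or unimodal assumptions, so soft detections and heterogeneous sensor likelihoods (camera, laser) can all be folded into the weighting step.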
Page responsible: Patrick Doherty
Last updated: 2015-04-28