The Stream-Based Reasoning Project

Intelligent agents, robotic or otherwise, observe their world through sensing over time. However, the resulting sensory information load can be difficult to manage. Streaming data can have a high velocity, making it difficult to react to in a timely manner. Doing so is nevertheless important, since high-velocity data tends to get stale quickly and thus loses its value if not handled immediately. This means data must be handled in a timely fashion, for example to ensure safety constraints are upheld. Similarly, streaming data tends to be high-volume, meaning that there is too much data for a system to reasonably store or even process. Furthermore, streaming data can have a large variety, with different types of multimodal data being used. Finally, the veracity of streaming data plays a major role, as data can be imprecise or incomplete. This affects robotic systems that observe the world through uncertain sensory data, but also social media or news resources, which may contain false information. An agent's mental representation of the world it resides in is greatly affected by its strategies for handling these four V's, also commonly associated with big data.

The Stream Reasoning Project investigates problems related to coping with these streaming data properties, and in particular how one might reason with and learn from such data. Stream reasoning systems commonly combine an incrementally-available sequence of states with a finite knowledge base that can be modified as more information becomes available. These incremental updates can be used to discover answers to queries or to generate a stream relative to provided constraints. Hence we commonly define stream reasoning as incremental reasoning over rapidly-changing information. A modern stream reasoning framework, such as DyKnow, needs to be able to handle a network of many such stream reasoning processes over potentially many streams. This can be done either for a single agent or platform, or in collaboration with a number of other agents or platforms through, for example, federations.
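As a toy illustration of this idea (not DyKnow code), a single stream reasoning process can be written as a generator that consumes states one at a time and incrementally emits a derived stream, here a sliding-window aggregate:

```python
from collections import deque

def sliding_window_max(stream, size):
    """Incrementally emit the maximum over the last `size` stream items.

    Each incoming item updates the window and immediately produces an
    output, so answers are refined as new information becomes available
    rather than recomputed over stored history.
    """
    window = deque(maxlen=size)  # old items fall out automatically
    for item in stream:
        window.append(item)
        yield max(window)
```

For example, `list(sliding_window_max([3, 1, 4, 1, 5], 3))` yields `[3, 3, 4, 4, 5]`; each output is available as soon as its input arrives, which is the essence of incremental reasoning over a stream.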

DyKnow

DyKnow is a stream reasoning framework that is tasked with the organization and maintenance of networks of stream-connected components, and has been used successfully as a central component in the UASTech UAV architecture. Its core conceptual building blocks are knowledge processes, which are represented either by sources or computation units, the latter of which perform transformations on streaming data with the goal of producing a new stream at a higher level of abstraction. For example, a person detector might generate person positions by combining the data from the cameras, IMU, and GPS.
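A minimal sketch of such a computation unit, with hypothetical inputs (it is not the actual DyKnow person detector): two source streams are merged into a new stream at a higher level of abstraction, turning platform-relative detections plus GPS positions into world-frame positions.

```python
def fuse(camera_stream, gps_stream):
    """Toy computation unit: combine synchronized camera detections with
    GPS samples to yield a stream of world-frame person positions.

    camera_stream: iterable of (dx, dy) offsets relative to the platform.
    gps_stream:    iterable of (x, y) platform positions.
    """
    for (dx, dy), (x, y) in zip(camera_stream, gps_stream):
        # The output stream abstracts away the sensor frame entirely.
        yield (x + dx, y + dy)
```

In a real system the inputs would carry timestamps and the unit would handle synchronization and uncertainty; the point here is only the stream-in, stream-out shape of a knowledge process.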

A partial high-level view of the incremental information and knowledge processing required for a UAV traffic surveillance scenario.

DyKnow also provides support for chronicle recognition, detecting complex events that are defined in terms of temporal patterns of primitive events. Object Linkage Structures support object classification, where hypotheses about object types and identities can be formed and continuously validated or rejected.
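A simplified sketch of the underlying idea (real chronicle recognition languages are considerably richer): a complex event is defined as one primitive event followed by another within a time bound, and instances are detected over a stream of timestamped events.

```python
def recognize(events, first, then, within):
    """Detect the chronicle '`first` followed by `then` within `within`
    time units' in a sequence of (timestamp, label) events.

    Returns a list of (t_first, t_then) pairs, one per recognized instance.
    """
    pending = []   # timestamps of `first` events awaiting a matching `then`
    matches = []
    for t, label in sorted(events):
        if label == first:
            pending.append(t)
        elif label == then:
            matches.extend((t0, t) for t0 in pending if t - t0 <= within)
    return matches
```

For instance, with events `[(0, 'enter'), (3, 'exit'), (10, 'exit')]`, the chronicle "enter then exit within 5" matches only the pair at times 0 and 3.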

A formula progression component provided by DyKnow is essential for runtime verification tasks. In progression-based runtime verification, we incrementally evaluate temporal logic formulas. During each step, new states are considered and the formulas are rewritten to take into account the new information. Execution monitoring is an application of runtime verification in which we check whether a system adheres to its specifications, which is also useful for monitoring the execution of plans. TALplanner allows operators to be manually or automatically annotated with monitor formulas, conditions that must hold throughout the execution of the given operator. Such conditions are included in any generated plan. During execution, DyKnow continuously gathers information to generate a sequence of states, through which monitor formulas can be progressed. Any violation is signalled to the execution system, which can then abort, replan, or perform any other context-dependent form of recovery.
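The rewriting step can be sketched for a small propositional temporal logic (a simplification of the logics actually used; the formula encoding and operator names here are illustrative). Each call consumes one state and returns either a verdict or the residual formula to check against future states, using the standard expansions G f = f ∧ X G f and F f = f ∨ X F f:

```python
def progress(f, state):
    """One progression step: rewrite temporal formula f against the current
    state (a set of true propositions). Formulas are nested tuples; the
    booleans True/False denote the verdicts satisfied/violated."""
    op = f[0]
    if op == 'prop':
        return f[1] in state
    if op in ('and', 'or'):
        l, r = progress(f[1], state), progress(f[2], state)
        if op == 'and':
            if l is False or r is False:
                return False
            if l is True:
                return r
            return l if r is True else ('and', l, r)
        if l is True or r is True:
            return True
        if l is False:
            return r
        return l if r is False else ('or', l, r)
    if op == 'next':                 # X f: obligation moves to the next state
        return f[1]
    if op == 'always':               # G f  ==  f and X G f
        now = progress(f[1], state)
        if now is False:
            return False             # violation detected in this state
        return f if now is True else ('and', now, f)
    if op == 'eventually':           # F f  ==  f or X F f
        now = progress(f[1], state)
        if now is True:
            return True              # obligation discharged
        return f if now is False else ('or', now, f)
    raise ValueError(f'unknown operator {op!r}')
```

Progressing the monitor formula `('always', ('prop', 'safe'))` through a state where `safe` holds returns the formula unchanged (the obligation persists); through a state where it does not hold, it returns `False`, which is exactly the violation signal an execution system would act on.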

Situation Awareness

While field robotics provides a highly dynamic infrastructure for information gathering, we have lately observed an increasing amount of streaming data being made publicly available. In the pursuit of open-data initiatives, governments have been releasing more data for public use, some of which comes in the form of data streams, thus providing complementary information to field robotic systems when accessible. For example, rather than having a UAV monitor a bridge, one might be able to make do with Trafikverket's traffic cameras instead, freeing up the UAV's resources for other tasks. Likewise, public weather forecasts from SMHI could be incorporated into a UAV's flight plan to avoid areas of bad weather on its way to a target destination. This has the potential to pool information from geographically-disparate resources to enhance an agent's situation awareness considerably. On the other hand, it introduces additional needs in the form of autonomic computing, which focuses on the self-managing capabilities of a system. This is because the scope of a system can no longer be assumed to be known in advance. Both goals and capabilities may change over time, both between and during executions. There is therefore a need for service discovery and composition, where the semantics of the streaming sources or transformations are understood. DyKnow uses adaptive reconfiguration to select and use relevant capabilities in the form of stream data resources and transformations while being robust to changes in the availability of these capabilities. These also make for useful features in application domains such as the Internet of Things (IoT).
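A toy sketch of the discovery half of that idea (the tag-based matching below is a stand-in for the semantic matching an actual framework would perform): sources advertise what they provide, and a subscription is resolved against whatever is currently available, so that a lost source can be replaced by re-running the selection.

```python
def select_source(available, required_tag):
    """Toy service discovery: pick a stream source whose advertised
    semantic tags cover the required tag.

    available: dict mapping source name -> set of tags it provides.
    Returns the name of a matching source, or None if no capability matches.
    """
    for name, tags in available.items():
        if required_tag in tags:
            return name
    return None
```

If the selected source disappears, calling `select_source` again on the updated `available` dict yields a replacement or `None`; reacting to that change is the adaptive-reconfiguration part.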

An increased observational power can be leveraged to help recognize and predict an agent's goals and intentions. Activity recognition uses learning of normative models to make real-time predictions of an agent's current and future activities, where these activities can be of different kinds. This requires the observation of many agents over time to build temporal models that represent how combinations of features tend to change. At any given time, an agent's features can then be attributed to an activity model with a probability representing how well the model explains the observed feature changes. Since normative behaviors may change, models need to be updated, new models may need to be created to better explain normative behaviors, and likewise old models may need to be removed. A concrete example of activity recognition deals with motion patterns, which describe how agents move through space and time using sequential trajectory models. For example, when observing the behavior of cars at a crossing, we may use normative models to describe turning activities or continue-straight activities. If the road network changes, e.g. when traffic lights are installed or removed, or roadwork is performed, then the models need to be updated to accurately reflect the new normative behaviors observed. Being able to detect and predict short-term activities can be informative in predicting long-term activities, such as workers' commutes. These concepts can in turn be used in runtime verification tasks.
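The model-attribution step can be sketched as follows. This is a deliberately crude stand-in for the probabilistic trajectory models described above: each motion pattern is represented by a prototype trajectory, and an observed trajectory is attributed to the pattern that explains it best, here by lowest mean squared pointwise distance.

```python
def classify(trajectory, models):
    """Attribute an observed trajectory (a sequence of (x, y) points) to
    the best-matching normative motion-pattern model.

    models: dict mapping pattern name -> prototype trajectory of equal length.
    Returns the name of the model with the lowest mean squared distance.
    """
    def mse(a, b):
        return sum((ax - bx) ** 2 + (ay - by) ** 2
                   for (ax, ay), (bx, by) in zip(a, b)) / len(a)
    return min(models, key=lambda name: mse(trajectory, models[name]))
```

With prototypes for "straight" and "turn" at a crossing, a slightly noisy straight-ahead trajectory is attributed to the straight model; a probabilistic version would return a distribution over models instead of a single name.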

An example of simple motion patterns (left) being composed into more complex motion patterns (right) for activity recognition and prediction

Streams and Field Robotics Applications

In the setting of field robotics, we consider autonomous robotic systems operating in an uncontrolled physical domain rather than a controlled environment, physical or otherwise. While this greatly boosts the potential impact of these technologies, the domain also presents many challenges associated with the uncertainty imposed by the physical world. An autonomous robotic system observes the world through sensors, which provide uncertain and highly time-sensitive information for the system to act on. By gathering timely data from various sensors, we can start to build up a snapshot of the world. However, this snapshot is both noisy and incomplete, and thus requires reasoning to fill in the gaps.

Being able to reason about one's surroundings is extremely useful when considering applications such as execution monitoring. During plan execution, low-level systems need continuous and timely updates about the state of the world and how it changes over time. Similarly, a planner might wish to check pre- and post-conditions at a much more abstract level. In either case, streams originating from sensors need to be incrementally refined, producing streams at increasingly higher levels of abstraction. For example, information used in an unmanned aerial vehicle (UAV) may originate in color and thermal cameras, inertial measurement units, GPS (global positioning system) sensors, geographic information systems, direct communication with ground operators or other autonomous agents, or a variety of other sources at highly varying levels of abstraction. Such information must be integrated and processed using a number of techniques including image processing, data and information fusion techniques, and higher-level reasoning processes such as qualitative spatial reasoning.
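A minimal sketch of such incremental refinement, with made-up signal names and thresholds: a raw numeric sensor stream is first smoothed, and the filtered stream is then lifted to a qualitative stream of labels, which is the kind of abstraction a monitor formula could be progressed over.

```python
def smooth(readings, alpha=0.5):
    """First refinement stage: exponentially smooth a raw sensor stream."""
    est = None
    for r in readings:
        est = r if est is None else alpha * r + (1 - alpha) * est
        yield est

def qualify(filtered, threshold=100.0):
    """Second stage: lift the filtered stream to qualitative labels."""
    for v in filtered:
        yield 'high' if v >= threshold else 'low'
```

Chaining the stages, `qualify(smooth(raw_altitudes))` produces a stream at a higher level of abstraction without ever materializing the full history; real pipelines would add many more stages (fusion, spatial reasoning) of the same shape.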

Selected Publications

[10] Daniel de Leng and Fredrik Heintz. Towards Adaptive Semantic Subscriptions for Stream Reasoning in the Robot Operating System. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, Canada, September 2017. [ Conference | .pdf ]
[9] Mattias Tiger and Fredrik Heintz. Stream Reasoning using Temporal Logic and Predictive Probabilistic State Models. In International Symposium on Temporal Representation and Reasoning (TIME), Copenhagen, Denmark, October 2016. [ Conference | .pdf ]
[8] Daniel de Leng and Fredrik Heintz. Qualitative Spatio-Temporal Stream Reasoning With Unobservable Intertemporal Spatial Relations Using Landmarks. In AAAI Conference on Artificial Intelligence, Phoenix, Arizona, February 2016. [ Conference | .pdf ]
[7] Fredrik Heintz. Semantically Grounded Stream Reasoning Integrated with ROS. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Tokyo, Japan, November 2013. [ Conference | .pdf ]
[6] Fredrik Heintz, Jonas Kvarnström, and Patrick Doherty. Bridging the sense-reasoning gap: DyKnow - Stream-based middleware for knowledge processing. Journal of Advanced Engineering Informatics, 2010. [ Journal | .pdf ]
[5] Fredrik Heintz, Jonas Kvarnström, and Patrick Doherty. A Stream-Based Hierarchical Anchoring Framework. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), St. Louis, Missouri, October 2009. [ Conference | .pdf ]
[4] Patrick Doherty, Jonas Kvarnström, and Fredrik Heintz. A Temporal Logic-based Planning and Execution Monitoring Framework for Unmanned Aircraft Systems. Journal of Autonomous Agents and Multi-Agent Systems, 2009. [ Journal | .pdf ]
[3] Fredrik Heintz and Patrick Doherty. A Knowledge Processing Middleware Framework and its Relation to the JDL Data Fusion Model. Journal of Intelligent and Fuzzy Systems, 17(4):335-351, February 2006. [ Journal | .pdf ]
[2] Fredrik Heintz and Patrick Doherty. DyKnow: A Framework for Processing Dynamic Knowledge and Object Structures in Autonomous Systems. In Proceedings of the International Workshop on Monitoring, Security, and Rescue Techniques in Multiagent Systems (MSRAS), June 2004. [ Conference | .pdf ]
[1] Fredrik Heintz. Chronicle Recognition in the WITAS UAV Project - A Preliminary Report. In Proceedings of the National Swedish Artificial Intelligence Workshop (SAIS), Skövde, Sweden, March 2001. [ .pdf ]

Page responsible: Patrick Doherty
Last updated: 2018-12-13