1. BACKGROUND
Industrial partners:
People in the project:
Background and industrial need
It has been identified by the car industry that there is a strong need for increased tool support to handle the growing functionality and complexity of the next generation of engine control systems. This need originates from stricter legal regulations and diagnostic requirements for handling faults, with the implication that the amount of data managed by the engine control system has increased drastically, as has the complexity of the engine control system itself. Databases have been envisioned as one tool to improve the situation. This research focuses on incorporating database and transaction support into engine control systems for automobiles, with the vision that all data transactions, including extremely fast and critical transactions in the control loop as well as transactions outside the control loop, e.g., for diagnosis, should be carried out by a real-time database that is integrated with the engine control system. Emphasis is given to software architectural considerations of the database and the engine control system, and to models for transaction execution under temporal constraints on data and transactions (primarily expressed as absolute validity intervals, relative validity intervals, deadlines, etc.), where the workload consists of multi-class transactions requiring differentiated processing due to transaction criticality and real-time performance requirements.
The impact of successful deployment of a database in this type of system is high. First, by using a central repository for data management, one can avoid unnecessary storing of data at the different processes, which enhances software maintainability and fosters better software evolution due to the simpler structure and the removal of data subscription models. Second, it also simplifies the programmers' tasks, since large parts of the synchronization can be performed by the database, and time constraints, such as data validity, can be enforced by the database.
2. PROJECT OVERVIEW
The project officially started in January 2002, and the work is divided into three tracks:
Track 1: Construction of algorithms and schemes for managing updates of data items in a real-time database. In the vehicular subsystems, denoted ECUs (electronic control units), it is important to use fresh and accurate values when taking decisions or controlling the external environment via actuators. Freshness can be represented by a finite validity interval (age) attached to data items, and these validity intervals are considered when applications request data. The underlying work enabling this is our previous formalization of validity intervals on data, where we identified a mismatch between current on-going research and the real-world requirements of our application. In this work we investigate how to model validity intervals that are dynamic in the sense that they can change with, or depend on, different modes of the system and conditions in the environment. Previous research has focused on temporal correctness of data items where the validity intervals are fixed. However, in many applications, in particular the ECUs and the EECU (engine ECU), a fixed interval is not satisfactory, as it often results in an underutilized system due to worst-case assignment of the interval. In this work we focus on (i) supporting dynamic validity intervals of data, i.e., the accepted age of data is sensitive to the current system state (e.g., the engine temperature changes rapidly when the engine has just been started, but is rather stable once it has reached its working temperature); and (ii) developing schemes and algorithms for managing updates of data items, considering the dynamic validity intervals.
This work is carried out by Thomas Gustafsson under the supervision of Jörgen Hansson.
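As an illustration of the idea, the sketch below (in Python, with hypothetical item names and interval lengths, not values from the project) lets the absolute validity interval of a data item depend on the current engine state, so that a freshness check accepts older values once the engine has reached its working temperature:

```python
# Illustrative sketch only: item names, temperatures, and interval lengths
# are made-up assumptions, not the project's actual parameters.
def dynamic_validity_interval(item_id: str, engine_temp_c: float) -> float:
    """Return the absolute validity interval (maximum accepted age, in
    seconds) for a data item as a function of the current engine state."""
    if item_id == "engine_temp":
        # During warm-up the temperature changes rapidly, so readings age
        # quickly; at working temperature the value stays valid longer.
        return 0.1 if engine_temp_c < 80.0 else 2.0
    return 1.0  # placeholder bound for other items

def is_fresh(write_time: float, item_id: str,
             engine_temp_c: float, now: float) -> bool:
    """A data item is fresh if its age is within the dynamic bound."""
    return (now - write_time) <= dynamic_validity_interval(item_id, engine_temp_c)
```

With a fixed (worst-case) interval, the 0.1-second bound would apply at all times; the dynamic bound relaxes it to 2 seconds at working temperature, reducing the number of updates needed.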
Track 2: Development of a real-time database (a real-time data repository) suitable for integration into the EECU (engine electronic control unit) software. Having an EECU system with an embedded database enables us to evaluate proposed algorithms and mechanisms in an actual real-world setting, in particular as the system is connected to an engine simulator that generates realistic input for the engine control software. Having a realistic (non-simulated) platform helps both in benchmarking performance and revealing the true performance behavior of new developments (in comparison to existing systems), and in getting hands-on experience of the nature of the problems faced by industry. The repository has been developed as part of Master's projects [3,4] under the supervision of Thomas Gustafsson and Jörgen Hansson. The repository can currently be executed on both a "soft target" and a "hard target". The real-time data repository executes on top of a real-time operating system (Rubus, developed by Arcticus Systems).
Track 3: Construction and evaluation of concurrency control algorithms in a real-time database. In real-time embedded systems there can be a need for long-lived transactions. It is important that all data used by a transaction originate from the same point in time. If there are long-lived transactions in a system, a history of versions of data items is needed in order to assign the correct version of a data item to a transaction. This can be achieved with a multiversion concurrency control algorithm. We are currently investigating how a variant of multiversion timestamp ordering concurrency control can be used in our database implementation on an EECU.
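The version-selection idea can be sketched as follows. This is a simplified illustration of the basic multiversion read rule, not the project's implementation, which is a variant of multiversion timestamp ordering with similarity and version pruning:

```python
from bisect import bisect_right

class MultiVersionItem:
    """Keeps a history of versions of one data item so that a long-lived
    transaction can read the value that was current at its start time."""

    def __init__(self):
        self._timestamps = []  # write timestamps, kept sorted
        self._values = []

    def write(self, ts, value):
        # Assumes monotonically increasing write timestamps for simplicity.
        self._timestamps.append(ts)
        self._values.append(value)

    def read(self, txn_start_ts):
        """Return the version with the largest write timestamp that is
        <= the transaction's start timestamp (the MVTO read rule)."""
        i = bisect_right(self._timestamps, txn_start_ts)
        if i == 0:
            raise LookupError("no version old enough for this transaction")
        return self._values[i - 1]
```

A transaction that started at time 6 thus reads the version written at time 5 even if a newer version was written at time 9 while the transaction was running, so all of its reads come from the same point in time.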
3. RESULTS AND DEVELOPMENT SINCE 2000
This is a new project which officially started in January 2002 with the recruitment of Thomas Gustafsson (the project had start-up meetings during summer and fall 2001 to discuss project focus).
- Track 1: Our work has focused on resource management algorithms, primarily scheduling, for ensuring data freshness. In this regard, the following contributions have been achieved [1,2,5,7]:
- We have introduced a new notion of data freshness, where the value of a data item and a data validity bound are used to measure freshness. Data validity bounds are also applicable to existing on-demand algorithms.
- We have developed a scheme for handling changes in data items, allowing for adaptability to new states. The scheme controls the updating of base data items (i.e., sensor data) and schedules updates of derived data items. Essentially, when a base data item changes, a check is made to determine whether the change is significant compared to the previously used value of the base data; if the change is significant, then derived data items should be recomputed using the new base data. We have adopted this scheme both for the proposed algorithm and for existing on-demand scheduling algorithms. Results show that the scheme adjusts the number of updates needed to achieve the desired level of data freshness.
- When a data item is used, it has to be fresh. This implies that the validity of a derived data item depends on the validity of the data items involved in deriving it. To achieve this, we have developed the algorithms ODDFT and ODBFT, which schedule updates of data items and take data validity bounds into account by modeling the expected error of a data item. The maximum deviation of the value of a data item is approximated and is used by the scheduling algorithms to prioritize the updates. To evaluate the algorithms, we have used a simulator (on-going work focuses on implementing these algorithms on the real platform). The performance experiments show that the proposed algorithms perform better, for all types of loads, than consistency-centric on-demand algorithms. In comparison to throughput-centric algorithms, the proposed algorithms perform better at light to moderate loads, but worse at heavy loads.
- Assuming deterministic calculations, an extension to ODDFT has been developed that can determine whether an update is needed and skip it if possible. This algorithm, denoted On-Demand Top-Bottom traversal with relevance check (ODTB), uses a pregenerated schedule, which reduces the scheduling time, i.e., the overhead, of the algorithm compared to ODDFT. Further, a relevance check on updates about to be triggered by ODDFT has also been developed. Simulation results show that ODTB allows more calculations to finish and to use consistent data.
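The significance check at the heart of the update scheme described in the second bullet above can be sketched as follows (item names, the dictionary representation, and the bound are illustrative assumptions, not the project's data structures):

```python
# Sketch of the significance check: derived items are only marked for
# recomputation when a base item's change exceeds its validity bound.
def significant_change(old_value, new_value, validity_bound):
    """A base data item's change is significant if it deviates from the
    value previously used in derivations by more than the bound."""
    return abs(new_value - old_value) > validity_bound

def on_base_update(base_item, new_value, derived_items, stale_set):
    """Record a new sensor reading; mark derived items stale only when
    the change is significant, so insignificant sensor jitter does not
    trigger recomputation of derived data."""
    if significant_change(base_item["last_used_value"], new_value,
                          base_item["bound"]):
        base_item["last_used_value"] = new_value
        stale_set.update(derived_items)
    base_item["current_value"] = new_value
```

In this sketch, small fluctuations update the stored value but leave the stale set untouched; only a change larger than the bound schedules recomputation of the dependent derived items.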
- Track 2: The current version of the real-time data repository has support for [3,4,6]:
- A database API (application programmer's interface) that handles requests for and retrievals of data stored in a central repository. The API is constructed with data validity intervals in mind. The read and write operations in the API are aware of how long a data value remains valid, i.e., its absolute validity interval, and it is possible to put constraints on all the data that a transaction uses, i.e., the API also supports relative validity intervals.
- Support for concurrency control, both pessimistic (HP2PL) and optimistic (OCC-BC), and multiversion concurrency control using similarity.
- Support for transactions and their handling, i.e., encapsulation of operations into transactions that perform the updates against the repository.
- Support for scheduling transactions according to the EDF (earliest deadline first) policy.
- Support for data validity bounds and the scheduling algorithms ODDFT, ODBFT, and ODTB.
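A minimal sketch of what an API enforcing absolute and relative validity intervals can look like (the names, signatures, and error handling are illustrative assumptions, not the repository's actual C interface):

```python
class StaleDataError(Exception):
    """Raised when a validity-interval constraint is violated."""

class DataRepository:
    """Sketch of a central data repository whose read operations enforce
    absolute validity intervals (avi), and whose multi-item reads enforce
    a relative validity interval (rvi) across the read set."""

    def __init__(self):
        self._store = {}  # name -> (value, write_time, avi)

    def write(self, name, value, now, avi):
        self._store[name] = (value, now, avi)

    def read(self, name, now):
        """Return (value, write_time); fail if the value's age exceeds
        its absolute validity interval."""
        value, ts, avi = self._store[name]
        if now - ts > avi:
            raise StaleDataError(f"{name} is stale (age {now - ts} > avi {avi})")
        return value, ts

    def read_set(self, names, now, rvi):
        """Read several items, additionally requiring that the write times
        of all values used lie within rvi of each other."""
        results = {name: self.read(name, now) for name in names}
        times = [ts for _, ts in results.values()]
        if max(times) - min(times) > rvi:
            raise StaleDataError("read set violates the relative validity interval")
        return {name: value for name, (value, _) in results.items()}
```

A transaction deriving a value from both engine speed and temperature would call `read_set`, so the database, rather than the application programmer, enforces that the two inputs are mutually consistent in time.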
- Track 3: Multiversion concurrency control [6,7]:
- A general multiversion concurrency control algorithm with similarity (MVTO-S), i.e., using validity bounds, has been developed. This algorithm is a combination of an updating algorithm with relevance check, e.g., ODTB, and a concurrency control algorithm using several versions of data items. MVTO-S guarantees that transactions are presented an up-to-date view of the database as of the time the transaction started. Three implementations of MVTO-S have been evaluated using the database developed in Track 2 and compared to well-established single-version concurrency control algorithms not using similarity. The implementations of MVTO-S perform better than HP2PL and OCC-BC and are also able to guarantee the up-to-date snapshots.
4. SELECTED PUBLICATIONS
Thomas Gustafsson and Jörgen Hansson, "Dynamic on-demand updating of data in real-time database systems", in Proceedings of ACM SAC 2004 - Track on Embedded Systems: Applications, Solutions, and Techniques.
Thomas Gustafsson and Jörgen Hansson, "Scheduling of updates of base and derived data items in real-time databases", technical report, Department of Computer and Information Science, Linköping University, Sweden, 2003.
Marcus Eriksson, "Efficient Data Management in Engine Control Software for Vehicles - Development of a Real-Time Data Repository", Master's thesis, Department of Computer and Information Science, Linköping University, Sweden, 2003.
Martin Jinnelöv, "Analysis of an Engine Control System in Preparation of a Real-Time Database", Master's thesis, Department of Computer and Information Science, Linköping University, Sweden, 2002.
Thomas Gustafsson and Jörgen Hansson, "Data Management in Real-Time Systems: a Case of On-Demand Updates in Vehicle Control Systems", in Proceedings of the 10th IEEE Real-Time and Embedded Technology and Applications Symposium, Toronto, Canada, 2004.
Hugo Hallqvist, "Data Versioning in a Real-Time Data Repository", Master's thesis, Department of Computer and Information Science, Linköping University, Sweden, 2004.
Thomas Gustafsson, "Maintaining Data Consistency in Embedded Databases for Vehicular Systems", Licentiate thesis no. 1138, Department of Computer and Information Science, Linköping University, Sweden, 2004.
5. POSTERS
Poster 1 and 2.