TDDE19 Advanced Project Course - AI and Machine Learning

Projects

Students will be divided into groups of around six students; each group is assigned a project according to the preferences and skills of its members.

At the end of the project, the students are expected to provide:

  • Source code of library/program
  • Documentation of how to use the software (API, command line...)
  • A short description of the work that has been accomplished (which algorithms are used, what kind of results were obtained...)

Important: the deadline for the selection of projects is Wednesday 30th of August at 12:00. You need to send me an email (cyrille.berger at liu.se) with the following information:

  • Your LiU login.
  • A ranked list of preferred projects (details on the projects below)
  • If you want to work as a group of several people, send me a single email with all your LiU logins

    Multi Robots / Humans System

    Customer: Cyrille Berger

    Context: with the increasing capability of robotic systems, a lot of research and development is done on building systems where multiple robots and multiple humans work together to accomplish a mission. There are multiple scenarios where such a system is useful, for instance in search and rescue after a big catastrophe or to find lost people in nature. For such scenarios, it is convenient to have multiple types of robots: Unmanned Aerial Vehicles (UAVs) can be used to explore, create maps of the environment, locate victims, carry supplies... Unmanned Ground Vehicles (UGVs) can be used to clear paths for rescuers, build shelters... Humans can then focus on tasks that robots cannot accomplish, for instance providing medical assistance.

    The availability of multiple robots opens up opportunities for interesting collaborations: for instance, a UAV can quickly scan an area and generate a map that a UGV then uses to navigate. This also creates challenges in how robotic systems and humans should communicate with each other.

    Examples of the robotic systems involved:

    Tasks:

    • Path planning for a ground robot helped by a small UAV (see the sketch after this list).
    • Gesture and speech recognition so that a human can give commands to a robotic system.
    • Using a virtual reality system (SteamVR, Oculus...) to display information about a robotic system and to allow humans to remote-control it.
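
    As an illustration of the path-planning task above, here is a minimal sketch in Python using the OMPL library listed in the references below. The circular obstacles standing in for a UAV-built occupancy map, the bounds and the coordinates are all made-up assumptions; this is a sketch of a single planning query, not the project's actual planner.

        # Minimal 2D planning query with the OMPL Python bindings.
        # The toy obstacles stand in for an occupancy map built from UAV imagery.
        from ompl import base as ob
        from ompl import geometric as og

        OBSTACLES = [(40.0, 40.0, 15.0), (70.0, 20.0, 10.0)]  # circles: (cx, cy, radius)

        def is_state_valid(state):
            # A real system would query the UAV-built occupancy grid here.
            return all((state[0] - cx) ** 2 + (state[1] - cy) ** 2 > r ** 2
                       for cx, cy, r in OBSTACLES)

        space = ob.RealVectorStateSpace(2)
        bounds = ob.RealVectorBounds(2)
        bounds.setLow(0.0)
        bounds.setHigh(100.0)
        space.setBounds(bounds)

        ss = og.SimpleSetup(space)
        ss.setStateValidityChecker(ob.StateValidityCheckerFn(is_state_valid))
        start, goal = ob.State(space), ob.State(space)
        start[0], start[1] = 0.0, 0.0
        goal[0], goal[1] = 95.0, 95.0
        ss.setStartAndGoalStates(start, goal)
        ss.setPlanner(og.RRTConnect(ss.getSpaceInformation()))

        if ss.solve(5.0):  # plan for at most 5 seconds
            ss.simplifySolution()
            print(ss.getSolutionPath())

    In the collaboration scenario sketched in the context, the UGV would rebuild the obstacle set from the UAV's map and replan as the map is refined.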

    Related courses: AI Robotics (TDDE05), AI Programming (TDDD10), Multi-Agents (TDDE13)

    References:

    • OMPL library
    • Autonomous Rover Navigation on Unknown Terrains, Simon Lacroix, Anthony Mallet, David Bonnafous, Gérard Bauzil, Sara Fleury, Matthieu Herrb, Raja Chatila. International Journal of Robotics Research 21(10-11): 917-942 (2002) (PDF)
    • PCL

    Advanced dialog system for robotic systems

    Customer: Cyrille Berger

    Context: having the ability to control a robotic system through voice commands would allow the operator to get assistance from the robot while their hands are busy. However, this is a complex task from the software perspective: it requires recognizing speech, understanding the command, understanding the current state of the system, and being able to answer the human. These tasks are usually implemented in a dialog system.

    Many currently deployed voice control systems (Siri, OK Google, Mycroft...) have very simple dialogs: they react to specific templates (e.g. set up a meeting at 13:00 with Peter) and give simple answers. Setting up a mission with a robotic system is a complex task, which might require the dialog system to ask the user many different questions and to run many checks on whether the user's request is acceptable.

    The goal of this project is to implement a dialog system for a robotic system that can be used for simple queries (where is robot X?) or to set up complex missions.
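
    As a starting point for the template side of such a system, here is a minimal intent-matching sketch using spaCy (listed in the references) and its rule-based Matcher. The intent names, the patterns and the assumption of spaCy v3 with the en_core_web_sm model are illustrative choices, not part of the project specification.

        # Toy template matcher for robot queries (assumes spaCy v3 and the
        # en_core_web_sm model). Patterns and intent names are made up.
        import spacy
        from spacy.matcher import Matcher

        nlp = spacy.load("en_core_web_sm")
        matcher = Matcher(nlp.vocab)
        # "where is <anything>" -> a location query
        matcher.add("QUERY_LOCATION", [[{"LOWER": "where"}, {"LOWER": "is"}, {"OP": "+"}]])
        # "go to <anything>" -> a navigation command
        matcher.add("COMMAND_GOTO", [[{"LEMMA": "go"}, {"LOWER": "to"}, {"OP": "+"}]])

        def parse(utterance):
            doc = nlp(utterance)
            matches = matcher(doc)
            if not matches:
                return None, None  # unknown: the dialog system should ask back
            # keep the longest match, since "+" yields one match per length
            match_id, start, end = max(matches, key=lambda m: m[2] - m[1])
            return nlp.vocab.strings[match_id], doc[start + 2:end].text

        print(parse("where is robot X?"))  # ('QUERY_LOCATION', 'robot X?')

    A dialog manager such as OpenDial (see references) would sit on top of this, tracking the dialog state and deciding when to ask clarification questions.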

    References:

    • Social Robots that Interact with People, Cynthia Breazeal, Atsuo Takanishi and Tetsunori Kobayashi, Springer Handbook of Robotics. Ed. Bruno Siciliano and Oussama Khatib. Berlin: Springer, 2008 (GVRL)
    • spaCy
    • Spoken dialogue systems: the new frontier in human-computer interaction, Pierre Lison and Raveesh Meena (ACM)
    • OpenDial Toolkit: opendial-toolkit

    Object recognition for robotic systems

    Customer: Cyrille Berger

    Context: advances in machine learning, with deep learning techniques in particular, are giving impressive results in object recognition in images. One of the main problems with such techniques is that they require a very large dataset where each object is manually annotated. A robot equipped with a stereo vision camera system can look around and produce depth images, which might make it easier to segment objects in images and to generate the dataset automatically.

    The goals of this project are to work with classification techniques and make them easily available to a robotic system, and to make generating training sets easy and automatic.
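
    A minimal sketch of the depth-based idea above: threshold a depth image, extract connected components and emit bounding boxes as candidate annotations. The depth range, the minimum size and the toy input are made-up assumptions.

        # Sketch: turn a depth image into candidate bounding-box labels.
        # The thresholds and the fake input below are illustrative only.
        import numpy as np
        from scipy import ndimage

        def candidate_boxes(depth, near=0.3, far=1.5, min_pixels=500):
            """Segment everything between near and far (metres) and return
            one (x0, y0, x1, y1) box per large-enough connected component."""
            mask = (depth > near) & (depth < far)
            labels, n = ndimage.label(mask)
            boxes = []
            for i, sl in enumerate(ndimage.find_objects(labels), start=1):
                if (labels[sl] == i).sum() >= min_pixels:
                    ys, xs = sl
                    boxes.append((xs.start, ys.start, xs.stop, ys.stop))
            return boxes

        # Toy 480x640 depth image with one "object" one metre away.
        depth = np.full((480, 640), 3.0)
        depth[100:200, 300:400] = 1.0
        print(candidate_boxes(depth))  # [(300, 100, 400, 200)]

    The boxes could then be paired with the corresponding camera image and a label given once by the operator, which is the kind of automation the second task below aims at.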

    Tasks:

    • Make it easy for robots (using ROS) to use different object recognition algorithms, and possibly implement some of them
    • Set up a training framework for a human operator to easily teach new objects to the robot

    Related courses: AI Robotics (TDDE05), Computer Vision (TSBB15), Machine-Learning (TDDE15)

    References:

    • Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks, Shaoqing Ren, Kaiming He, Ross Girshick, Jian Sun (arXiv).
    • 3D point cloud segmentation: A survey, Anh Nguyen, Bac Le, 2013 6th IEEE Conference on Robotics, Automation and Mechatronics (RAM) (IEEE Xplore).
    • PCL

    Obstacle avoidance for UAVs

    Customer: Olov Andersson

    Context: a challenging aspect of deploying small robots (e.g. quadcopters) in real-world situations is avoiding obstacles using lightweight sensors with limited available computational power. In this project, we suggest the students work with quadcopters (or simulations thereof) equipped with sensors (such as a camera, a LIDAR or possibly a Kinect-like device) to detect obstacles and generate trajectories to avoid them.

    Examples of our safe trajectory generation research and of visual segmentation that could be used for obstacle detection with a camera:

    A video of tests with our earlier work on safe trajectory generation on a real quadcopter is available here.

    Task suggestions (chosen in discussion with project customer):

    • Test and benchmark obstacle detection from sensor data, using one or more approaches:
      • Camera-based obstacle detection has received particular interest lately. Many algorithms have been suggested in the literature (NVIDIA tutorial, Real-time for cars, SegNet Web demo). Many of them are designed and tested for cars from a street perspective, sometimes using very powerful hardware. Our goal would be to evaluate the detection accuracy on video from a UAV perspective and with more limited hardware, as in the NVIDIA tutorial.
      • LIDAR-based detection is simpler and potentially more robust, but the sensor is expensive and bulky. Make use of suitable ROS packages.
      • Kinect-type sensors can be simple and efficient indoors but are easily blinded by the sun outdoors. Make use of suitable ROS packages.
    • Improve the trajectory generation side based on the Model Predictive Control and machine learning work done in our research group (see refs and the sketch after this list):
      • For instance with better model learning/identification.
      • For instance with updated trajectory solver / policy learner.
    Part or all of the system may be simulated. Depending on time and competencies, we also have access to real robots.
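
    As an illustration of the trajectory generation bullet above, here is a toy receding-horizon sketch in Python: a 2D double integrator steered to a goal with a soft obstacle penalty, solved with scipy. The dynamics, weights and the single circular obstacle are made-up stand-ins for the group's actual MPC and learning machinery.

        # Toy model-predictive trajectory sketch: 2D double integrator,
        # quadratic goal/control costs, soft obstacle penalty. All
        # constants are illustrative assumptions.
        import numpy as np
        from scipy.optimize import minimize

        DT, H = 0.2, 15                       # timestep [s], horizon length
        GOAL = np.array([5.0, 5.0])
        OBST, RADIUS = np.array([2.5, 2.5]), 1.0

        def rollout(u, x0):
            """Integrate accelerations u (H x 2) from state x0 = [px, py, vx, vy]."""
            p, v, traj = x0[:2].copy(), x0[2:].copy(), []
            for a in u.reshape(H, 2):
                v = v + DT * a
                p = p + DT * v
                traj.append(p.copy())
            return np.array(traj)

        def cost(u, x0):
            traj = rollout(u, x0)
            goal_cost = np.sum((traj - GOAL) ** 2)
            ctrl_cost = 0.1 * np.sum(u ** 2)
            dist = np.linalg.norm(traj - OBST, axis=1)
            obst_cost = 1e3 * np.sum(np.maximum(0.0, RADIUS - dist) ** 2)
            return goal_cost + ctrl_cost + obst_cost

        x0 = np.zeros(4)                      # start at the origin, at rest
        res = minimize(cost, np.zeros(2 * H), args=(x0,), method="L-BFGS-B")
        print(rollout(res.x, x0))             # planned positions over the horizon

    In a real receding-horizon loop only the first control would be applied before replanning, and model learning/identification would replace the hand-written dynamics.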

    Related courses: AI Robotics (TDDE05), Computer Vision (TSBB15), Machine-Learning (TDDE15), Control Theory

    References:

    RoboCup @home with the Softbank Pepper platform

    Customer: Mattias Tiger

    Context: the RoboCup@Home league aims to develop service and assistive robot technology with high relevance for future personal domestic applications. Linköping University has competed in the RoboCup Soccer league in previous years and would like to investigate the possibility of also competing in the @Home league. The focus of the @Home league is for robots to perform tasks in human environments and to interact with humans.

    The scenario of interest that we would like you to focus on is as follows. The robot Pepper is placed in an unknown environment and is tasked with exploring it in order to build a map (for localization) and to find objects of interest (e.g. humans). Once a map has been built, the robot should be able to move around the environment (in a safe way) and interact with people. Each person is to be approached and welcomed. When all persons have been welcomed, the robot should approach each person in turn again and ask if it can be of assistance. It should for example be able to answer queries such as "Is this the B-building?" or lead a person to a position when asked "Can you show me to the kitchen?" (where "kitchen" is a place marked with a specific QR code and observed during the exploration phase). The robot should stay within a designated number of meters from its starting location at all times (including during exploration).

    The software stack should be built in ROS with as many off-the-shelf components as possible. The customer has built similar software stacks in ROS in the past and will provide you with a list of suitable components (ROS packages etc.) as well as point you to useful example repositories to start from.

    Tasks:

    • Assemble a full AI-robotics software stack in ROS using off-the-shelf components/packages. This includes motion planning, task planning, SLAM, Markov localization, exploration, speech recognition, speech synthesis, a human detector and a QR-code detector (and possibly additional object detectors).
    • Write a domain description in PDDL (used by the task planner).
    • Write a high-level decision-making module which implements the scenario(s) and which in turn uses the rest of the AI-robotics software stack (see the sketch after this list).
    • Set up a simulator and demonstrate the scenario in the simulator. (Useful for development and debugging.)
    • Demonstrate the scenario at some area in the B-/E-building using the Pepper robot.
    • Extend the scenario to make the tasks and interactions more interesting.
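
    As a sketch of what the decision-making module's lowest level could look like, here is a minimal rospy snippet that sends navigation goals through the standard move_base action. The "map" frame, the action name and the placeholder person poses are common ROS defaults and assumptions, not requirements from the customer.

        # Minimal rospy sketch: drive to each detected person's pose via
        # the standard move_base action, then hand over to speech synthesis.
        # Frame/action names are the usual ROS defaults, assumed here.
        import rospy
        import actionlib
        from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

        def goto(x, y, frame="map"):
            client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
            client.wait_for_server()
            goal = MoveBaseGoal()
            goal.target_pose.header.frame_id = frame
            goal.target_pose.header.stamp = rospy.Time.now()
            goal.target_pose.pose.position.x = x
            goal.target_pose.pose.position.y = y
            goal.target_pose.pose.orientation.w = 1.0  # arbitrary fixed heading
            client.send_goal(goal)
            client.wait_for_result()
            return client.get_state()

        if __name__ == "__main__":
            rospy.init_node("welcome_behavior")
            # In the real stack these poses would come from the human detector.
            for person in [(1.0, 2.0), (3.5, 0.5)]:
                goto(*person)
                rospy.loginfo("Arrived; trigger speech synthesis to welcome.")

    The actual module would drive such primitives from the PDDL task planner rather than from a fixed list of poses.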

    Stream reasoning

    Customer: Fredrik Heintz

    • Probabilistic Logic Stream Reasoning over Continuous Data: start from the paper by Nitti et al. (https://lirias.kuleuven.be/bitstream/123456789/472064/1/camera.pdf), which provides an approach for probabilistic logic reasoning over continuous data such as perceptions, and either integrate it into the stream reasoning framework developed at LiU or extend it in some other way. For real robot data, the Nao robots could be used. Code is available here: https://github.com/davidenitti/DC
    • Multi-Hypothesis Stream Reasoning: extend the current stream reasoning engine to evaluate temporal logical formulas over streams of sets of states (compared to streams of states as today). The brute-force solution, sketched below, is to evaluate the formula over every potential complete state stream that can be created from the stream of sets of states. There are many relatively simple optimizations and improvements that can be made.
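
    A minimal sketch of that brute-force evaluation: enumerate every complete state stream compatible with a stream of state sets and evaluate a simple "always" formula over each. The formula and the numeric states are illustrative; a real engine would evaluate incrementally and share common stream prefixes.

        # Brute-force multi-hypothesis evaluation: check a temporal formula
        # over every complete stream drawn from a stream of state *sets*.
        # The formula and the states below are illustrative assumptions.
        from itertools import product

        def always(pred, stream):
            """'Globally pred' over one concrete state stream."""
            return all(pred(state) for state in stream)

        def eval_over_hypotheses(pred, set_stream):
            """Count in how many possible complete streams the formula holds.
            Exponential in the stream length, hence the need for optimizations."""
            streams = list(product(*set_stream))
            holds = sum(always(pred, s) for s in streams)
            return holds, len(streams)

        # Each timestep offers several hypothetical states (here: speed values).
        set_stream = [{1.0, 1.2}, {1.1}, {1.3, 2.5}]
        print(eval_over_hypotheses(lambda v: v < 2.0, set_stream))  # (2, 4)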

    Teaching a robot to play football

    Customer: Fredrik Heintz

    • Automatic Machine Learning Data Generation: use the Linköping Humanoids Nao robots to automatically collect machine learning data, for example to automatically tune the ball detector or the pose calibration for different conditions.
    • An Automated Robot Test Bed: use the Linköping Humanoids Nao robots to provide a fully automated learning test bed. Ideally, it should be enough to plug in the robots, set up the experiment and then let the robots learn everything themselves. A relevant (but hard) scenario would be to automatically train a goalkeeper and a penalty kicker. The hardest parts would probably be getting the ball back to the penalty spot and preventing the robots from being damaged by excessive falling. Maybe a vacuum cleaner could be used to collect the ball, and some form of protection could be placed on the robots to limit the damage when falling. Defining a simpler scenario is of course another possibility.
    • Bayesian Optimization / Programming by Optimization for the RoboCup Standard Platform League: take the current perception pipeline of the Linköping Humanoids team, expose as many relevant configuration parameters as possible and then use Bayesian optimization to learn the optimal set of parameters. See http://www.prog-by-opt.net/ and https://arxiv.org/abs/1012.2599
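
    As an illustration of the last bullet, here is a minimal sketch using scikit-optimize's gp_minimize as one possible off-the-shelf Bayesian optimizer. The two exposed parameters and the synthetic objective are made-up stand-ins for the real perception pipeline and its detection error.

        # Sketch: Bayesian optimization over exposed pipeline parameters
        # with scikit-optimize. The objective is a synthetic placeholder
        # for running the real perception pipeline on logged robot data.
        from skopt import gp_minimize
        from skopt.space import Integer, Real

        SPACE = [Real(0.0, 1.0, name="ball_threshold"),
                 Integer(1, 9, name="blur_kernel")]

        def detection_error(params):
            """Would run the pipeline with these parameters and return an
            error score (lower is better). Synthetic stand-in below."""
            threshold, kernel = params
            return (threshold - 0.4) ** 2 + 0.01 * (kernel - 5) ** 2

        result = gp_minimize(detection_error, SPACE, n_calls=30, random_state=0)
        print(result.x, result.fun)  # best parameters found and their score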
