Examensarbeten och uppsatser / Final Theses
Framläggningar på IDA / Presentations at IDA
If nothing is stated about the presentation language, the presentation is in Swedish.
Due to the current distance mode, thesis presentations during the spring of 2020 will take place online. See more information on the page for online presentations (also linked in the menu). If a password is required to access an online presentation, please contact the examiner (type the examiner's name into the search bar in the top right and choose "Sök IDA-anställda" in the menu).
- 2021-04-23 at 15:30 in https://liu-se.zoom.us/j/67964170159?pwd=WktBQlJTd3hQdWpRSHlwTzE4NjdTQT09
Group Coordination between Agents using Deep Reinforcement Learning
Author: Johan Karlsson
Opponent: Jonathan Lundgren
Supervisor: Johan Källström
Examiner: Fredrik Heintz
Level: Advanced (30 hp)
Intelligent computer-controlled entities can help facilitate research, for example by substituting for human controllers. Reinforcement Learning (RL) currently shows great potential as a tool for researchers to apply to complex and difficult problems. However, the training time and final behavior of RL algorithms are heavily dependent on the basic premises of the agents and on the algorithms themselves. End-to-end training, that is, training on raw input data only, is often intractable, and simplifications or extensions must be made to achieve the desired behavior without limitless resources. Two RL algorithms were studied, Deep Deterministic Policy Gradient (DDPG) and Proximal Policy Optimization (PPO), in a 2D multi-agent simulator that included non-convex obstacles and limited sensor perception. The desired behavior was movement along a path as a group while avoiding complex obstacles, using only a range-finder sensor and a self-state sensor to perceive the surroundings. Additionally, the use of extended perception capabilities and demonstrations from an expert model was evaluated in terms of the behavior achieved after a set time limit. It was clear that expert initialization was useful; even a rudimentary expert led to better behavior, although considerably more slowly for PPO. Supplementary input in the form of immediate perception of other agents and the location of the path also helped; for DDPG it was crucial for stable learning.
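The expert-initialization idea mentioned in the abstract can be illustrated in minimal form as behavior cloning: before RL training starts, the policy is regressed toward the actions of an expert model. This is only an illustrative sketch under stated assumptions; the linear policy, the `expert_action` rule, and all names below are hypothetical and not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def expert_action(obs):
    # Rudimentary hypothetical expert: steer proportionally back
    # toward the path (represented here by the origin).
    return -0.1 * obs

# Illustrative linear policy: action = W @ obs.
obs_dim, act_dim = 4, 4
W = rng.normal(scale=0.1, size=(act_dim, obs_dim))

# Collect demonstration pairs (observation, expert action).
observations = rng.normal(size=(256, obs_dim))
targets = np.array([expert_action(o) for o in observations])

# Behavior cloning: gradient descent on the mean squared error
# between policy actions and expert actions.
lr = 0.05
losses = []
for _ in range(200):
    preds = observations @ W.T
    err = preds - targets
    losses.append(np.mean(err ** 2))
    grad = 2 * err.T @ observations / len(observations)
    W -= lr * grad

print(f"cloning loss: {losses[0]:.4f} -> {losses[-1]:.6f}")
```

After this pretraining step, the cloned weights would serve as the starting point for the RL algorithm proper, which is the sense in which "expert initialization" speeds up learning.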
- 2021-04-27 at 12:15 in https://liu-se.zoom.us/j/68814966888?pwd=WXZPc2IzbENOcndpTndreWhvc083Zz09
Evaluating CNN-based Models for Unsupervised Image Denoising
Author: Johan Lind
Opponent: Carl Brage
Supervisor: Jonas Wallgren
Examiner: Cyrille Berger
Level: Advanced (30 hp)
Images are often corrupted by noise, which reduces their visual quality and interferes with analysis. Convolutional Neural Networks (CNNs) have become a popular method for denoising images, but their training typically relies on access to thousands of pairs of noisy and clean versions of the same underlying picture. Unsupervised methods lack this requirement and can instead be trained purely on noisy images.
This thesis evaluated two different unsupervised denoising algorithms: Noise2Self (N2S) and Parametric Probabilistic Noise2Void (PPN2V), both of which train an internal CNN to denoise images.
Four different CNNs were tested in order to investigate how the performance of these algorithms would be affected by different network architectures.
The testing used two different datasets: one containing clean images corrupted by synthetic noise, and one containing images damaged by real noise originating from the camera used to capture them.
Two of the networks, UNet and a CBAM-augmented UNet, achieved high performance competitive with the strong classical denoisers BM3D and NLM.
The other two networks, GRDN and MultiResUNet, on the other hand yielded performance ranging from poor to decent depending on the metric and dataset used.
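The self-supervised masking idea behind Noise2Self can be sketched as follows: some pixels are masked out, the denoiser must predict them from their neighbours, and the loss is computed only at the masked positions, so the network never sees the value it is asked to reproduce. The sketch below is illustrative only; the 3x3 box filter stands in for the CNN denoisers evaluated in the thesis, and all names and parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: a smooth gradient image plus Gaussian noise.
clean = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
noisy = clean + rng.normal(scale=0.1, size=clean.shape)

# Randomly mask ~5% of the pixels.
mask = rng.random(clean.shape) < 0.05

# Replace masked pixels with the mean of their 4-neighbours, so the
# "denoiser" input carries no information about the masked values.
padded = np.pad(noisy, 1, mode="edge")
neigh_mean = (padded[:-2, 1:-1] + padded[2:, 1:-1]
              + padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
masked_input = np.where(mask, neigh_mean, noisy)

# Stand-in denoiser: a 3x3 box filter (a CNN would be trained here).
p = np.pad(masked_input, 1, mode="edge")
denoised = sum(p[i:i + 32, j:j + 32]
               for i in range(3) for j in range(3)) / 9.0

# Self-supervised loss: squared error against the *noisy* target,
# evaluated only at the masked pixels.
loss = np.mean((denoised[mask] - noisy[mask]) ** 2)
print(f"masked self-supervised loss: {loss:.4f}")
```

Minimizing this masked loss over a trainable network, rather than a fixed filter, is what lets such methods train without any clean reference images.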
Page responsible: Ola Leifler
Last updated: 2020-06-11