
Examensarbeten och uppsatser / Final Theses

Framläggningar på IDA / Presentations at IDA


Se även framläggningar annonserade hos ISY och ITN i Norrköping / See also presentations announced at ISY and ITN in Norrköping (in Swedish)

Unless stated otherwise, the presentation is in Swedish.

Due to the current distance mode, thesis presentations during spring 2020 will take place online. See more information on the page about online presentations (also linked in the menu on the left). If a password is required to access a thesis presentation, please contact the examiner (type the examiner's name into the search field in the top right and choose "Sök IDA-anställda" in the menu).

WExUpp - upcoming presentations
  • 2020-09-24 at 15:00 via https://liu-se.zoom.us/j/67132275715

    A Study of Existing Techniques for Building an Efficient GraphQL Server

    Author: Lukas Lindqvist
    Opponents: Jacob Lundberg, Marcus Odlander
    Supervisor: Sijin Cheng
    Examiner: Olaf Hartig
    Level: Advanced (30 credits)

    In 2016, Facebook open-sourced GraphQL, which they had developed internally to minimize the amount of data needed by their mobile clients. A GraphQL service can be very chatty with the data store that backs it. This thesis identified and examined three techniques for reducing this chattiness and studied how they could improve the performance of a GraphQL service. The first technique was batching, where multiple requests to the data store are combined into one. The second was a request-specific cache: during the resolution of a single query, the same data is sometimes needed multiple times, and the cache avoids repeated trips to the data store. The third was field-specific resolvers, where the data for a specific field is resolved as late as possible so as not to overfetch from the data store. The results show that large reductions in the number of requests to the data store can be achieved, especially with the batching technique. The other two techniques also showed improvements, although smaller ones. Similar results were observed for throughput.
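    Two of the techniques described above, batching and a request-specific cache, can be illustrated with a small sketch. The data store, keys, and class names below are hypothetical illustrations, not the implementation studied in the thesis.

```python
class DataStore:
    """Pretend backend; counts how many round trips it receives."""
    def __init__(self, rows):
        self.rows = rows
        self.request_count = 0

    def fetch_many(self, keys):
        self.request_count += 1          # one round trip per call
        return {k: self.rows[k] for k in keys}

class RequestContext:
    """Lives for a single GraphQL query; batches and caches lookups."""
    def __init__(self, store):
        self.store = store
        self.cache = {}                  # request-specific cache

    def load(self, keys):
        missing = [k for k in keys if k not in self.cache]
        if missing:                      # batching: one trip for all cache misses
            self.cache.update(self.store.fetch_many(missing))
        return [self.cache[k] for k in keys]

store = DataStore({1: "Alice", 2: "Bob", 3: "Carol"})
ctx = RequestContext(store)
ctx.load([1, 2])             # one batched request for keys 1 and 2
ctx.load([2, 3])             # key 2 served from cache; only key 3 fetched
print(store.request_count)   # -> 2, instead of 4 naive single-key fetches
```

    Because the cache lives in the request context, it is discarded when the query finishes, so no cross-request staleness is introduced.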

  • 2020-09-30 at 10:15 via https://liu-se.zoom.us/j/67175333925

    An Approach to Extending Ontologies in the Nanomaterials Domain

    Author: Olumide Leshi
    Opponent: Simon Mehari
    Supervisor: Huanyu Li
    Examiner: Patrick Lambrix
    Level: Advanced (30 credits)

    Over the last decade or two, data-driven science workflows have become increasingly popular, and semantic technology has been relied on to align often parallel research efforts across domains and to foster interoperability and data sharing. A key challenge, however, is the size of the data and the pace at which it is generated, so much so that manual procedures lag behind, calling for the automation of most workflows.

    This study continues the investigation of ways in which some tasks performed by experts in the nanotechnology domain, specifically in ontology engineering, could benefit from automation. An approach featuring phrase-based topic modelling and formal topical concept analysis, combined with formal implication rules, is motivated as a means to uncover new concepts and axioms relevant to two nanotechnology-related ontologies.

    A corpus of 2,715 nanotechnology research articles is used to show, across a number of experiments, that the approach scales. The usefulness of document text-ranking as an alternative form of input to topic models is highlighted, as is the benefit of implication rules for concept discovery. In total, the approach uncovers 203 new concepts that extend the referenced ontologies.
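    The formal concept analysis underlying the approach can be sketched on a toy object-attribute context. The objects and attributes below are illustrative placeholders, not terms from the thesis's nanotechnology corpus.

```python
# A formal context maps objects to the attributes they have.
context = {
    "nanotube":  {"carbon", "cylindrical"},
    "fullerene": {"carbon", "spherical"},
    "graphene":  {"carbon", "planar"},
}

def intent(objects):
    """Attributes shared by all given objects."""
    sets = [context[o] for o in objects]
    return set.intersection(*sets) if sets else set()

def extent(attrs):
    """Objects that have all given attributes."""
    return {o for o, a in context.items() if attrs <= a}

# Closing a set of objects (intent, then extent) yields a formal
# concept: a maximal pair (extent, intent) that cannot be enlarged.
objs = {"nanotube", "fullerene"}
b = intent(objs)             # attributes common to both objects
a = extent(b)                # all objects sharing those attributes
print(sorted(a), sorted(b))  # -> ['fullerene', 'graphene', 'nanotube'] ['carbon']
```

    Each such concept is a candidate class for the ontology; implication rules between attribute sets then suggest subsumption axioms between the discovered concepts.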

  • 2020-10-02 at 16:00 via https://liu-se.zoom.us/j/66871673410

    A Performance Comparison of Auto-Generated GraphQL Server Implementations

    Authors: Markus Larsson, David Ångström
    Opponents: Christoffer Akouri, Jesper Eriksson
    Supervisor: Sijin Cheng
    Examiner: Olaf Hartig
    Level: Basic (16 credits)

    Creating a GraphQL schema and server implementation requires time, effort, and knowledge, yet it is a prerequisite for running GraphQL over an existing legacy database. For this reason, several vendors have created tools that reduce development time by auto-generating a GraphQL schema and server implementation from an existing database. This bachelor thesis selects two such tools, runs benchmarks against them, and compares the results, using a benchmark methodology based on technical difficulties (choke points). The results suggest that Hasura's throughput is higher than PostGraphile's, while query execution time and query response time are similar. PostGraphile is better at paging without offset and at ordering, but in all other cases Hasura outperforms PostGraphile or shows similar results.

  • 2020-10-05 at 10:15 via https://teams.microsoft.com/dl/launcher/launcher.html?type=meetup-join&deeplinkId=09191253-f365-4aec-9e42-b1ae16d399d7&directDl=true&msLaunch=true&enableMobilePage=true&url=%2F_%23%2Fl%2Fmeetup-join%2F19%3Ameeting_M2ZhZjgyZGUtNjg1Mi00MmY4LWJiYzQtZDE4ZmI2NjhlM2Y1@thread.v2%2F0%3Fcontext%3D%257b%2522Tid%2522%253a%2522913f18ec-7f26-4c5f-a816-784fe9a58edd%2522%252c%2522Oid%2522%253a%252263f9fe05-f889-45bb-bbea-497c2e74fc9b%2522%257d%26anon%3Dtrue&suppressPrompt=true

    Kravställning av användarupplevelsen vid felsökning och debugging av nätverk med mikrotjänster

    Author: Isak Sestorp
    Opponent: Daniel Wassing
    Supervisor: Anders Fröberg
    Examiner: Erik Berglund
    Level: Advanced (30 credits)

    Troubleshooting and debugging faults in microservice networks is a complex task due to the diversity in how both microservice networks and the microservices within them are composed. Since networks often contain a large number of microservices, it is difficult to understand where faults have occurred, and doing so demands a high level of domain expertise. The communication flow between microservices is central to understanding why a fault arose. This thesis therefore investigates how communication flows between microservices can be visualized to support the user experience in the troubleshooting and debugging workflow.

  • 2020-10-12 at 08:00 via https://liu-se.zoom.us/j/62462017401

    Improving performance of a Mixed Reality Application on the edge with hardware acceleration

    Authors: Christoffer Akouri, Jesper Eriksson
    Supervisor: Klervie Toczé
    Examiner: Simin Nadjm-Tehrani
    Level: Basic (16 credits)

    Using specialized hardware to accelerate workloads has the potential to bring great performance improvements to various applications. Speeding up the slowest component of an application makes the whole application execute faster, since it can never be faster than its slowest part. This work investigates two modifications that use additional hardware support to improve an existing virtual reality application. The existing application uses a server computer to render virtual objects, which are then sent to a mobile phone at the end user. On the server side, the Simultaneous Localization And Mapping (SLAM) library was replaced, and the software video encoder and decoder used for video streaming were exchanged for the hardware ones present on the server computer. Small changes also had to be made to the client-side application so that latency measurement would keep working when the server-side encoder was changed.

    Accelerating SLAM with CUDA increased the number of frames processed per second and reduced frame processing time, at the cost of added latency between the end device and the edge device. Using the hardware encoder and decoder yielded no improvement in latency or processed frames; in fact, the hardware encoder and decoder performed worse than the base configuration. The reduced frame processing time indicates that the CUDA platform is useful if the additional latency introduced by the implementation can be reduced or removed.



Page responsible: Ola Leifler
Last updated: 2020-06-11