
IDA Machine Learning Seminars - Spring 2015


Wednesday, February 4, 2015, 3.15 pm.

What You See is Less Than What You Get - Estimating Visually Non-Observable Object Properties
Hedvig Kjellström, Computer Vision and Active Perception Lab (CVAP), Royal Institute of Technology (KTH).
Abstract: The great majority of object analysis methods are based on visual object properties - objects are categorized according to how they appear in images. Visual appearance is measured in terms of image features (e.g., SIFTs) extracted from images or video. However, besides appearance, objects also have many other properties that can be of interest, e.g., for a robot that wants to employ them in activities: temperature, weight, surface softness, and also the functionalities or affordances of the object, i.e., how it is intended to be used. One example is chairs. Chairs can look vastly different, but have one thing in common: they afford sitting.
I will present some of the work in my group at KTH on modeling object affordances and functionality.
Location: Visionen
Organizer: Mattias Villani


Wednesday, March 4, 2015, 3.15 pm.

The Democratization of Optimization
Kristian Kersting, Computer Science Department, TU Dortmund University.
Abstract: Democratizing data does not mean dropping a huge spreadsheet on everyone's desk and saying "good luck"; it means making data mining, machine learning and AI methods usable in such a way that people can easily instruct machines to have a "look" at the data and help them to understand and act on it. A promising approach is the declarative "Model + Solver" paradigm that is behind many recent revolutions in computing: instead of outlining how a solution should be computed, we specify what the problem is using some modeling language and solve it using highly optimized solvers. Analyzing data, however, involves more than just the optimization of an objective function subject to constraints. Before optimization can take place, a large effort is needed to not only formulate the model but also to put it in the right form. We must often build models before we know what individuals are in the domain and, therefore, before we know what variables and constraints exist. Hence modeling should facilitate the formulation of abstract, general knowledge. This not only concerns the syntactic form of the model but also needs to take into account the abilities of the solvers; the efficiency with which the problem can be solved is to a large extent determined by the way the model is formalized. In this talk, I shall review our recent efforts on relational linear programming. It can reveal the rich logical structure underlying many AI and data mining problems such as MAP inference and LP SVMs, both at the formulation and at the optimization level. Ultimately, it will make optimization several times easier and more powerful than current approaches, and is a step towards achieving the grand challenge of automated programming as sketched by Jim Gray in his Turing Award Lecture.
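To make the "Model + Solver" idea concrete, here is a minimal sketch of a declaratively stated, ordinary (ground, non-relational) linear program solved with scipy.optimize.linprog; the numbers are purely illustrative, and the relational linear programming discussed in the talk adds a logical templating layer on top of models of this kind.

    # Declarative "Model + Solver": we only state WHAT the problem is
    # (objective and constraints); an off-the-shelf solver decides HOW
    # to compute the solution. The coefficients are purely illustrative.
    import numpy as np
    from scipy.optimize import linprog

    # maximize 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x, y >= 0
    c = np.array([-3.0, -2.0])            # linprog minimizes, so negate
    A_ub = np.array([[1.0, 1.0],
                     [1.0, 3.0]])
    b_ub = np.array([4.0, 6.0])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print("optimal (x, y):", res.x, "objective:", -res.fun)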
Location: Visionen
Organizer: Jose M. Peña


Wednesday, April 1, 2015, 3.15 pm.

Gaussian process optimization for approximate Bayesian inference
Johan Dahlin, Dept. of Electrical Engineering, Linköping University.
Abstract: We discuss a novel method for approximate Bayesian parameter inference in nonlinear state space models (SSMs). The method is an iterative procedure based on Gaussian process optimisation in combination with sequential Monte Carlo. The Gaussian process prior is utilized to create a surrogate function of the posterior distribution, which is used to create a Laplace approximation. We demonstrate that the method returns reasonable parameter estimates with a lower computational cost than other gradient-free alternatives. Furthermore, we incorporate recent advances in approximate Bayesian computation (ABC) to handle SSMs with intractable likelihoods. We make use of this to infer the parameters in a challenging stochastic volatility model with alpha-stable returns using real-world data.
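A rough sketch of the surrogate idea, assuming only that a noisy estimate of the log-posterior can be queried: fit a Gaussian process to a handful of such evaluations, maximize the surrogate, and form a Laplace approximation from its curvature at the maximizer. In the method described in the talk the noisy evaluations come from sequential Monte Carlo and new points are chosen by an acquisition rule; here a simple analytic log-posterior and a fixed grid stand in for both.

    # Toy sketch: GP surrogate of a noisy log-posterior + Laplace approximation.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def noisy_log_posterior(theta):
        # stand-in for an SMC-based estimate of log-likelihood + log-prior
        return -0.5 * (theta - 1.2) ** 2 / 0.3 + 0.05 * np.random.randn()

    thetas = np.linspace(-1.0, 3.0, 15)[:, None]
    values = np.array([noisy_log_posterior(t[0]) for t in thetas])

    gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(0.01))
    gp.fit(thetas, values)

    # maximize the surrogate mean on a dense grid (a full implementation
    # would instead pick new evaluation points iteratively)
    grid = np.linspace(-1.0, 3.0, 400)[:, None]
    theta_hat = grid[np.argmax(gp.predict(grid)), 0]

    # Laplace approximation: numerical second derivative of the surrogate mean
    h = 1e-2
    m = lambda t: gp.predict(np.array([[t]]))[0]
    hess = (m(theta_hat + h) - 2 * m(theta_hat) + m(theta_hat - h)) / h ** 2
    print("posterior approx: N(mean=%.3f, var=%.3f)" % (theta_hat, -1.0 / hess))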
Location: Visionen
Organizer: Mattias Villani


Wednesday, April 29, 2015, 3.15 pm.

Deep Gaussian Processes
Neil Lawrence, Dept. of Computer Science, University of Sheffield.
Abstract: In this talk we describe how deep neural networks can be modified to produce deep Gaussian process models. The framework of deep Gaussian processes allows for unsupervised learning, transfer learning, semi-supervised learning, multi-task learning and principled handling of different data types (count data, binary data, heavy-tailed noise distributions). The main challenge is to handle the intractabilities. We review the variational bounds used under the framework of variational compression and present some initial results from deep Gaussian process models.
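As a small illustration of the construction (not of the variational-compression inference discussed in the talk), a draw from a two-layer deep Gaussian process prior can be obtained by composing draws from ordinary GP priors, feeding the output of the first layer in as the input of the second; the kernel choices and lengthscales below are arbitrary.

    # Sample one path from a two-layer deep GP prior: y = f2(f1(x)).
    import numpy as np

    def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
        d = a[:, None] - b[None, :]
        return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

    def sample_gp(x, lengthscale=1.0, jitter=1e-6):
        K = rbf_kernel(x, x, lengthscale) + jitter * np.eye(len(x))
        return np.linalg.cholesky(K) @ np.random.randn(len(x))

    x = np.linspace(-3, 3, 200)
    h = sample_gp(x, lengthscale=1.0)    # layer 1: hidden function of the input
    y = sample_gp(h, lengthscale=0.5)    # layer 2: GP applied to layer-1 output
    # y is one sample path from a two-layer deep GP prior evaluated at x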
Location: Visionen
Organizer: Mattias Villani


Wednesday, May 27, 2015, 3.15 pm.

Improving Web 2.0 Recommendation Leveraging User Comments via Latent Model Regularization
Min-Yen Kan, School of Computing, National University of Singapore.
Abstract: The Web has experienced a renaissance in the form of user-generated resources. This characteristic of the new Web has presented new challenges in retrieving, managing and utilizing the great volume of Web resources. We report on our work on two item-centric applications, both leveraging user comments: improving the prediction of item popularity, and clustering items into semantic groups.
For popularity prediction, we propose an alternative solution that leverages user comments, which are more accessible than view counts. Comments are sparse, however, so predictions based on raw comment counts alone do not perform well. To deal with this sparsity, we mine comments to recover additional signal, such as social influence. By modeling comments as a time-aware bipartite graph, we propose a regularization-based ranking algorithm that accounts for temporal, social influence and current popularity factors to predict the future popularity of items.
For clustering items (i.e., Web resources such as videos and images) into semantic groups, we systematically investigate how user-generated comments can be used to improve the clustering of Web 2.0 items. Because the available data sources differ in quality, we invoke multi-view clustering, in which each data source represents a view, aiming to best leverage the utility of the different views. To combine multiple views under a principled framework, we propose CoNMF (Co-regularized Non-negative Matrix Factorization), which extends NMF to multi-view clustering by jointly factorizing the multiple matrices through co-regularization.
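The following is a rough sketch of the co-regularization idea for two views, with an illustrative coupling penalty and plain projected-gradient updates rather than the exact CoNMF objective and update rules: each view's item-by-feature matrix is factorized separately, and the coupling term pulls the per-view item factors toward each other so that the views agree on the clustering.

    # Sketch of co-regularized NMF for two views of the same items.
    import numpy as np

    def conmf_sketch(X1, X2, k=5, lam=0.5, lr=1e-3, iters=2000, seed=0):
        rng = np.random.default_rng(seed)
        n = X1.shape[0]                        # both views share the same items
        W1 = rng.random((n, k)); H1 = rng.random((k, X1.shape[1]))
        W2 = rng.random((n, k)); H2 = rng.random((k, X2.shape[1]))
        for _ in range(iters):
            R1 = W1 @ H1 - X1                  # residuals per view
            R2 = W2 @ H2 - X2
            gW1 = R1 @ H1.T + lam * (W1 - W2)  # coupling pulls W1 toward W2
            gW2 = R2 @ H2.T + lam * (W2 - W1)
            gH1 = W1.T @ R1
            gH2 = W2.T @ R2
            W1 -= lr * gW1
            W2 -= lr * gW2
            H1 -= lr * gH1
            H2 -= lr * gH2
            for M in (W1, H1, W2, H2):         # projection keeps factors non-negative
                np.maximum(M, 0.0, out=M)
        return W1, W2

    # toy usage: 100 items described in two feature spaces (e.g. content and comments)
    X1 = np.abs(np.random.randn(100, 40))
    X2 = np.abs(np.random.randn(100, 30))
    W1, W2 = conmf_sketch(X1, X2)
    labels = W1.argmax(axis=1)                 # a simple cluster assignment per item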
Location: Visionen
Organizer: Arne Jönsson




The seminars are typically held every fourth Wednesday at 15.15-16.15 in Visionen.
For further information, or if you want to be notified about the seminars by e-mail, please contact Mattias Villani.


Page responsible: Fredrik Lindsten
Last updated: 2015-07-19