
IDA Machine Learning Seminars - Spring 2017

Wednesday, February 1, 3.15 pm, 2017

On priors and Bayesian predictive methods for covariate selection in large p, small n regression
Aki Vehtari, Computer Science, Aalto University
Abstract: I first present recent developments in hierarchical shrinkage priors for representing sparsity assumptions about covariate effects. I review an easy and intuitive way of setting up the prior based on our prior beliefs about the number of effectively nonzero coefficients in the model. I also discuss the computational issues that arise when using hierarchical shrinkage priors. I emphasise the separation between prior information on sparsity and the decision-theoretic approach for selecting a smaller set of covariates with good predictive performance. I briefly review a comparison of Bayesian predictive methods for model selection, and discuss in more detail the projection predictive variable selection approach for regression.
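As a flavour of the idea, the following is a minimal sketch (not the speaker's code) of drawing regression coefficients from a horseshoe prior, with the global shrinkage scale chosen from a prior guess p0 for the number of effectively nonzero coefficients; all numbers (n, p, p0, sigma) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setting: p candidate covariates, n observations,
# and a prior guess p0 for the number of effectively nonzero coefficients.
n, p, p0 = 100, 200, 5
sigma = 1.0  # assumed noise standard deviation

# Choose the global scale tau0 from the guessed sparsity level.
tau0 = (p0 / (p - p0)) * (sigma / np.sqrt(n))

# Draw coefficients from a horseshoe prior:
# beta_j ~ N(0, tau^2 * lambda_j^2), lambda_j ~ C+(0, 1), tau ~ C+(0, tau0).
tau = np.abs(tau0 * rng.standard_cauchy())
lam = np.abs(rng.standard_cauchy(p))
beta = rng.normal(0.0, tau * lam)

# Most draws are shrunk toward zero; a few heavy-tailed lambda_j escape.
print(tau0, np.median(np.abs(beta)))
```

The point of the sketch is only that the prior encodes sparsity directly: a small global scale tau pulls all coefficients toward zero, while the heavy-tailed local scales lambda_j let a handful of coefficients remain large.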
Location: Ada Lovelace (Visionen)
Organizer: Mattias Villani

Wednesday, March 1, 3.15 pm, 2017

Visualizing Data using Embeddings
Laurens van der Maaten, Facebook AI Research
Abstract: Visualization techniques are essential tools for every data scientist. Unfortunately, the majority of visualization techniques can only be used to inspect a limited number of variables of interest simultaneously. As a result, these techniques are not suitable for big data that is very high-dimensional.
An effective way to visualize high-dimensional data is to represent each data object by a two-dimensional point in such a way that similar objects are represented by nearby points, and that dissimilar objects are represented by distant points. The resulting two-dimensional points can be visualized in a scatter plot. This leads to a map of the data that reveals the underlying structure of the objects, such as the presence of clusters.
The talk presents techniques to embed high-dimensional objects in a two-dimensional map. In particular, it focuses on a technique called t-Distributed Stochastic Neighbor Embedding (t-SNE) that produces substantially better results than alternative techniques. We demonstrate the value of t-SNE in domains such as computer vision and bioinformatics. In addition, we show how to scale up t-SNE to data sets with millions of objects, and we present variants of the technique that can visualize objects whose similarities cannot appropriately be modeled in a single map (such as semantic similarities between words), and that can visualize data based on partial similarity rankings of the form "A is more similar to B than to C".
The work presented in this talk was done jointly with Geoffrey Hinton and Kilian Weinberger.
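For readers who want to try the technique described above, a minimal sketch using scikit-learn's t-SNE implementation (an assumption; the speaker's own implementations are separate) embeds the 64-dimensional handwritten-digits data into a 2-D map:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)
X, y = X[:500], y[:500]  # subsample to keep the example fast

# Similar digits end up as nearby 2-D points, dissimilar ones far apart.
emb = TSNE(n_components=2, perplexity=30,
           init="pca", random_state=0).fit_transform(X)

print(emb.shape)  # one 2-D point per input object
```

Plotting `emb` as a scatter plot coloured by `y` reveals the cluster structure the abstract describes, with one cluster per digit class.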
Location: Ada Lovelace (Visionen)
Organizer: Leif Jonsson

Wednesday, March 29, 3.15 pm, 2017

Machine Learning in Production: Challenges and Choices
Theodoros Vasiloudis, SICS Swedish ICT
Abstract: As machine learning (ML) finds its way into more and more areas in our life, software developers from all fields are asked to navigate an increasingly complex maze of tools and algorithms to extract value out of massive datasets. Despite the importance that machine learning programs have in production systems, the specific challenges they pose have not been studied extensively.
In this talk we will present an overview of the literature on machine learning in production and discuss the challenges of a complete deployment pipeline: design, implementation, testing, deployment, and monitoring. The talk will cover considerations such as data readiness and the selection of algorithms and software, and will point out some common mistakes and misconceptions in the development and deployment of machine learning systems.
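A concrete instance of the data-readiness concern mentioned above is a validation step that runs before each training job. The following is a hypothetical sketch (the column names and expected types are purely illustrative):

```python
import pandas as pd

# Hypothetical schema a production pipeline might enforce before training.
EXPECTED = {"user_id": "int64", "age": "float64", "label": "int64"}

def check_readiness(df: pd.DataFrame) -> list:
    """Return a list of human-readable problems found in the batch."""
    problems = []
    for col, dtype in EXPECTED.items():
        if col not in df.columns:
            problems.append(f"missing column: {col}")
        elif str(df[col].dtype) != dtype:
            problems.append(f"{col}: expected {dtype}, got {df[col].dtype}")
        elif df[col].isna().any():
            problems.append(f"{col}: contains missing values")
    return problems

batch = pd.DataFrame({"user_id": [1, 2], "age": [34.0, None], "label": [0, 1]})
print(check_readiness(batch))
```

Failing the batch early and visibly, rather than training on silently degraded data, is one of the monitoring practices such deployment pipelines rely on.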
Location: Ada Lovelace (Visionen)
Organizer: Oleg Sysoev

Wednesday, April 26, 3.15 pm, 2017

Deep Learning with Uncertainty
Andrew Gordon Wilson, Cornell University
Abstract: In this talk, we approach model construction from a probabilistic perspective. First, we introduce a scalable Gaussian process framework capable of learning expressive kernel functions on large datasets. We then develop this framework into an approach for deep kernel learning, with non-parametric capacity, inductive biases given by deep architectures, full predictive distributions, and automatic complexity calibration. We will consider applications in image inpainting, crime prediction, epidemiology, counterfactuals, autonomous vehicles, astronomy, and human learning, including very recent state-of-the-art results.
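As background for the talk, a minimal Gaussian process regression sketch in NumPy with a fixed RBF kernel (the scalable and deep-kernel machinery the abstract describes is omitted; all hyperparameter values are illustrative assumptions):

```python
import numpy as np

def rbf(a, b, lengthscale=0.2, variance=1.0):
    """Squared-exponential kernel matrix between 1-D input arrays a and b."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(20)  # noisy samples

noise = 0.1 ** 2
K = rbf(x, x) + noise * np.eye(20)   # kernel matrix at the training inputs
xs = np.linspace(0, 1, 50)           # test inputs
Ks = rbf(xs, x)

# Posterior mean and pointwise variance of the GP at the test inputs.
alpha = np.linalg.solve(K, y)
mean = Ks @ alpha
var = np.diag(rbf(xs, xs) - Ks @ np.linalg.solve(K, Ks.T))

print(mean.shape, var.shape)
```

The full predictive distribution (mean and variance per test point) is what distinguishes this probabilistic approach from a point-estimate regressor; deep kernel learning replaces the fixed RBF kernel with one parameterised by a deep network.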
Location: Visionen
Organizer: Per Sidén

Wednesday, May 24, 3.15 pm, 2017

Corpus Curation, Latent Semantics, and the Theory of Topic Modeling
David Mimno, Cornell University
Abstract: Topic models have been in widespread use for more than a decade. But we are only now starting to recognize what these models are really doing, and how and why people actually use them. In this talk I'll cover recent theoretical work that places topic models in a larger context that includes LSA and word embeddings. I'll also cover practical work that recognizes the choices made in using topic models to study documents, from epistemology to stemming and stopword removal. These results have specific implications for how we use statistical document models. But the effects of those choices also inform us about what these models are really doing.
Location: Ada Lovelace (Visionen)
Organizer: Måns Magnusson

Page responsible: Mattias Villani
Last updated: 2017-07-11