IDA Machine Learning Seminars - Fall 2015
Wednesday, September 16, 2015, 3.15 pm. Deep Convolutional Networks and their Impact on Solving Large Scale Visual Recognition Problems
Hossein Azizpour, Computer Vision and Active Perception Lab, Royal Institute of Technology, Sweden.
Abstract: In this talk we give an overview of modern deep convolutional networks (their architecture, individual components, training algorithms, and training data) and the dramatic improvement their introduction has brought to many standard visual recognition problems (object detection and image classification). We hope to convince the audience that the "deep learning" hype has substance, especially in relation to learning powerful image representations from large repositories of labelled data. We will also present some of the work performed at the Computer Vision Group at KTH on learning generic image representations using deep ConvNets and show their great potential for transfer learning.
Organizer: Mattias Villani
Friday, October 2, 2015, 1.15 pm. Deep Neural Networks for Visual Pattern Recognition
Dan Ciresan, Istituto Dalle Molle di studi sull'intelligenza artificiale (IDSIA), Lugano, Switzerland.
Abstract: GPU-optimized Deep Neural Networks (DNNs) excel at visual pattern recognition tasks. They are successfully used for automotive problems such as pedestrian and traffic sign detection. DNNs are fast and extremely accurate, making it possible for the first time to automatically segment and reconstruct the neuronal connections in large sections of brain tissue. This will bring a new understanding of how biological brains work. DNNs also power the automatic navigation of a quadcopter in the forest.
Organizer: Mattias Villani and ContextVision AB.
Wednesday, October 14, 2015, 3.15 pm. Dumbed Down Models for Language
Anders Søgaard, Center for Language Technology, University of Copenhagen.
Abstract: Statistical NLP is an interminable fight against overfitting. In the early years, we blamed the curse of dimensionality, but poor evaluation set-ups using held-out in-sample data led us to believe that we would eventually win. Today we are beginning to realize that we need to dumb down, or regularize, our models some more. We survey common and less common regularization methods and show their importance for robust performance across linguistic variation. Finally, we discuss recent work on more expressive (deeper) models for NLP, asking ourselves when and why they work.
Organizer: Lars Ahrenberg
Wednesday, November 18, 2015, 3.15 pm. Visual Object Recognition via Biologically-based Error Driven Learning in Thalamocortical Circuits
Randall C. O'Reilly, Department of Psychology and Neuroscience, University of Colorado Boulder.
Abstract: The question of whether the brain uses something like error backpropagation to learn has been fraught with controversy since the algorithm was developed in the 1980s. Recent developments in deep convolutional neural networks learning via backpropagation have renewed interest in this question. Building on extensive work in this area, I will present some new ideas about how error-driven learning might work in the neocortex, leveraging properties of the deep cortical layers and their interactions with the thalamus. We have applied these models to visual object recognition and demonstrated promising initial results in figure-ground processing, where top-down attentional dynamics and predictive learning work together to dynamically extract the figure from the background, making the subsequent recognition job easier for higher layers.
Organizer: Rita Kovordányi
Wednesday, December 9, 2015, 3.15 pm. Applications of Constraint Reasoning and Optimization in Data Analysis
Matti Järvisalo and Antti Hyttinen, Dept. of Computer Science, University of Helsinki.
Abstract: The integration of constraint solving and machine learning has recently been identified as an important research direction with high potential. Constraint solvers today offer generic tools for efficiently and optimally solving hard computational problems in a variety of real-world and AI domains. In this talk, we present an overview of our recent work on harnessing Boolean constraint optimization procedures for data analysis, especially for learning optimal structures of various classes of probabilistic graphical models such as Bayesian networks, causal graphs, and chain graphs.
Organizer: Jose M. Peña
The seminars are typically held every fourth Wednesday, 15.15-16.15, in Visionen.
For further information, or if you want to be notified about the seminars by e-mail, please contact Mattias Villani.