Invited Speakers

Title:

Ontology Design Patterns for Large-Scale Data Interchange and Discovery

Abstract:

Ontology design patterns have been conceived as modular and reusable building blocks for ontology modeling. Motivated by our ongoing work on Earth Science applications, we show how ontology design patterns also lend themselves to large-scale data integration under heterogeneity. In particular, they simplify several key aspects of the ontology application life cycle, including collaborative modeling and updates, incorporation of different perspectives, data-model alignment, and social barriers to adoption.

Short bio:

Pascal Hitzler is Director of the Data Semantics Laboratory at the Department of Computer Science and Engineering at Wright State University in Dayton, Ohio, U.S.A. From 2004 to 2009, he was Akademischer Rat at the Institute for Applied Informatics and Formal Description Methods (AIFB) at the University of Karlsruhe in Germany, and from 2001 to 2004 he was a postdoctoral researcher at the Artificial Intelligence Institute at TU Dresden in Germany. In 2001 he obtained a PhD in Mathematics from the National University of Ireland, University College Cork, and in 1998 a Diplom (Master's equivalent) in Mathematics from the University of Tübingen in Germany. His research record lists over 250 publications in such diverse areas as the Semantic Web, neural-symbolic integration, knowledge representation and reasoning, machine learning, denotational semantics, and set-theoretic topology. He is Editor-in-Chief of the Semantic Web journal by IOS Press and of the IOS Press book series Studies on the Semantic Web. He is co-author of the W3C Recommendation OWL 2 Primer and of the book Foundations of Semantic Web Technologies (CRC Press, 2010), which was named one of seven Outstanding Academic Titles 2010 in Information and Computer Science by the American Library Association's Choice Magazine and has been translated into German and Chinese. He is on the editorial boards of several journals and book series and on the steering committee of the RR conference series, and he frequently serves as a conference organizer in various roles, including General Chair (RR2012), Program Chair (AIMSA2014, ODBASE2011, RR2010), Track Chair (ESWC2013, ESWC2011, ISWC2010), Workshop Chair (K-Cap2013), and Sponsor Chair (ISWC2013, RR2009, ESWC2009). For more information, see http://www.pascal-hitzler.de.

Title:

Concepts in Motion

Abstract:

The history of ideas traces the development of ideas such as evolution, liberty, or science in human thought as represented in texts. Recent contributions (Michel et al. 2011) suggest that the increasing quantities of digitally available historical data can be of invaluable help to historians of ideas.

However, these and similar contributions usually apply generic computational methods, simple n-gram analyses, and shallow NLP tools to historical textual material. This practice contrasts strikingly with the reality of research in the history of ideas and related fields such as the history of science. Researchers in this area typically apply painstakingly fine-grained analyses to diverse textual material of extremely high conceptual density. Can these opposites be reconciled? In other words: is a digital history of ideas possible?

Yes, I argue, but only by requiring historians of ideas to provide explicitly structured semantic framings of domain knowledge before investigating texts computationally (models in the sense of Betti and van den Berg 2014), and to continually feed findings from the interpretive point of view back into a process of semi-automatic ontology extraction.

(joint work with Hein van den Berg)

Betti, Arianna, and Hein van den Berg. “Modeling the History of Ideas.” Forthcoming in British Journal for the History of Philosophy 22 (3), 2014. http://j.mp/BettivandenBerg

Michel, Jean-Baptiste, Yuan Kui Shen, Aviva Presser Aiden, Adrian Veres, Matthew K. Gray, Joseph P. Pickett, Dale Hoiberg, et al. “Quantitative Analysis of Culture Using Millions of Digitized Books.” Science 331 (6014), 2011: 176–82. doi: 10.1126/science.1199644

Short bio:

Arianna Betti is Professor of Philosophy of Language at the University of Amsterdam. After studying historical and systematic aspects of ideas such as axiom, truth, and fact (Against Facts, MIT Press, 2015, in press), she is now trying to trace the development of such ideas with computational techniques. She has done research at the universities of Krakow, Salzburg, Graz, Leiden, Warsaw, and Lund, and has held research grants from, among others, the European Research Council, the Italian CNR, and the Dutch NWO and CLARIN-NL. She is a member of the Young Academy of the KNAW and of several other international organisations dealing with research policy and with topics such as science and society, open access, and the sustainability of research.

Title:

Ontology engineering for and by the masses: are we already there?

Abstract:

We can assume that most attendees of this conference have created or contributed to the development of at least one ontology, and many of them have several years of experience in ontology development. The area of ontology engineering is already quite mature, so creating ontologies should not be a very difficult task. We have methodologies that guide us through the process of ontology development; we have plenty of techniques that we can use, from the knowledge acquisition stages to ontology usage; and we have tools that facilitate the transition from our ontology conceptualizations to actual implementations, including support for tasks like debugging, documenting, modularising, and reasoning, among many others. However, how many ontology developers are there in the world today? Hundreds, thousands, tens of thousands maybe? Not as many as we might like… In fact, whenever I set up a heterogeneous ontology development team in a domain, I still encounter many difficulties in getting the team running at full speed and producing high-quality results. In this talk I will share some of my most recent experiences in setting up several small ontology development teams, composed of a combination of city managers, policy makers, and computer scientists, for the development of a set of ontologies for an upcoming technical norm on “Open Data for Smart Cities”, and I will discuss the main success factors as well as the threats and weaknesses of the process, in the hope that this can shed some light on how to make ontology engineering more accessible to all.

Short bio:

Oscar Corcho is an Associate Professor at Universidad Politecnica de Madrid (UPM). His research activities are focused on Semantic e-Science and the Real World Internet, although he also works in the more general areas of the Semantic Web and Ontological Engineering. In these areas, he has participated in a number of EU projects (DrInventor, Wf4Ever, PlanetData, SemsorGrid4Env, ADMIRE, OntoGrid, Esperonto, Knowledge Web, and OntoWeb) and Spanish R&D projects, as well as privately funded projects such as ICPS, funded by the World Health Organisation, and HALO, funded by Vulcan Inc. Previously, he worked as a Marie Curie research fellow at the University of Manchester and was a research manager at iSOCO. He holds a degree in Computer Science, an MSc in Software Engineering, and a PhD in Computational Science and Artificial Intelligence from UPM. He was awarded the Third National Award by the Spanish Ministry of Education in 2001. He has published several books (including "Ontological Engineering", which is used as a reference book in many universities worldwide) and more than 100 papers in journals, conferences, and workshops. He has participated in the organisation or in the programme committees of many relevant international conferences and workshops.

Invited speaker for the Doctoral Consortium

Title:

The many ways of research in semantic technologies

Abstract:

Our discipline is strongly connected to computing, and as computer scientists we are used to exploring our world by building it: we develop systems that can then be tested, benchmarked, and evaluated. We invent new technologies, tools, and methods as much as we observe the ways they are being used. As such, our practices are not established as clear and formal research methodologies to the extent that they are in other disciplines, including the sciences and the social sciences. In this talk, we will discuss how, despite these differences, our research in semantic technologies does (or ought to) follow common, comparable, and robust research methodologies that integrate these practices. In particular, through concrete examples, we will discuss these practices in comparison to other disciplines, and show how they in fact correspond to many different research methodologies, to which different criteria might apply (e.g. reproducibility, significance, etc.).

Short bio:

Mathieu d'Aquin is a research fellow at the Knowledge Media Institute (KMi) of the Open University in Milton Keynes, UK. The common thread through all his research activities is the Semantic Web, and especially methods and tools for building intelligent applications that rely on formalised knowledge distributed online. He has been particularly involved in the development of the Watson Semantic Web search engine and in many applications of its APIs. As part of several projects, he has worked on many aspects of building and exploiting the Semantic Web, including ontology building, ontology modularization, ontology matching, ontology evolution, and ontology publication. More recently, he has been working on the use of semantic technologies and the Semantic Web for monitoring and managing online personal information. A major part of his activities also concerns contributing to and leading efforts to apply linked data technologies and principles at the Open University and in the education sector in general. These activities are carried out through the LinkedUp project, as well as through developing and exploiting the data.open.ac.uk platform.