Human-Centered Systems (HCS)


Seminars


Thursday March 20, 13-15

Room: Alan Turing

Speaker: Brian Cantwell Smith, University of Toronto

Solving the Halting Problem (and Other Mischief in the Foundations of Computing)

Abstract:

The unsolvability of the halting problem is one of the most famous results in computer science. Curiously, though, the halting problem is easy to solve, if you use non-standard encodings. But as everyone knows, non-standard encodings are illegal. Computability theory requires that numbers be represented on tapes by "reasonable" encodings. This fact raises three foundational questions:

1. If reasonable representations are so important, why are they so little studied?
2. What are the conditions on an encoding for it to be "reasonable"?
3. Where do these reasonableness constraints come from? Are they fundamentally physical, semantical, mathematical, or logical?
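The trick the abstract alludes to can be sketched in a few lines. This is a hypothetical illustration, not material from the talk: if a program's encoding is allowed to carry one extra bit recording whether it halts, the "decider" becomes a trivial lookup, and all the uncomputability is smuggled into the encoding step itself.

```python
# Sketch of why non-standard encodings trivialize the halting problem.
# Assumption: when *constructing* the encoding we somehow already know
# each program's halting status -- producing that bit is exactly as hard
# as the halting problem, which is why such encodings are "unreasonable".

def encode(program, halts_bit):
    # Non-standard encoding: a program is represented together with a
    # bit that records whether it halts on its input.
    return (program, halts_bit)

def halting_decider(encoded_program):
    # Under this encoding, "deciding" halting is just reading the bit.
    _, halts_bit = encoded_program
    return halts_bit

# Programs whose halting status we happen to know in advance:
loop_forever = encode("while True: pass", False)
halt_now = encode("pass", True)

assert halting_decider(loop_forever) is False
assert halting_decider(halt_now) is True
```

The decider is computable, but the encoding function is not; a "reasonable" encoding is, roughly, one that can itself be computed from a standard program description.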

I will propose answers to all three questions. Doing so, though, will require turning our classical understanding of the theory of computing upside down, with implications not only for computer science, but also for artificial intelligence and cognitive science.

Biography:

Brian Cantwell Smith is Professor of Information, Philosophy, and Computer Science at the University of Toronto, where he is also Director of the Coach House Institute, home of the McLuhan Program in Culture & Technology. His research focuses on the conceptual foundations of computation and information, and on metaphysics, ontology, and epistemology. He is the author of “On the Origin of Objects” (MIT, 1996), two volumes of papers forthcoming (in 2014) from Harvard University Press entitled “Indiscrete Affairs,” and a long-rumored 7-volume series on the philosophy of computation to be published by MIT Press. After receiving a doctorate from MIT for research on reflection, he was Principal Scientist at the Xerox Palo Alto Research Center (PARC), adjunct professor in Philosophy and Computer Science at Stanford University, a founder of the Stanford-based Center for the Study of Language and Information (CSLI), and a founder and first President of Computer Professionals for Social Responsibility (CPSR). In 1996 he moved to Indiana University in Bloomington, and from 2001 to 2003 held the Kimberly J. Jenkins University Professorship of Philosophy and New Technologies at Duke University.

Tuesday April 1, 13-15

Room: Alan Turing

Speaker: Anders Jansson, Uppsala universitet

Autonomy and Level of Automation

Today, there is a movement towards enforced conformity between humans and artefacts. This is evident in human factors research, cognitive engineering, human-computer interaction and cognitive science. Human behaviors are explained in terms of information processing activities, and machines are supposed to be human-like. This endeavor towards joint cognitive systems is to some extent laudable and productive – intuitive interfaces and well-adapted systems enhance and augment our abilities in a number of situations and contexts. In this talk, however, I depart from this general trend and suggest a Human-Machine Discrimination framework for the distribution of control and authority between humans and machines. The two levels in the framework discriminate between humans and technology to signify the separation of authority from automation. Automation and design concepts are regarded as hypotheses about the relationship between technology and cognition/collaboration. Such hypotheses must be subject to empirical investigation, as we may otherwise confuse ourselves with ideas about “autonomous cars” and “responsible robots”. Mishaps in terms of automation surprises are conceptually different from errors caused by non-intuitive design solutions. Moreover, design solutions that impede the utilization of expertise can be found on both levels. A model is presented, specifying the properties from which relationships between technology and human cognition can be formulated as general hypotheses. Analysis, design and evaluation of any system can be carried out with the help of the model. I will give some examples of how the model has been used in our research, which mainly concerns how to design for skilled professionals and experienced users. Referring to the examples, I will also comment on concepts like situation awareness, system awareness and edge awareness.

Biography:

Anders Jansson is Associate Professor in Computer Science, Human-Computer Interaction, as well as Associate Professor in Psychology, at Uppsala University. His research topics include human decision-making in complex dynamic systems, cognitive work analysis, methods for knowledge elicitation, human work interaction design, effects of automation, visual design of complex information, and human factors.
Some of the recent papers:

Andersson, A.W., Jansson, A., Sandblad, B. & Tschirner, S. (2013). Recognizing complexity: Visualization for skilled professionals in complex work situations. In Ebert, A., Domik, G., Gershon, N., Scheler, I. & van der Veer, G. (Eds.), Building Bridges – HCI, Visualization, and Cognitive Ergonomics. Proceedings from the 2011 IFIP WG 13.7 Workshop on Human-Computer Interaction and Visualization, HCIV 2011, Lisboa, Portugal.

Erlandsson, M., & Jansson, A. (2013). Verbal reports and domain-specific knowledge: a comparison between collegial and retrospective verbalisation. Cognition, Technology and Work, 15, 239-254.

Jansson, A. & Erlandsson, M. (2013). Recognizing complexity – A prerequisite for skilled intuitive judgments and dynamic decisions. Paper presented at the SPUDM24 Conference, Barcelona, Spain, August 22nd, 2013.

Jansson, A., Erlandsson, M., Fröjd, C. & Arvidsson, M. (2013). Collegial collaboration for safety: Assessing situation awareness by exploring cognitive strategies. Proceedings from the 14th Interact Conference, Workshop on Human-Work Interaction, Cape Town, South Africa, September 2nd, 2013.

Jansson, A., Stensson, P., Bodin, I., Axelsson, A. & Tschirner, S. (2014). Authority and level of automation: Lessons to be learned in design of in-vehicle assistance systems. Proceedings from the 16th International Conference on Human-Computer Interaction, Heraklion, Crete, Greece 22-27 June 2014.

Stensson, P., & Jansson, A. (2014). Autonomous technology – Sources of confusion: A model for explanation and prediction of conceptual shifts. Ergonomics, 57, DOI: 10.1080/00140139.2013.858777

Stensson, P. & Jansson, A. (2014). Edge awareness – A dynamic safety perspective on four accidents/incidents. Proceedings of the 5th International Conference on Applied Human Factors and Ergonomics AHFE 2014, Kraków, Poland 19-23 July 2014.

Tuesday April 8, 13-15

Room: Alan Turing

Speaker: Nils Dahlbäck

Cognitive Science at the Crossroads

Andy Clark ended his book Mindware with the words: "'Mindware as software.' That was a good slogan once. But it has served its purpose, and it is time to move on." OK, Clark might be right. But where do we go from here? To answer this, I claim that we must first understand how we got here – in other words, what, if anything, was wrong with the 'mindware as software' approach. In this talk I will try to sketch, if not an answer to these questions, at least some suggestions for where to look for the answers: both on where we come from and, more importantly though in less detail, on where we might go from here, or at least which paths can be taken.


Page responsible: Arne Jönsson
Last updated: 2014-04-01