Mind Theory

Understanding Original Intentionality

A formal symbol system can produce intelligent behaviour, but the system itself does not understand what its symbols mean. How, then, can we create a machine that does understand? Several approaches have been suggested in recent years, including Brooks' subsumption architecture [1], Harnad's hybrid system [2], and Reeke and Edelman's Darwin automata [3].

  1. Brooks, R.A. (1991) Intelligence without Representation. Artificial Intelligence 47: 139-159.
  2. Harnad, S. (1990) The Symbol Grounding Problem. Physica D 42: 335-346.
  3. Reeke Jr., G.N. and Edelman, G.M. (1988) Real Brains and Artificial Intelligence. In: S.R. Graubard (Ed.), The Artificial Intelligence Debate: False Starts, Real Foundations, 143-173. MIT Press.
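
To make the contrast concrete, here is a minimal, hypothetical sketch of a subsumption-style controller in the spirit of [1]: a stack of simple reactive layers in which a higher-priority behaviour can override a lower one, with no symbolic world model anywhere. The layer names, sensor fields, and actions are invented for illustration and are not taken from any of the cited systems.

```python
# Hypothetical sketch of a subsumption-style controller (illustrative only).
# Each layer is a stimulus-response rule; higher layers can subsume lower ones.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Sensors:
    obstacle_ahead: bool
    at_goal: bool

def avoid(s: Sensors) -> Optional[str]:
    """Low layer: turn away from obstacles."""
    return "turn-left" if s.obstacle_ahead else None

def wander(s: Sensors) -> Optional[str]:
    """Lowest-priority layer: keep moving when nothing else applies."""
    return "move-forward"

def seek_goal(s: Sensors) -> Optional[str]:
    """Highest layer: stop once the goal is reached."""
    return "stop" if s.at_goal else None

def control(s: Sensors) -> str:
    # Layers are consulted in priority order; the first one that fires
    # determines the action. No symbols are interpreted or grounded.
    for layer in (seek_goal, avoid, wander):
        action = layer(s)
        if action is not None:
            return action
    return "idle"  # fallback; not reached while wander is unconditional

if __name__ == "__main__":
    print(control(Sensors(obstacle_ahead=True, at_goal=False)))   # turn-left
    print(control(Sensors(obstacle_ahead=False, at_goal=True)))   # stop
```

The point of such architectures is that competent behaviour emerges from the interaction of these reactive layers with the environment, not from manipulating internal symbolic descriptions of it.
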

We note that all of the above are concerned with getting the right "hardware" for the machine to interact with its environment; the hope is that a mind will emerge once their machines are able to do so. Our research programme takes a different approach.

We are trying to understand how infants come to understand their world. The first step is to work out what essential information is made available to them through their senses. This project is therefore closely related to our work on robotics and natural language.

Sample publications

Yeap, W.K. (1997) Emperor AI, where is your new mind? Artificial Intelligence Magazine 137-144.
