Situating Cognition within the Virtual World
Paul R. Smart and Katia Sycara
6th International Conference on Applied Human Factors and Ergonomics (AHFE 2015) and the Affiliated Conferences
Cognitive architectures and virtual environments have a long history of use within the cognitive science community. Few studies, however, have sought to combine the use of these technologies to support computational studies into embodied, extended, situated and distributed (EESD) cognition. Here, we explore the extent to which the ACT-R cognitive architecture and the Unity game engine can be used for these purposes. A range of issues are discussed including the respective responsibilities that the cognitive architecture and game engine have for the implementation of specific processes, the extent to which the representational and computational capabilities of cognitive architectures are suited to the modeling of EESD cognitive systems, and the extent to which the kind of embodiment seen in the case of so-called ‘embodied virtual agents’ resembles that seen in the case of real-world bio-cognitive systems. These issues are likely to inform the focus of future research efforts concerning the integrative use of virtual environments and cognitive architectures for the computational modeling and simulation of EESD cognitive processes.
An interesting little paper covering some of the issues around integrating Unity with existing cognitive architectures such as ACT-R (http://act-r.psy.cmu.edu/) and THESEUS, the authors' own simulation framework.
Not much theory here, but worth noting for its discussion of how visual features in the environment can be processed into abstract representations suitable for ACT-R to consume.
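That abstraction step could look roughly like the following minimal sketch, where an object reported by the game engine is flattened into an ACT-R-style visual-location chunk. The `SceneObject` type and slot names here are my own illustrative choices, not taken from the paper:

```python
from dataclasses import dataclass

# Hypothetical representation of an object reported by the game engine.
@dataclass
class SceneObject:
    kind: str    # e.g. "door", "enemy"
    color: str
    x: float     # screen-space coordinates
    y: float

def to_visual_chunk(obj: SceneObject) -> dict:
    """Flatten a scene object into an ACT-R-style slot-value chunk.

    ACT-R's vision module operates over symbolic chunks rather than raw
    pixels, so the game-engine side has to supply this abstraction.
    """
    return {
        "isa": "visual-location",
        "kind": obj.kind,
        "color": obj.color,
        "screen-x": round(obj.x),
        "screen-y": round(obj.y),
    }

# Example: a red door at (120.4, 80.7) becomes a symbolic chunk.
chunk = to_visual_chunk(SceneObject("door", "red", 120.4, 80.7))
```

The design point is simply that the division of labour falls on the environment side: Unity does the perceptual heavy lifting and hands the architecture something it can match productions against.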
An interesting observation: quoting Clark, the authors write, “Clark thus distinguishes between ‘mere embodiment’, ‘basic embodiment’ and ‘profound embodiment’. He suggests that profound embodiment is primarily a feature of bio-cognitive systems, and that this form of embodiment is unlike that seen in the case of synthetic agents.”
My point here is that this also applies to humans in the virtual space. Using certain interfaces should increase the level of embodiment. A good idea would be to create a scale for this, so that researchers can assess the experienced degree of embodiment and perhaps predict human responses.
They note the difficulty of integrating EESD forms of situated cognition into game systems, which I find interesting. Why not build embodied scripts within the environment that are trained to interact with each other in a way analogous to models of situated cognition and our bodily movements? Sounds like a PhD project to me. Similar approaches have surely been used in robotics, so they should translate to a computational architecture, and with Unity’s nice scripting setup they should be relatively straightforward to run, albeit only on a powerful machine. 🙂
1. Clark, A. Supersizing the Mind: Embodiment, Action, and Cognitive Extension, Oxford University Press, New York, New York, USA, 2008.
2. Smart, P. R., Scutt, T., Sycara, K., Shadbolt, N. R., in: Turner, J. O., Nixon, M., Bernardet, U., DiPaola, S. (Eds.), Integrating Cognitive Architectures into Virtual Character Design, IGI Global, Hershey, Pennsylvania, USA, in press.