The goal of computational cognitive modeling is to understand the human mind and its associated processes. Whilst behavioural observations can give us an insight into the influence of certain stimuli (e.g. an individual wincing at the sound of a dentist’s drill), these findings are limited to describing the superficial processes of human behaviour. In the same way that it is virtually impossible to completely understand a complex computational system by simply testing its output, the underlying cognitive processes involved in exhibiting a specific behaviour cannot be discerned accurately from observational inference alone. Moreover, given the complexity of the mind, the experimenter can never be entirely sure which parameters contribute to the observed behaviour. Computational models therefore allow us to understand the finer details of cognitive processes. Their detailed, precise, and sequential steps provide a great deal of conceptual clarity; however, this is not to say that computational models should themselves be taken as theory. For a process to be replicated by a computational model, its theoretical basis must be known. Thus computational models promote theory generation, and can subsequently be used as conceptual tools for psychological experiments.
The Turing Machine is the earliest known example of a computational model, created by Alan Turing in 1936 in response to David Hilbert’s Entscheidungsproblem: the question of whether there exists a mechanical procedure capable of deciding the truth of any mathematical statement. The Turing Machine is a device capable of reading, writing and erasing symbols on an infinitely long strip of tape, according to a set of rules. As with most computational systems, Turing’s model is process-based; that is, its focus is on how cognition materializes. The device is essentially a physical symbol system – a system based on the principle that computation is nothing more than symbol manipulation. Given a set of input symbols or values and a set of rules, the machine produces a set of output symbols or values within a finite number of steps. For a system to be classified as such, it must contain a set of symbols, which can be strung together to yield a structure; contain a set of processes that operate on those symbols; and be located in a wider world of real objects. Allen Newell and Herbert A. Simon argue that such a system has the necessary and sufficient means for general intelligent action, but does this mean it is capable of accurately modeling cognitive processes? If we follow the logic of Ulric Neisser, who defines cognition as the process by which sensory input is transformed, reduced, elaborated, stored, recovered, and used, then the brain, and therefore cognition, is nothing more than a very complex physical symbol system. Turing’s model
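The tape-and-rules mechanism described above can be sketched in a few lines of Python. This is a minimal illustrative simulator, not Turing’s original formalism: the rule-table representation, the `run_turing_machine` helper, and the bit-flipping example machine are all hypothetical choices made for clarity.

```python
def run_turing_machine(rules, tape, state="start", halt_state="halt"):
    """Run a Turing machine until it reaches the halt state.

    rules: dict mapping (state, symbol) -> (write_symbol, move, next_state),
           where move is -1 (left), +1 (right), or 0 (stay).
    tape:  dict mapping integer positions to symbols; unwritten cells
           read as the blank symbol '_'.
    """
    head = 0
    while state != halt_state:
        symbol = tape.get(head, "_")            # read the current cell
        write, move, state = rules[(state, symbol)]
        tape[head] = write                      # write (possibly erasing)
        head += move                            # move the head
    return tape

# Hypothetical example machine: flip every bit of a binary string,
# halting when the first blank cell is reached.
flip_rules = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}

tape = {i: s for i, s in enumerate("1011")}
result = run_turing_machine(flip_rules, tape)
print("".join(result[i] for i in range(4)))  # → 0100
```

Given the input symbols and the rule table, the machine deterministically produces its output in a finite number of steps, which is exactly the sense in which the device is a physical symbol system.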