Friday, January 05, 2007

EEC versus Cognitivism

There has been a debate going on for some time now regarding two conceptual frameworks in cognitive science. The standard, mainstream framework is usually called cognitivism, or the "information processing view" of cognition. Bluntly speaking, it states that the human mind is a piece of software, consisting of computational structure and stored knowledge (representations), that runs on (or, as they say somewhat less bluntly, is physically instantiated by) a piece of hardware, a.k.a. the brain. The other conceptual framework is called situated/embodied cognition; these are actually two frameworks that go hand in hand, and I will henceforth call the combination "embodied embedded cognition", or EEC. EEC states that the human mind is an emergent property of the physical body and its characteristics, the local physical environment and its structure, and the processes of the brain. I will concentrate on embeddedness/situatedness in what follows.

Embeddedness/situatedness refers to the idea that 'the world itself' can do some (or all?) of the computations that we would usually hold to be a product of the processes of the brain. A famous example, cited by Lave, is about someone who is asked to make a cake for 3 people when the original recipe is for 4 people and calls for 500 grams of dough. What this person does is simply make the dough for 4 people, squeeze the dough into a pancake, and then take out a quarter of the circle. The remainder is just about 500 * 3/4 grams, which is the amount that was needed. In this case, the brain of the person in question did not have to do the calculation 'in his head'; instead, the physical properties of the world were used in concert with our perceptual (here: visual) abilities, which happen to be such that they allow us to quickly and quite directly perceive and cut out a quarter of a circle. (It is actually not unlike explicitly, physically, calculating 500 / 4 first and then multiplying the result by 3, which is what we learn when we have to multiply by ratios by heart, since "dividing is multiplying by the reciprocal", as they say in Dutch schools.)
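To make the two routes to the same answer explicit, here is a minimal sketch in Python (my own illustration, not part of Lave's example; the function names are hypothetical):

    def scale_in_the_head(grams_for_four, people):
        # The 'cognitivist' route: do the ratio arithmetic internally.
        return grams_for_four * people / 4.0      # 500 * 3/4 = 375 grams

    def scale_with_the_pancake(grams_for_four):
        # The 'embedded' route: make dough for 4, flatten it into a disc,
        # cut away a quarter by eye, and keep the rest. In real life the
        # arithmetic is done by the world plus our perception of 'a quarter
        # of a circle'; here the cut is simulated numerically.
        whole_disc = grams_for_four               # make the full recipe
        removed_quarter = whole_disc / 4.0        # the part cut away
        return whole_disc - removed_quarter       # what remains is automatically 3/4

    print(scale_in_the_head(500, 3))       # 375.0
    print(scale_with_the_pancake(500))     # 375.0

Both routes end with 375 grams of dough; the interesting question is where the computation happens.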

Anyway.

Some people are die-hard cognitivists, other people are hard-core EEC-ers. Practically, I would like to pursue EEC empirically, give it a chance, and see where it leads. If the framework is no good, it will eventually die out. If it is viable, it will live and we will learn from it. But that is not what this post is about. This post is about the fundamental theoretical issue of which framework would be the preferred one if we had to choose *now*, based on what we know at this moment.

To me, the most interesting *theoretical* question in this debate is: which of the two frameworks constitutes the ultimate 'ground' on which the other framework is built? Because, as anyone would acknowledge, the typical processes that each framework takes as fundamental *do happen*, at least descriptively, or in our conscious experience. So if one of the frameworks turned out to be false, the processes that it took as fundamental would still need to be explained in the vocabulary of the *other* framework. To be precise: if cognitivism is false, EEC would need to be able to explain how people can represent facts, store knowledge, 'reason', and follow procedures (by heart!). For example, EEC would have to explain how it is possible that people *can* compute 500 * 3/4 in their heads if they need to, without the help of pancakes. Conversely, if EEC is false, cognitivism needs to explain how this dependency of human cognizers on the external world can be sustained by a cognitivist system. Using 'knowledge in the world', as Donald Norman calls it, is a process that exists, but if cognitivism is the game, then it must explain how to do it using representations and computations.

Let us see what both camps would argue. Since cognitivism is mainstream, I probably need a few more words to argue for EEC being the ultimate ground.

For a cognitivist, all this talk about situatedness is just a 'complex' version of cognitivism all along. The cognitivist would say: yes, of course the computational-representational system can *use* the physical world as a means for storing information, or as a tool for quickly computing things that would be more demanding to compute internally. There are machines, road signs, and so on. Nobody denies that. A cognitivist view of the mind does not mean that the cognitivist brain isn't allowed to use 'smart tricks' if it knows of them. But the 'currency' in which information is traded between brain and world consists of hard cognitivist coins nevertheless. Whenever you 'use' the world in order to let it calculate for you, you need to make contact with the world, and when you do, you do so in your guise of being a computational-representational system. There is no other way of interacting with the world cognitively than in terms of information processing. Meaningful signs need to go in, and behavior needs to come out. In the Lave example, you need to be able to perceive, encode, and store the information that comes from the visual system observing the pancake. You need to understand that there is a smart way of calculating 500 * 3/4 without using your inner resources. You need the knowledge to know what to do in order to implement this smart plan of yours. You need to devise a motor plan that actually lets you cut a quarter off the pancake. You need, in sum, a complete 'psychological system' in order to do the 'embedded stuff' that these EEC-ers brag about.

For an EEC-er, things are completely the other way around. People like Merleau-Ponty try to build up a completely new way of looking at the same old psychological phenomena, such that embodiment and embeddedness are now more fundamental than, and 'prior' to, processes like computation and the storage of knowledge. This is somewhat counterintuitive, because we have learned to talk about our experience in a cognitivist way, but phenomenologists say we should 'bracket out' this talk, since it is artificial, and return to the core experience itself. Anyhow, our intuition and conscious inner speech about what we are and how we do the things we do can of course not be trusted, certainly not in a materialist empirical science. So what EEC-ers say, for example, is that EEC-like processes already exist in low-level motor planning as well as in low-level perception. Perception and action are tightly coupled systems, such that even the most low-level perception of objects is already 'aided' by specific actions undertaken by the agent, in the world. Perception is dependent on action! For example, the perception of objects is not just based on a passively received pattern of excitations on the retina; it is also dependent on the pattern of *actions* taken by the agent as its eyes saccade across the scene of interest. That is: movement of the eyes *constructs* a series of inputs that contains the sort and type of information it contains precisely because this series was generated by movement of the eyes. In other words: part of what we 'see' is our own actions, reflected in the effect these actions have on the world. And this is *low level*, in the order of milliseconds.

All the rest of cognition needs to be built from these basic perceptual building blocks. Take our perception of objects: objects, as we perceive them, are not objective 'things' in the world, but already a blend of 'what is out there' and 'how we, with our physical capacities, approach what is out there'. Perception is already embedded and embodied. See also James J. Gibson's 'affordances' in this respect. Now if EEC is something that is already there in low-level processes, then the question is: can we push this conceptual framework higher up (and why not?). Does it (why wouldn't it?) scale up all the way to centre-court psychological processes? Even if everyday cognitive phenomena, like memory processes or planning and reasoning, need some 'extra' explanatory apparatus involving talk of computations and representations, it could still be argued that this 'extra' is just a 'complex' version of EEC processes. EEC is the fundamental basis, and only somewhere upstream in the complexity of things are we able to perform 'computations' and 'store knowledge'.
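Just to illustrate the structure of the claim, here is a toy sketch in Python (my own simplification, not a model anyone in the EEC literature has proposed): the 'input' the agent receives over time is a joint product of the scene and of the agent's own action policy, so changing the policy changes the stream of inputs.

    # Toy perception-action loop: what the agent receives as 'input' at each
    # step depends on where its own previous saccade moved the gaze.

    scene = "the quick brown fox jumps over the lazy dog"   # stand-in for 'the world'

    def saccade(gaze, step):
        # A (dumb) action policy: shift the gaze a fixed number of positions.
        return (gaze + step) % len(scene)

    def sense(gaze, width=4):
        # A (dumb) sensor: sample a small window of the scene around the gaze.
        return scene[gaze:gaze + width]

    gaze = 0
    inputs = []
    for _ in range(5):
        inputs.append(sense(gaze))   # the input received...
        gaze = saccade(gaze, 7)      # ...is a function of the agent's own actions

    print(inputs)   # a different action policy would yield a different input stream

The point of the toy is only this: the sequence in 'inputs' cannot be explained by the scene alone, nor by the agent alone, but only by the two in interaction.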

As it stands, I think that cognitivism *appears* to have a better, more detailed story about how EEC-like processes could be implemented by a classic information-processing system. I say appears, because it is my strong feeling that a lot of the *words* that cognitivism uses actually need a lot more explaining once one comes right down to asking what these words mean, precisely. Cognitivism has been very smart in assuming this Cartesian split between 'software' and 'hardware', which makes it such that practically anything can be 'assumed to exist' on the software level without the cognitivist having to explain exactly how it is 'implemented', as long as the process in question can "in principle" be implemented on an information-processing system. And what process could not? (Noncomputable processes? Nonrepresentational processes? But what are they?) So cognitivists have the easy way out: as a framework, cognitivism can account for any type of process that researchers come up with. But that also makes the cognitivist coin rather valueless. For if everything is possible in an information-processing system, why are people not 'everything'? Why are we only the bunch of phenomena that we actually are?

EEC has a problem in that the question of how EEC processes 'generate' full-blown cognition (planning a hike in the mountains, having a conversation with your partner about something that happened yesterday, remembering, by heart or via reasoning/recalling, where you left your socks the day before, immediately recognizing a person, knowing his name, being able to recall your relation to this person, and so on) simply cannot even be *articulated* in an EEC way. The equivalent of the cognitivist response ('oh well, it is all just computational procedures anyway, even the EEC stuff') is not available. Suppose I sit in a train and think about what to have for dinner and what to remember to buy at the grocery store that evening. How would the train's interior, the lights, the other people present, the music on my iPod, how would those physical actualities possibly contribute to this cognitive process that is actually taking place right there and then? Yes, if I had paper and pencil, I could come up with an EEC story. But I have no paper and pencil. I am thinking, creating and remembering my meal for the evening *in my head*. On my own. No environment present. I could saccade my eyes for all I am worth, but it would not help me remember the recipe, would it?

In conclusion, both frameworks have problems. Cognitivism is superficially stronger, but I think this is an artefact of the fact that in everyday life we already talk in cognitivist vocabulary, so it easily seems as if cognitivism has 'explained' something when in fact it has explained nothing; it has only resonated with our everyday way of naming and talking about things (which in itself is not an explanation of anything, at least not scientifically). The problems that AI systems have in dealing with 'common sense' behavior are a case in point here. We are easily seduced by cognitivism. But EEC offers no real alternative either - yet. It is up to the EEC-ers (us, that is) to explain how cognition (real cognition) is possible in an EEC system without secretly, implicitly assuming a cognitivist system that *does* all these fancy embedded embodied tricks.

Afterthought:
Merleau-Ponty and his phenomenology could be a viable starting point, were it not for the Very Complex and Incomprehensible Writings of these French guys. The problem with these kinds of writings is that once you understand them, you, for yourself, can use MP's ideas to 'explain' the EEC fundamentals all right, but the rest of the academic world still does NOT understand MP, and so you have put yourself on some island together with the other two people who understood what he was trying to say.