Friday, January 05, 2007

EEC versus Cognitivism

There has been a debate going on for some time now regarding two conceptual frameworks in cognitive science. The standard, mainstream framework is usually called cognitivism or "the information processing view" of cognition. Bluntly speaking, it states that the human mind is a piece of software consisting of computational structure and stored knowledge (representations) that is running on (or, as they say somewhat less bluntly: physically instantiated by) a piece of hardware, aka the brain. The other conceptual framework is called situated/embodied cognition, which is actually two frameworks that go hand in hand, the combination of which I will henceforth call "embodied embedded cognition" or EEC. EEC states that the human mind is an emergent property of the physical body and its characteristics, the local physical environment and its structure, and the processes of the brain. I will concentrate on embeddedness/situatedness in what follows.

Embeddedness/situatedness refers to the idea that 'the world itself' can do some (or all?) of the computations that we would usually hold to be a product of the processes of the brain. A famous example, cited by Lave, concerns someone who is asked to make a cake for 3 people when the original recipe is for 4 people and involves 500 grams of dough. What this person does is simply make the dough for 4 people, squeeze the dough into a pancake, and then cut out a quarter of the circle. The remainder is just about 500 * 3/4 = 375 grams, which is the amount that was needed. In this case, the brain of the person in question did not have to calculate the assignment 'in his head'; instead, the physical properties of the world were used in concordance with our perceptual (here: visual) abilities, which happen to be such that they allow us to quickly and quite directly perceive and cut out a quarter of a circle. (It is actually not unlike explicitly, physically, calculating the assignment 500 / 4 and then multiplying the result by 3, which is what we learn if we have to multiply by ratios by heart, since "division is multiplying by the inverse", as they say in Dutch schools.)
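To make the equivalence of the two routes explicit, here is a tiny sketch (my own illustration, not from Lave; the variable names are made up): the 'in the head' multiplication and the 'cut away a quarter' move arrive at the same 375 grams.

```python
# A minimal sketch contrasting the two routes to the pancake answer:
# symbolic arithmetic "in the head" versus the geometric shortcut of
# cutting away a quarter of the dough.

DOUGH_FOR_FOUR = 500  # grams of dough in the original 4-person recipe

# Route 1: internal computation -- scale the recipe by the ratio 3/4.
internal = DOUGH_FOR_FOUR * 3 / 4

# Route 2: the embedded move -- make the full batch, flatten it into a
# circle, and remove one quarter; the world "computes" the division.
embedded = DOUGH_FOR_FOUR - DOUGH_FOR_FOUR / 4

print(internal, embedded)  # both routes yield 375.0 grams
```

The interesting point, of course, is not the arithmetic but that Route 2 offloads the division onto the perceivable geometry of the dough.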

Anyway.

Some people are die-hard cognitivists, other people are hard-core EECers. Practically, I would like to pursue EEC empirically, give it a chance, and see where it ends. If the framework is no good, it will eventually die out. If it is viable, it will live and we will learn from it. But this is not what this blog is about. This blog is about the fundamental theoretical issue of which framework is the preferred framework if we would have to choose *now*, based on what we know at this moment.

To me, the most interesting *theoretical* question in the debate would be: which of the two frameworks constitutes the ultimate 'ground' on which the other framework is built? Because, as anyone would acknowledge, the typical processes that are taken as fundamental in each of the frameworks *do happen*, at least descriptively, or in our conscious experience. So if one of the frameworks turned out to be false, the processes that this framework took as fundamental would still need to be explained in the vocabulary of the *other* framework. To be precise: if cognitivism is false, EEC would need to be able to explain how people can represent facts, store knowledge, 'reason', and follow procedures (by heart!). For example, EEC would have to explain how it is possible that people *can* solve 500 * 3/4 by heart if they need to, without the help of pancakes. Conversely, if EEC is false, cognitivism needs to explain how this dependency of human cognizers on the external world can be sustained by a cognitivist system. Using 'knowledge in the world', as Donald Norman calls it, is a process that exists, but if cognitivism is the game, then it must explain how to do it using representations and computations.

Let us see what both camps would argue. Since cognitivism is mainstream, I probably need a few more words to argue for EEC being the ultimate ground.

For a cognitivist, all this talk about situatedness is just a 'complex' version of cognitivism all along. The cognitivist would say: yes, of course the computational-representational system can *use* the physical world as a means for storing information, or as a tool for quickly computing stuff that would be more demanding to do internally. There are machines, road signs, et cetera. Nobody denies that. A cognitivist view of the mind does not mean that the cognitivist brain isn't allowed to use 'smart tricks' if it knows of them. But the 'currency' in which information is traded between brain and world consists of hard cognitivist coins nevertheless. Whenever you 'use' the world in order to let it calculate for you, you need to make contact with the world, and when you do, you do so in your guise of being a computational-representational system. There is no other way of interacting with the world cognitively than in terms of information processing. Meaningful signs need to go in, and behavior needs to come out. In the Lave example, you need to be able to perceive, encode and store the information that came from the visual system that was observing the pancake. You need to understand that there is a smart way of calculating 500 * 3/4 without using your inner resources. You need the knowledge to know what to do in order to implement this smart plan of yours. You need to devise a motor plan that actually lets you cut a quarter out of a pancake. You need, in sum, a complete 'psychological system' in order to do the 'embedded stuff' that these EEC-ers brag about.

For an EEC-er, things are completely the other way around. People like Merleau-Ponty try to build up a completely new way of looking at the same old psychological phenomena, such that embodiment and embeddedness are now more fundamental, and 'prior' to processes like computation and storage of knowledge. This is somewhat counterintuitive, because we have learned to talk about our experience in a cognitivist way, but phenomenologists say we should 'bracket out' this talk, since it is artificial, and return to the core experience itself. Anyhow, our intuition and conscious inner speech about what we are and how we do the things we do can of course not be trusted, even in a materialist empirical science. So what EEC-ers say, e.g., is that EEC-like processes already exist in low-level motor planning as well as in low-level perception. Perception and action are tightly coupled systems, such that even the most low-level perception of objects is already 'aided' by certain specific actions undertaken by the agent, in the world. Perception is dependent on action! For example, the perception of objects is not just based on a passively received pattern of excitations on the retina; it is also dependent on the pattern of *actions* taken by the agent as it saccades with its eyes across the scene of interest. That is: movement of the eyes *constructs* a series of inputs that contains the sort and type of information it contains precisely because this series was generated by movement of the eyes. In other words: part of what we 'see' is our own actions, reflected in the effect these actions have on the world. And this is *low level*, in the order of milliseconds. All the rest of cognition needs to be built from these basic perceptual building blocks. Take, e.g., our perception of objects. Objects, as we perceive them, are not objective 'things' in the world, but already a blend of 'what is out there' and 'how we, with our physical capacities, approach what is out there'. Perception is already embedded and embodied. See also James J. Gibson's 'affordances' in this respect.

Now if EEC is something that is already there in low-level processes, the question is: can we push this conceptual framework higher up, and why not? Does it scale up all the way to centre-court psychological processes? Even if everyday cognitive phenomena, like memory processes or planning and reasoning, need some 'extra' explanatory apparatus involving talk of computations and representations, it could still be argued that this 'extra' is just a 'complex' version of EEC processes. EEC is the fundamental basis, and only somewhere upstream in the complexity of things are we able to perform 'computations' and 'store knowledge'.

As it stands, I think that cognitivism *appears* to have a better, more detailed story about how EEC-like processes could be implemented by a classic information-processing system. I say appears, because it is my strong feeling that a lot of the *words* that cognitivism uses actually need a lot more explaining once one comes right down to asking what these words mean, precisely. Cognitivism has been very smart in assuming this Cartesian split between 'software' and 'hardware', which makes it such that practically anything can be 'assumed to exist' on the software level without the cognitivist having to explain exactly how it is 'implemented', as long as the process in question can "in principle" be implemented on an information-processing system. And what process could not? (Noncomputable processes? Nonrepresentational processes? But what are they?) So cognitivists have the easy way out: as a framework, cognitivism can account for any type of process that researchers come up with. But it also makes the cognitivist coin rather valueless. For if everything is possible in an information-processing system, why are people not 'everything'? Why are we only the bunch of phenomena that we actually are?

EEC has a problem in that the question of how EEC processes 'generate' full-blown cognition (planning a hike in the mountains, having a conversation with your partner about something that happened yesterday, remembering, by heart or via reasoning/recalling, where you left your socks the day before, immediately recognizing a person, knowing his name, being able to recall your relation to this person, and so on) cannot even be *articulated* in an EEC way. The mirror image of the cognitivist response ('oh well, it is all just computational procedures, even the EEC stuff') is not available here. Suppose I sit in a train and think about what to have for dinner and what to remember to buy at the grocer's that evening. How would the train's interior, the lights, the other people present, the music on my iPod, how would those physical actualities possibly contribute to this cognitive process that is actually taking place right there and then? Yes, if I had paper and pencil, I could come up with an EEC story. But I have no paper and pencil. I am thinking, creating and remembering my meal for the evening *in my head*. On my own. No environment present. I could saccade my eyes for all I'm worth, but it wouldn't help me remember the recipe, would it?

In conclusion, both frameworks have problems. Cognitivism superficially is stronger, but I think that this is artificial, due to the fact that in everyday life we already talk in cognitivist vocabulary, so it easily seems as if cognitivism has 'explained' something where in fact it has explained nothing; it has only resonated with our everyday way of naming and talking about things (which in itself is not an explanation of anything, at least not scientifically). The problems that AI systems have in dealing with 'common sense' behavior are a case in point here. We are easily seduced by cognitivism. But EEC offers no real alternative either - yet. It is up to the EEC-ers (us, that is) to explain how cognition (real cognition) is possible in an EEC system without secretly, implicitly, assuming a cognitivist system that *does* all these fancy embedded embodied tricks.

Afterthought:
Merleau-Ponty and his phenomenology could be a viable starting point, if not for the Very Complex and Incomprehensible Writings of these French guys. The problem with these kinds of writings is that once you understand them, you, for yourself, can use MP's ideas to 'explain' the EEC fundamentals all right, but the rest of the academic world still does NOT understand MP, and so you have put yourself on some island together with the other 2 people who understood what he is trying to say.

7 comments:

Sander said...

Photons (these things that are real, because you can spray them...) are a wave and they are a particle. Depending on the experiment you're doing, one or the other theory seems to be true. These apparently conflicting theories are both valid. One cannot be derived from the other. They are both good models of a photon.
Could it be that EEC and cognitivism are both valid models? Neither of them is false; it just depends on the experiment you are doing which one is applicable. Or is this view a little too pragmatic for you?

If I have to choose, I would vote for EEC. EEC solves some other problems. I'm just a layman, but as far as I understand cognitivism, it implies that the human mind (which has such a vague definition that it could hardly be a subject of science anyway...) can be reduced to algorithms. Well, algorithms can be run on a computer, so our mind could be transferred to a computer? We could create mental clones?
Maybe we would finally come to some very good understanding of our mind and brain. I expect that the conclusion would be that the human mind will only run in a special biochemical soup contained in some billions of interconnected cells. Any change to that environment will result in something that would not be equivalent to the human mind.

So when you sit in the train with your iPod on your head: you think that your shopping list is independent of your environment? What about your stomach? The coffee smell when you enter Utrecht? Weren't you listening to this same music with a nice cold beer? So the creation of the list might be heavily influenced by the environment (including your own body). Remembering is another problem. But could you remember the list if it contained 10 unrelated, illogical items? What would happen if I told you my shopping list and you told me yours, and we did each other's shopping? Could you remember a recipe of something you've never eaten? How much do we really remember? What do we remember?

If our mind was based on some computational algorithms and storage one would expect that our ability for arithmetic and our memory would be much greater.

To me it seems that our memory is merely a bunch of fuzzy associations, an environment (some biochemical soup) from which the mind draws information. The most miraculous thing is that from this environment there emerges a single (?) stream of language- (and vision-) based thoughts. Why? Is this due to an evolutionary artifact that allows us to communicate? Could it be that we have other 'thoughts' that we are not aware of, just because they don't make it to our language center? But why does our shopping list have to pass through our language center? Do we have to stream the shopping list to be able to remember it in the supermarket? Do you also buy things that were not on your explicit list, something you didn't name or visualize, like toilet paper or diapers?

Anonymous said...

Hi guys,

Nice talk. But.....

Cognitivism is surely an older term than EEC. So is it "fair" to compare them? One thing that strikes me is that you both want to compare models on validity, while the ideas you have about "validity" are not the same. Obviously Sander, as a physicist, has some other brain cells connected to "validity" in his head than Pim, the artificial intelligence specialist.

For a long time I thought the brain was "model-able", but growing older and more experienced (children, jobs, etc.) I am not so sure anymore. I "feel" in the comments of Sander, and knowing him as a friend, that he agrees on this. But it is a little bit of a discussion killer to say that the brain is not model-able.

The thing, in my experience, is that a good model speaks for itself. I find it questionable that a model would be understood only by a very select group. A good model "feels" right, and then you prove it.

A good model gains full acceptance if it is simple and easy to understand. Some models are so elegant that nobody really "understands" them. I am thinking of Euler's formula (http://en.wikipedia.org/wiki/Euler's_formula), which is used, for example, to compute the behaviour of sinusoidal functions. The theorem can't be "proven", but it is so elegant and so "usable" that you will be laughed at if you question it. Also by me, by the way :-) I miss the proof in your stories. In cognitive science a model can be proven too. That is done very nicely in the article I referred to in another comment (Bruno Marchal): there is proof of some characteristics of a "model" of reality, but not the acceptance of and "belief" in the model.

Nice guys... and thanks!
ps
no blogs, but photographs!
http://www.flickr.com/photos/bvdkamp/

Jules said...

Reading this discussion, I cannot help but relate it to the article I am currently writing on The Origin of Consciousness in the Breakdown of the Bicameral Mind (1976), by Julian Jaynes (see my new website www.erikweijers.nl, section Psychologie).

Jaynes reminds us of the important historical fact that most current mental behavior is a descendant of behavior that was once external behavior. For example, 'to make a decision' used to be a matter of throwing sticks or drawing lots, peering into the intestines of animals, looking at the stars, etc. So it was a lot like a perceptual, very interactive process between human and environment. According to Jaynes, these external behaviors were the absolute precondition for later stages of internal decision-making, which came about by virtue of analogy and metaphor (only after you have learned to throw dice can you throw 'analogical dice in your head').

I am not sure how this relates exactly to the EEC/Cognitivism discussion. But the historical order from external computation to internal computation seems important here.

The question remains to what degree the computation that we conceive to be internal is still to some degree an interactional, EEC process.

jelle said...

hey guys!
thanks for the replies. When was this written? I don't understand blogging well enough, I guess; there must be some way to automatically keep track of when somebody is writing a reply. I used to get emails from Blogger when you'd submitted. Apparently not anymore, or I didn't notice them.

Lots of thoughts on Sander's reply. Cognitivism was indeed the natural ally of artificial intelligence: if it's just algorithms, we could build it ourselves! The big problem is how to assess whether you've succeeded in your attempt to do so. The Turing test was one such assessment method, which provided a host of philosophers with a few years of work.

Anonymous suggests that the brain is not model-able. Another way of saying this is that it is computationally intractable, or non-computable. This is, as I understand it, what would interest Iris; she knows a lot about the discussion between those who say that the brain is computational and those who say it is not, and about whether it makes sense to claim the latter at all (isn't 'everything' computational, ultimately?). I could post some of our old email discussions about this topic if you like.

NB: claiming the brain is incomputable is something other than claiming that 'the cognitive system' is incomputable, since, at least to me, the brain is just one of the structures that contribute to cognition. It could be that the brain is incomputable but that, on a higher level of aggregation, the brain-body-world system is very much computable, perhaps described best by something as elegant as the Euler-thingy you referred to. On the other hand, it could also be that, while the brain is complex but still perfectly computable, once we try to describe its interactions with the body and the world, the system becomes too complex and in principle incomputable/non-model-able.

Options options options, and no proofs at all... sorry!

(I don't like proofs. Proofs are so much hard work, you get sweaty and all. I like ideas: they just pop up... :-)

jelle said...

And now for Eric's contribution. I am definitely going to check out Jaynes.

You say:
>> The question remains to what degree the computation that we conceive to be internal is still to some degree an interactional, EEC process.<<

Exactly. This is the core question. If the internal computation is somehow 'fundamentally disconnected' from the EEC-like processes (that were 'prior'), then what you have in effect is cognitivism-on-top-of-EEC. But this cognitivism operates qualitatively differently from EEC and can/should be studied in isolation. But if 'cognitive processes' that we usually conceive of as internal are in fact still 'EECy' (grounded in body and world), then this fact should be part of the theory and of the models that emerge out of this theory.

I think language is crucial here. Andy Clark describes language as 'the ultimate artefact'. If we see our capacity for language as a 'tool', not unlike the sticks and the dice we used to use, then 'interacting with language' provides us with enormous behavioral flexibility and decision-making powers. But now the question shifts to: how is it possible that we use language? (And does that imply a cognitivist model? Et cetera.)

One way or another: I think it is good to note, as you do, that an EEC-like behavioral system somehow 'came first' and that things like abstract reasoning, language, decision-making and planning are evolutionarily late achievements that probably still rely on the older behavioral-control processes. Even the physical make-up of the brain suggests this: first the 'reptile brain', then the 'emotional brain', then the 'cognitive brain' (and what will come next?).

jelle said...

O and Sander, of course it is not both cognitivism *and* EEC at the same time. That wouldn't be any fun!

Jules said...

Hi there Jelle! Nice that you responded to the comments.

The approach of Cognitivism-on-top ;-) is what I think would be Jaynes's position - if he had lived to engage in the debate.

Note that language was already present in the sticks-and-bones phase of human decision making. It is just that the vocabulary contained no words for 'internal processes'. The 'mental sphere', or whatever you want to call it, had yet to be invented.