Thursday, December 14, 2006

What's inside?




One-and-something-year-old Jonas is playing games. Self-designed, highly entertaining games - at least for his two biggest fans. The semantics of the game are sometimes difficult to assess, but the behavior has a clear, observable, repeatable structure, and it is clearly something new that was not witnessed before.

For instance, Jonas will walk to his mother, slap her knees with both hands, laugh with excitement, turn round, walk to his father who sits at the other side of the table, slap his knees with both hands, laugh with excitement, and repeat the procedure again and again, walking back and forth between his two parents. I had not seen this kind of structured, repetitive behavior before; it emerged somewhere in the last week or weeks. The behavior is marked by some arbitrary sequence of acts that is repeated over and over.

And now, the little man is asleep. As fatherly pride slowly fades, philosophical reflection appears on the scene.

If we try to describe what we, observers, experience while watching the child, we quickly choose some appropriate words for what we experience to be 'a new behavioral capacity'. In the example above the words are 'playing games'. Cognitive science excels in the next step, which is to replace these words with other words that supposedly catch the 'essence' of the observed phenomenon. We can expect analyses such as this one: playing games is "essentially" the ability to express and follow a rule-structure.

At the same time, neuroscientists are carefully investigating what has changed in the neural organisation of the brain, since it is clear, at least to these scientists, that new behavioral capacities must be caused by significant changes in brain organisation.

Suppose the brain-scientist indeed discovers a structural brain change that is reliably associated with the onset of 'game playing' behavior in small children. Would that be proof of the theory that the new neural organisation is responsible for "the ability to express and follow a rule structure"?

I'm afraid not.

What we have, as a fact, is that there is a brain change that correlates with a behavioral change. The behavior is complex, it is real, it is located in space and in time, it emerges within a physical and social environment. The 'game' is played using a physical body with physical properties. It is played in a context of social relations between father, mother and child. It emerged out of a situation in which the mother and father were sitting opposite each other at a table from which the child was just leaving after having had dinner. Within this *practical, real* situation, a brain change had its effect. Game playing was the result.

Abstracting away from the observed phenomenon to the underlying 'essence' is a dangerous activity. It is often grounded in values, beliefs and perceptions of the observer. It is also constrained by the language in which the abstraction is expressed.

But even if the abstraction is a valid one, there is no proof whatsoever that the brain change itself corresponds to (represents) this 'essence' that is described by the abstraction. It could be that the brain change, in itself, is something very different. Something that, in itself, has nothing to do with 'following a rule-structure'. The newly observed behavior of game playing, and its associated 'essence' - following rules - might emerge only when the 'updated' brain is operating in an appropriate physical and an appropriate social environment, preceded by an appropriate history of actions.

Cognitive science has a strong history of what Churchland calls 'vertical analysis', in which behavior is broken down into several, vertical, 'columns', corresponding to the classic 'faculties' of mind and their modern heirs, the computational-representational 'modules'. "Rule-following" is such a module. But in the process of breaking down the observed phenomenon into meaningful abstractions (essences), we can also choose to cut reality into horizontal slices. A horizontal slice corresponds to a full-blown, functional, complete sensorimotor loop in which parts of the brain, the body and the environment take part. Structurally new behaviors that clearly mark cognitive developmental phases, such as the emergence of 'game playing', might be explained by the development of a new second-order influence within the brain upon such a horizontal, existing, and operational, sensorimotor cycle. The brain change would thus not consist of a new module that represents the new 'capacity', but of a new *bias* upon the existing agent-environment interaction.
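Just to make the 'horizontal slice' idea a bit more tangible, here is a toy sketch in code. It is entirely my own invention, not a model of Jonas or of any real brain: an agent-environment loop that was already operational, plus a single new 'bias' parameter that modulates it. The structured, game-like behavior then shows up as a property of the whole loop, not of a new module.

```python
# Toy sensorimotor loop: an agent moving in a one-dimensional 'room' with
# two parents. The sense-act loop existed before; the only *new* thing is
# a 'bias' that flips the current goal once the agent arrives. With the
# bias switched off, the agent just walks to one parent and stays there;
# with the bias switched on, the repetitive back-and-forth 'game' emerges
# from the very same old loop. (All names and numbers are invented.)

def run(bias, steps=20):
    parents = {"mother": -1.0, "father": +1.0}
    position, goal = 0.0, "mother"
    trace = []
    for _ in range(steps):
        target = parents[goal]
        position += 0.5 * (target - position)   # the old sensorimotor loop
        if abs(position - target) < 0.1:        # arrived: slap knees, laugh
            trace.append(goal)
            if bias:                            # the new second-order bias
                goal = "father" if goal == "mother" else "mother"
    return trace

print(run(bias=False))  # ['mother', 'mother', ...] - walks to mum and stays there
print(run(bias=True))   # ['mother', 'father', 'mother', ...] - the 'game'
```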



As they say in pop-music: the band and the audience make the show.

Friday, November 24, 2006

Interactions (3)

I would like to add a bit to my ideas about the concept of interaction. It is very abstract; these are just first thoughts that have to be worked out later.

We discussed interaction between things, between organisms and things, and between organisms (specifically: between humans). There is another division that I would like to introduce here, which divides the concept of interaction into yet two more forms. This division is not based on the kinds of objects that interact; rather, it is a typology of the kind of effect that the interactional process itself may have.

Consider two machines interacting. That is: each of the machines performs acts that have an effect in their respective environments. Both machines are part of each other's environments. Moreover, there are reliable relations between the actions of the one machine and the effect such an action produces in the pattern of actions of the other machine. In common terms we would say that each machine 're-acts' to the other machine's actions. Another way of saying this is to say that the two machines 'inter-act'.

Although each machine influences the pattern of actions of the other machine, the rules that govern such interactions are fixed. That is, the interaction changes the behavior of both machines, but it does not change the pattern of interaction itself. I call these rules, or patterns if you wish, the 'structure' of the interaction.

Now the big divide I want to introduce is between systems that can, or cannot, change the structure of the interaction, by interacting.

Interaction whose structure cannot be changed by interacting I call 'fixed interaction'. Interaction whose structure can be changed by interacting I call 'developmental interaction'.

Interacting machines generally are fixed systems. It is a technological-empirical question whether we will one day come to know of machines being able to develop, through their interactions, their own interactional structure. The current examples in Alife and AI do not convince me, yet. [cf. arguments of a.o. Tom Ziemke]

Organisms, however, interacting with their passive environments or with other organisms, constitute active interactive systems. (I just state this as a fact; it is of course quite possible to have a discussion about the validity of this claim.) Such systems change their interactional structure by interacting. This means that the rules that govern the interaction change. The psychological interpretation would be that such a system is able to learn from experience.
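For whatever it is worth, here is a minimal sketch of the distinction in code - a toy of my own making, nothing more. The fixed agent keeps the same input-output rule forever; the developmental agent's rule is reshaped by the very interactions it governs.

```python
class FixedAgent:
    """'Fixed interaction': the mapping from input to response never changes."""
    def __init__(self, gain=2.0):
        self.gain = gain

    def respond(self, signal):
        return self.gain * signal


class DevelopmentalAgent:
    """'Developmental interaction': interacting changes the rule itself."""
    def __init__(self, gain=2.0, rate=0.2):
        self.gain, self.rate = gain, rate

    def respond(self, signal):
        response = self.gain * signal
        # the interactional structure (the gain) is reshaped by the very
        # interaction it governs: here it drifts toward echoing the partner
        self.gain -= self.rate * (response - signal) * signal
        return response


partner_signals = [1.0, 0.5, 1.0, 0.8, 1.0, 0.6, 1.0]
fixed, developmental = FixedAgent(), DevelopmentalAgent()
for s in partner_signals:
    fixed.respond(s)
    developmental.respond(s)

print(fixed.gain)          # still 2.0: the structure of the interaction is unchanged
print(developmental.gain)  # has drifted toward 1.0: the interaction changed its own rule
```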

I want to end this discussion for the moment, but not before sharing with you a glimpse of where all this is leading: if we ask ourselves, what is an organism? What is the essence that makes something alive, and what makes a system an active, behaving system? It is my belief that such a system is a developmental interactive system and that most of what we call 'the organism' is in fact interactional structure that has developed, both on phylogenetic and ontogenetic timescales, in so-called 'layers' (I will explain this in a later blog). Because the newly developed interactional structure has a stability, we often forget that this structure is *interactional*; it is part of the interactional system, not merely 'part of the organism'. In fact, there is no 'organism' if we do not consider it in the context of its environment. But this is for tomorrow!

Friday, November 03, 2006

My funny Valentine




Jazz musicians (and likewise blues and pop musicians) never play what the score tells them to do. To be precise: they will deviate heavily from the prescribed rhythm. Most notes will be played later or earlier. If one plays exactly what is written the music will sound dull, obligatory. A good example of this is when big orchestras or choirs play well-known pop songs (you know: the London Philharmonic playing Nirvana's Smells Like Teen Spirit, with the heavy violin sections). What is missing in these orchestras is the 'groove' of the original song. It is because the orchestra is playing exactly what the score says.

Of course, the 'right way' of playing it is what the performers do, not what is written down. So it is not so much that the musicians are making errors; rather, writing it down in a score is just a poor method of representing jazz.

The deviations from the score are not random, however. A researcher at the NICI once gave a talk based on having measured all deviations from all notes in the written score in several versions of Chet Baker's 'My Funny Valentine'. As it turned out, the deviations, as a whole, form a well-defined pattern. If a jazz performer follows this pattern, the music will sound right, and people will regard the music as 'a good piece of jazz'. In fact, the only version of Chet's Valentine that couldn't be mapped to this pattern was a bootleg version that Chet never wanted to release because he wasn't satisfied with it, for some reason.

It is interesting to see that the 'jazz flavor', which marks the difference between a dull reproduction of the written score and a lively, groovy jazz performance, is itself completely explained by a mathematical formula. In theory, we could make a computer play lively jazz if we put the pattern that the researcher discovered and the written score together. In that sense, if we take the score and the computational pattern of deviations, we have 'explained' My Funny Valentine. But does it also explain anything about how Chet actually plays My Funny Valentine? With this I mean: does it say anything about the possible *causal mechanism* that leads Chet to play MFV in this particular way? I feel not.
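I do not have the actual deviation pattern the researcher measured, so the sketch below uses a made-up timing_offset() as a stand-in. It only illustrates the point that, computationally, 'score + deviation pattern' is all you need to produce the groove:

```python
# Toy version of 'score + deviation pattern = groove'. The real pattern
# measured from Chet Baker's recordings is not reproduced here;
# timing_offset() is an invented stand-in.

score = [(0.0, "C5"), (0.5, "Eb5"), (1.0, "G5"), (1.5, "Bb5")]  # (beat, note)

def timing_offset(beat):
    """Hypothetical deviation pattern: everything laid back a touch,
    off-beat notes laid back a little more (a swing-like feel)."""
    on_the_beat = beat == int(beat)
    return 0.03 if on_the_beat else 0.08

def humanize(score):
    """Shift every written onset by the deviation pattern."""
    return [(beat + timing_offset(beat), note) for beat, note in score]

print(humanize(score))
# The 'dull' rendition plays the beats exactly as written; the 'jazzy' one
# plays the same notes, shifted according to the deviation pattern.
```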

Of course, there is the possibility that Chet actually embodies a computational mechanism that takes the score, transforms all the notes by adding and subtracting the required deviations here and there, and then outputs the result on his trumpet. To be sure, on some level of *description* Chet *does* embody such a mechanism since this *is* exactly what he outputs on his trumpet. But I propose to make a distinction between such descriptive characterisations of the mechanisms at work, and the *real* (whatever that may be) mechanisms that actually caused the notes to be played. In particular, I see no reason why networks of cells in the *brain* would actually have to form computational patterns like the one described above in order to let this same pattern emerge out of Chet's trumpet. It could very well be, for example, that certain typical parameters on the level of Chet's breathing, lip tension, and so on, add to the emergence of this pattern. The pattern of deviations from the score (the jazziness of jazz) might emerge as a *collective property* of Chet-as-he-is-playing-the-piece. A pattern that evolves 'in the flow of things', not something that has been 'planned' by a control system. Perhaps induced by subcortical, emotional parameter changes and body posture, lip tension, and so on. Such a pattern would not have to be explicitly represented as some kind of computational *program* inside the brain, even if it emerges every time Chet plays the song. (Nobody 'programs' traffic jams, yet they emerge at the same hotspots almost every time.)

Most researchers are either not interested in this alternative or they heavily disagree with it. The first group is generally not interested in 'causal mechanisms'. They simply seek to find 'patterns' in behavior. The patterns themselves are enough 'explanation'. The second group disagrees strongly with the difference between the 'descriptive computational patterns' as observed and the 'real computational patterns' that are proposed as an underlying cause. Which is sort of the same thing, really.

Your looks are laughable
Unphotographable
Yet you're my favorite work of art...

Tuesday, October 24, 2006

If you can spray them, they're real

Yessireeya. There it is once again: the age-old discussion between the realists and the instrumentalists.

It came up in a reply by Sander to my earlier posting on perception. Sander suggested that theories are only 'useful' and 'just mathematics'. Electrons are formulae, not real things. I replied quoting a philosopher whose name I'd forgotten, about spraying electrons, but it turned out that the example was about spraying photons (oh well). The correct story, with the name of the philosopher included, I found again on the web here.

'When Ian Hacking, for example, once asked a physics colleague what he was doing, the physicist replied that he was "spraying photons". Impressed, Hacking wrote: "From that day forth I've been a scientific realist. As far as I'm concerned, if you can spray them, then they are real."'

(I just love Google: I searched on "spraying" and "philosophy". Got a first hit!)

Turns out that this article that Google popped up for me is very interesting in itself. A researcher did an internet poll among physicists asking what kind of things they consider to be real, and which things they consider to be 'not real'. The list is long, including things like "concepts", "phlogiston", "electrons", "earth", "colours", basically everything you can name. He then writes a long review, discussing all issues involved, and along the way you basically learn about all the different philosophical positions one can possibly take.

Like, unreal dude!

Friday, October 13, 2006

Nijntje's restaurant




Lately I've been listening a lot to a CD with tracks that tell the story of a particular female rabbit. If you happen to be in sync with my developmental zone, you probably know about this rabbit. Her name is Nijntje.


In one of the songs Nijntje is playing 'restaurant'. She puts on a paper hat, made out of a newspaper, and sings "Now I am the chef of Nijntje's restaurant".

From a certain age, children start to use physical objects as symbols, representing something else. The 'something else' can be a physical object, but it can also be some agreement, some fact, or a relation between objects. In the example above, there is a double take as far as representational content is concerned: first, the newspaper is used as a stand-in for a real chef's hat. Second, putting on this chef's hat automatically signifies that Nijntje is now a chef. (In fact, there is a triple take, considering that Nijntje is implicitly representing a small child, where in fact she is a rabbit, or even more factually: she doesn't exist at all). It is remarkable, if you come to think about it, how young children, at the age of 4+, are actually able to understand such subtle semantic relations.

There are several possible explanations to this representational capacity of young children (and adults, likewise). What I want to discuss here, however, is that regardless of the underlying mechanism, one sees that throughout development and over cultures human beings have a tremendous *need* to use physical objects as representational stand-ins, in going about their daily business. It is as if we couldn't achieve what we are achieving everyday, if we were not able to use physical stand-ins. Consider the effort Nijntje would have to display in convincing the other children that she is a chef in a restaurant, if she didn't have some physical object with which to symbolize this role in the game.

Adults still have the same need as children, although we have learned, in various ways, to stretch our abilities to deal without them. But at significant points we need physical stuff in order to help us pin down the ideas and concepts that flow in our mind. In my work as a teacher I couldn't explain anything without using practical examples, metaphors, pictures, schemata, and physical models. Our students learn that end-users of technology often have difficulty in using the technology precisely because the designer has not provided the user with obvious physical clues that reveal the underlying 'mental model' of the interaction. That is: if you don't have a clue about how to operate the machine, it is probably because there are no physical clues that represent the functional possibilities and how to operate them. A good interface provides a natural mapping: the physical form of the interface 'maps' in a natural way to the functional effects each part of the interface will have. When designers want to explain to customers, or end-users, the design they have in mind, and whether or not this satisfies the customer's wishes and demands, it is also obligatory to make the design 'tangible', to physically represent your ideas, either in a scale model or a good sketch.

All of this shows that human beings, even in adult life, always need a chef's hat in order to understand the restaurant-game...

Monday, October 09, 2006

Interactions (2)

The mind-body problem is about how mind-stuff relates to body-stuff. Body-stuff is the stuff we all know about (tables, chairs, molecules, planets, billiard balls), whereas mind-stuff describes the knowing itself: the thoughts, ideas, opinions, beliefs, desires, cravings and intentions that make up the mental realm.

The mind-body problem is essentially the question of how to make a thought out of a kilogram of brain matter. The question itself is already 'strange'; imagine what the (ultimate and correct) answer would look like.

Now to turn to my previous discussion on interactions (here): the difference between interaction on the communicative level (between two active agents) and interaction on the physical level (between two physical systems) is not unlike the difference between mind and body.

When we say that two human beings 'interact', what we mean by the word interaction is a transmission of messages in, to take Fodor's term, the language of thought. Human beings do so by physically moving their bodies about (which is detected by the other's visual system) and emitting sound waves (which are picked up by the nerve cells in our ears), but these physical changes are not crucial to the interaction that is taking place. In order to understand the interaction we have to look at all these events on a meaningful level, and the meaningful level, whenever human interaction is concerned, is the "informational level". And by information is meant: messages that are passed from one mental system to another.

When we say that two physical systems interact (for example, one nerve cell's impulse triggers, via a chemical transmission system, another nerve cell that is connected to it), the meaningful level *is* the physical level. Physical levels (there are lots of them, with their own translation problems between them) have a general description in terms of energy. I'm not that much of a physicist, but as I understand it one of the laws of thermodynamics states that "things" (physical systems) generally get more and more disorderly over time. When a system has less order, it also contains less usable energy. The energy leaks out, and the system 'falls apart', so to speak. Any sand castle will eventually become a flattened pile of sand again. We all turn to dust, someday. The disorderliness of a system is measured by its "entropy". Lots of noise in a system means a high entropy. Lots of rigidity/structure in a system means low entropy.

Now the funny thing is that although there is a huge theoretical gap between "people talking to one another" and "nerve cells talking to one another", there is a very straightforward way in which the concept of "information" is related to the concept of "entropy". Shannon equated information with uncertainty in this article, and since uncertainty can be - sort of - equated with entropy, information is entropy! Which seems paradoxical, but that is because I'm being overly blunt here; see a discussion of this topic.
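To make that relation a little more concrete: Shannon's measure of information for a source with symbol probabilities p is H = -sum(p * log2(p)), which is largest when the source is most uncertain and zero when it is completely predictable. A quick sketch (the example distributions are just my own illustration):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits.
    High entropy = high uncertainty = much information per message."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([1.0]))       # 0.0 bits: fully predictable, nothing to learn
print(shannon_entropy([0.5, 0.5]))  # 1.0 bit: a fair coin, maximal uncertainty
print(shannon_entropy([0.9, 0.1]))  # ~0.47 bits: more structure, less entropy
```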

Now at first I thought this might be interesting because a theoretical closure between what they call thermodynamics and information theory via the concept of entropy might be, in effect, a solution to the mind-body problem. Thermodynamics is about stuff, and information theory is about communication, about agents sending messages to one another (this is really the level at which Shannon speaks about it in the article, and in his writings there is always a 'sender' and 'receiver' involved, who are not machines, but are considered to be sentient active agents).

However, I quickly found out that *within* entropy theory there is a lot of discussion about what the concept really means. For instance, it is only a matter of word choice that Shannon picked 'entropy' as the name for a certain concept in his theory that he needed a name for. On the above website it is said:

"The story goes that Shannon didn't know what to call his measure so he asked von Neumann, who said `You should call it entropy ... [since] ... no one knows what entropy really is, so in a debate you will always have the advantage'" (see reference)

Still, all of this has to go into my thesis. If you want to know even more about it, read this book, by Ashby, one of the founders of cybernetics.

This last cybernetics link is also very interesting and funny, with many anecdotal references. In it, it is described that Heinz von Foerster allegedly said the following:

"FEEDBACK: An unpoetic inexpressive word that shrieks for replacement. Correct use of the word would refer to eating your own vomit. ".

Saturday, October 07, 2006

Interactions

Someday I want to write a PhD thesis on the concept of interaction. Just for starters, let's make a quick inventory of types of interaction, based on the number of active agents involved.

1. No active agents involved
In physical systems with no active agents involved - no human beings, other animals, or artificially intelligent systems - there is physical interaction based on the flux of energy between two physical sub-systems. For instance, when a billiard ball hits another billiard ball, energy is moved from one ball to the other, and the two balls thereby influence one another's behavior. Or when two chemical substances meet, there might be a chemical reaction, leading to a new stable state, in which some or all of the chemical substances have been combined, or fallen apart, etc. It is interaction of stuff with stuff. Stuff (or energy, which is the same) is being exchanged, moved about, all involved systems change state, i.e. they undergo some behavioral change, and new stabilities arise as a result of that.

2. Two active agents involved.
Interaction between two agents is different from interaction between two 'substances', since the interaction does not take place in the form of energy flow but in the form of *information* flow. Another word for this kind of interaction is "communication". It means that two active agents continuously *interpret* the physical changes as they are received on their sensors, coming from the other agent, *as signals*, with an associated *meaning*. The level at which the interaction has meaning is this informational level alone; the physical level is not interesting, just so long as it exists - otherwise the information channel could not be physically realised, which would render communication impossible. But, where in the case of the billiard balls the physical structure and energy processes in the system determined the nature of the interaction, in the case of two active agents interacting the nature of the interaction is determined not by the physical structure and energy flux that realises this interaction, but by the *meaning* of the communicative message that is sent from one agent to the other.

Of course, one could envision a situation in which two agents bump into each other, as a purely physical accident. But in this case, I argue, the agents should not be conceived of as active agents, but as passive physical systems only.

3. One active agent involved.
This is the most difficult case because it embodies a blend of the two definitions of interaction above. It is the case where a human agent 'interacts' with his (or her) environment. This kind of interaction is of central concern to cognitive science. The usual question there is: how does the active agent come to understand the (physical) environment, or, how does the active agent know how to act appropriately in it? (which basically amounts to the same thing, depending on your philosophy). Sensors on the active agent register input, the agent generates behavioral output on its 'actuators' (body movement), which in turn leads to new sensory input, and so on. This perception-action cycle, which evolves over time, defines "the interaction". The interaction can be 'functional' with respect to the internal goals of the agent, or, likewise, it can be 'appropriate' given the environmental situation. Biologically, one often speaks of 'adaptive' behavior, which relates to underlying evolutionary forces.

Now, what I want to discuss, in a later blog, is how one can mix physical interaction with communication, because I have a feeling that type 3 above contains some theoretical problems...

Thursday, October 05, 2006

Movie-talk

Today I've been watching two movies at the same time, zapping between the one and the other with my remote control. Both movies were interesting. One movie was about a famous club called Studio 54. The owner was rich, famous, eccentric, addicted. He hand-picked his guests every night. Of all the hundreds of people shouting in the line, only a few would get in. His 'door policy' was the key to the success of the club, however unfair such a policy might be, of course. Reflecting on it, I thought about how a talent for one specific thing might build you a complete empire (and bring you tons of money). Even being the famous owner, he still hand-picked the guests. This made the club special, attracting more special people (with money). He created a myth, but he did so by doing just one thing: going out there picking his guests. Of course he could not hand over this task to someone else, because the club was built on *his* talent, and this talent was personal, which made the club personal, and the personality (unicity) of the club was the fundamental reason why people wanted to be there in the first place. The great thing about going to this club was that once you were in, you knew that you had been 'handpicked' by this one guy.

The club, as it evolved, was driven entirely by a talent (based on intuition), a forceful internal drive for success, and a continuous need for having more and more of it. Talent, intuition, need for success; these are concepts completely alien to most of cognitive science. But as this movie shows: such human characteristics drive large parts of our society (the club being just an example, perhaps even a metaphor, of the human ways in general).

The other movie was about a guy who lied about almost everything, pretending to be a school teacher, a crook, a policeman, and at some point even getting appointed as a doctor in a hospital, without having had the education (learning to talk and act like a doctor from television shows). Reflecting on this, I thought about how a pattern of behavior, on the outside, can so completely and fundamentally fail to be the same as "the real thing", which is somehow defined "on the inside". He was not a real doctor, but nobody noticed, since he said the right words. He even got somebody the right medicine or cure just by 'going with the behavioral flow' of things. Did he actually cause anything functional to happen in that hospital? Is it possible to get the effects using a system in which there is 'nobody home'? Ultimately, at least in this movie, his scheme exploded; something was bound to go wrong at some point, and it did. But if that hadn't happened, would we have a right to say that this guy was not a doctor? Or should we accept the idea that a doctor is whoever does exactly as a doctor should do? What I ask here is of course exactly what Turing asked of computers in his famous paper on artificial intelligence: a classic.

The two movies, in all, couldn't have been more different. The one being about basic internal human capacities that are unexplained by objective science, the other being about the objective behavioral view of a human being, knowing that inside there is nobody home. Objective science versus human intuition. Although I'm a real scientist, I liked the Studio 54 movie better. Why would that be?

Tuesday, September 26, 2006

I see!

Cognitive psychology is often introduced by asking questions like: “How do we perceive the objects in our environment?” This question is then translated into slightly more technical terms as: “How does the brain create a meaningful picture on the basis of the light rays that stimulate our retina?” The process that makes pictures from light rays is called the process of visual perception. Instead of discussing the various theories that try to explain what the mechanism behind this process might be, I would like to question the question itself.

In my view, the question itself is seriously flawed, based on three errors of thought. I will call these errors the observer error, the system error and the metaphorical error. (But perhaps better names can be found, or have been found, for each of the errors, elsewhere, since I have not made the effort of doing a literature search on this topic).

The observer error
The first thing to note about asking questions like the one above is that it involves asking after the explanation of something that might not even exist in the first place. We ask: How do we create meaningful pictures in our head? Well, perhaps we don’t create such pictures, so the entire question becomes empty. Let’s think about the question from this point of view and try to analyse how we come to take for granted that ‘making a picture in your head’ is a valid starting point for investigation. I seem to know that I create pictures in my head (Hey, I do see, right?). But how do I know that you create these meaningful pictures? For an intuitive observer, it might seem to be a clear fact that human beings create a picture of the world inside their heads. But there is no observable evidence for this on the outside. Note that all research on visual perception is *based* on first asking the above question. The fact that visual perception is some kind of information processing mechanism that should produce, as output, *a picture* (an image, a pattern, "the thing that we see", or what have you), is taken for granted. It is a starting point. This means that the further empirical evidence coming from research on visual perception cannot be counted as evidence for the existence of the phenomenon-to-be-explained. The evidence merely has something to say on the question of what the mechanism for visual perception is, *assuming* that it is something that will produce a "picture" in our heads.

Now why is it so difficult to believe that we are *not* creating pictures in our head? Well, this has to do with the fact that researchers are always human beings, and we, ourselves, are in the business of visual perception, and we are consciously aware of this ‘fact’. The personal reflection of how visual perception *is experienced* by us gets confounded with the scientific, observable definition of the phenomenon that needs to be explained. Now, there is nothing wrong with taking a conscious experience as a phenomenon to be explained, but we should be clear about the status of the phenomenon. So what we should ask instead is: how is the conscious experience of a meaningful, visual picture of the world created? This is a different question. All options are open. We could, for instance, claim that the meaningful picture is just an illusion of our consciousness and that the sensory input on our retina does nothing to create such a picture in reality. We dream our way through life, one could say. We could also claim that the visual input is causing coherent pictures to arise in consciousness, but that this is not very *interesting*, because the function of visual input is mainly to directly constrain behavioral output, and that the coherent pictures and patterns, i.e., the "seeing" that we experience, are only an after-effect, a side issue.

Whatever the answer, the observer-error states that we should not confound our personal experience of the phenomenon with the scientific (observable) definition of the phenomenon. In the scientific definition, all we have is either a physical process (light falling on the retina), or a conscious living being *reporting* that he "sees things". There is an intuitive, but dangerous habit of automatically assuming some information-processing device (a ‘machine’ that processes the light on our retina to produce the consciously experienced picture in our head) which is posed in between the physical and the conscious phenomenal levels. But this device is not a real phenomenon at all, it is a theory in itself, and it is a theory that could be false.

The system error
This has to do with time, and with the assumption about the direction of the flow of events in the phenomenon to be explained. When we ask: how does the light on our retina get to be transformed into a meaningful picture of the world?, it is automatically assumed that *first* there is light on the retina, only after which this light is transformed, step by step, into a meaningful picture. The flow of events, through time, is linear (sequential, procedural), starting with the light out in the world (what is sometimes called one of the 'sensory qualities'), and ending with the conscious visual experience. The metaphorical comparison with a machine that receives input and produces output, going through a series of sequential steps, is easily made. This picture of process flow is ubiquitous in college textbook explanations of the structure and processes in the brain. It is how I tell it to my students: light hits a dog, reflects, hits our retina, excites nerve cells, which excite further nerve cells, which sets in motion a train of nerve impulses that travel from the eye into the optic chiasm, into the lower brain areas, into the cortex, starting in V1, splitting up into the dorsal (on the top of your head) "where system" (where is the dog?) and into the ventral (on both sides of your head) "what system" (it's a dog!). Upstream, somehow all visual information gets to be "integrated", producing a coherent picture of a dog. Then you get to shout "hey, it's a dog over there", or something like that. But this picture is a caricature of what is really happening, and I’ll tell you why.

For starters, in the brain, activity runs in more than one direction, since every nerve cell is heavily connected with both feedforward as well as feedback connections. So, no matter which nerve cell in the ‘stream’ we take as a starting point, activity is then spread both upwards and downwards, at the same time. It is one thing to acknowledge the fact that parallel processing takes place in a system (as everybody does); it is another thing to really think through the causal consequences. Consider that the phenomenon of me observing a dog takes some time in itself. I see something that will turn out to ‘be’ a dog, I continue to look at it, and in the continuous process of seeing this thing I see more and more of the dog. I might at first see that it is a brown living animal, even before I see it is a dog. But seeing that it is a brown living animal suggests that the complete visual stream is already activated (producing the brown-living-animal experience, right?). But that means that before we actually recognize the dog as a dog, massively parallel processing is going on. So even before we ‘recognize’ a dog being a dog, thousands and thousands of neural impulses are already shooting upwards and downwards all over the system. So what does it then mean to say that we “process the sensory input and thereby recognize the dog”? It means nothing, because the recognition of the dog might just as well be attributed to the feedback activity from the cortex back to the eye, instead of the activity that goes from the eye to the cortex. We might even consider the possibility that the observer just wanted to see the dog, which sort of places the causal origin of the perception of the dog completely inside the observer's head. The whole process didn’t start with light falling on the dog at all; it started with my own mental activity!

I do not wish to claim that visual perception is a kind of dreaming that has nothing to do with what happens outside of us. But what I want to conclude here is that visual perception is not, at least not necessarily, a sequential process that starts out there and ends in here. In the brain, multiple parallel processes run upwards, downwards and sideways in a massively connected network. What a gross simplification to tell our students that recognition of a dog is a five-step procedure that hops from the eye to the temporal lobe and that’s all there is to it!

The metaphorical error
I will try to be quick about this one because my space is running out here. The metaphorical error is related to the errors above. It encompasses the above ones, I guess. It refers to the idea that in asking the question: “How does our brain create a meaningful picture on the basis of visual input?”, we automatically, implicitly, apply a metaphor to human cognition that might be false. The metaphor is of course the metaphor of the information-processing machine. A real, physical machine that is, in the strong sense, comparable to other mechanical devices such as telephone-communication systems, bicycles, trains, and computers. (So I do not mean a machine in the general sense, in which all processes are by definition ‘machines’). If visual perception is a process that is executed by a machine, then it automatically becomes necessary that this machine have some input, and what better input to take than the light that falls on the retina? And if this machine is to have some output, then what better candidate might there be than our ‘visual thoughts’, the visual experiences we have when we look at, see, observe the world out there? But there is no proof whatsoever that visual perception is such a machine, or is explained by such a machine. In fact, it is a non-machine in the first place, since it has physical stuff as input and meaningful information as output. Such a machine cannot exist. The metaphor is a smokescreen that obscures the fact that between light on our retina and meaningful pictures in our head lies a complete mind-body problem (how to get from stuff to ideas), a problem that has not been solved. You cannot just imagine a machine to solve that problem; the problem is fundamental. We probably need a completely different conception of cognition in general in order to solve it.

Ok

So, Jelle, if this is all wrong then, what is your own idea, what is visual perception if it is not the machine you just discarded?

Well, I don’t know, of course. But I do have some hints, some questions to ask. I will pose them here as my final thoughts:

• What comes first, seeing, or acting? When I move my eye, is that considered to be a response, or the active search for a new stimulus?
• What is the goal, the utility, of visual perception? Surely a device so complex as the visual system (the physical stuff that is, eye, brain, and so on) did not evolve in order to create pictures inside our head. Could it be that the adaptive value of visual perception has more to do with sustaining a satisfactory relation with our environment, instead of just seeing the environment?
• Is there really a difference between the inner eye (imagination/dreaming, and so on) and ‘real seeing’? Or is the perceived difference between the two phenomena itself an illusion we create ourselves?
• Who is doing the seeing? We automatically ask: How do people... (follows some cognitive function). But what is this ‘people’ we speak of? Is it my brain that sees? Is it the person Jelle? Is it the Jelle that other people speak about in their native language? Is it the Jelle you just imagined writing this short essay? As long as we haven’t solved the problem of what a human being is (and at which explanatory level(s) he ‘exists’), how can we answer such a question?
• Should we explain behavior, or the mind? Is the mind really a phenomenon, or is it already in itself a theory (as Churchland would have it) that might be false?

Although I didn’t set out to end up where I do now, all these questions seem to run directly towards the good old mind-body problem again. Time to stop, I guess! If you want to read more about this famous problem in philosophy, you can do it here. See you next time!

Wednesday, September 20, 2006

Monsieur Tan

So everybody please meet Monsieur Tan:

(The below text is from a comment I wrote to Sander in my other blog)

"...aphasia is a neuropsychological disorder where the patient specifically loses the ability to speak (there are various forms, of course). The most famous example is "Monsieur Tan", a patient of a doctor called Paul Broca, back in 18-something. This patient could not say anything but the word 'tan' (with full intonation suggesting sentences with different words and meanings, but the output was just that: tan tan tan tan). Broca hypothesized that the disorder was due to the fact that a specific area of the brain was damaged. When opening up mr tan after his death, Broca indeed found that a special area ("Broca's area") was damaged.

Well, this discovery just about started the whole of cognitive neuroscience and neuropsychology.

Broca's original article can be found here"

Tuesday, September 19, 2006

The systems reply

Tomorrow a student of mine will present a talk on Searle's famous article Minds Brains and Programs. This student and I will probably be among the few in the classroom interested in those kinds of things. It is very theoretical, it is philosophy, it doesn't get you anywhere, it doesn't help you making money. It is just a discussion about the question of whether computers can really *understand* or whether computers are, at best, merely a *model* of the process of understanding. Fell asleep already did you? Well, be glad you're at home, and not sitting in this classroom, tomorrow, having to listen to it and even having to 'form an opinion' and 'discuss the topic with your neighbour'. But just in case you *are* interested in this question, here're some thoughts...

Rereading Searle I was reminded of the discussions that Iris and I have been having about the word "computation". This discussion I didn't start with Iris; I've had similar discussions with lots of people. I clearly remember my Big Talk in Nijmegen, where I presented the results of my internship to my fellow students and teachers. I was claiming (hell, why not?) that cognition is not computational and that it is, instead, best explained by the workings of a nonlinear dynamical system. One of the teachers (Lambert Schomaker) put it to me emphatically: But Jelle, *everything* is computational. How can you say that cognition is not computational? Even the rotation of the earth around the sun is computational! And furthermore, *everything* is a dynamical system! How can you say that cognition is a dynamical system? Even the moon in its orbit around the earth is a dynamical system. So what are you claiming here!?

Let's go back a few steps in time, and start up this discussion by looking inside Pim's old copy of "The Mind's I". Within this book is embedded the famous article by Searle, called Minds, Brains and Programs. If you already know this article you can skip to the next paragraph.. now. In this article, Searle argues against the possibility of Strong Artificial Intelligence (AI). Strong AI claims that computers can be used not merely as 'models' of cognitive processes; instead, a computer program that performs some function comparably to human competence *is*, as a matter of fact, a cognitive system. To put it simply: if it talks like a duck and walks like a duck, it's a duck. So if you build a computer that can process stories, and give responses to them the same way that I process stories and give responses to them, then this computer can be said to really *understand* these stories, just like I *understand* them. To be sure: Searle is against this idea. He says (there follows the Chinese Room thought experiment): suppose I sit in a closed room, with a large book of rules, and these rules tell me how to create an appropriate response to some linguistic input that is given to me via a small window; then I wouldn't necessarily, by performing these rule-based mappings, come to *understand* what I was doing. But the people outside the room would quickly come to believe that I really understood the input, since I was giving sensible replies. (Searle takes the example of Chinese: suppose you have a book that gives you the procedure for writing an appropriate Chinese response to some Chinese input; you could, by use of the book, fool any native Chinese speaker without in reality actually understanding anything of Chinese.)

One of the responses to his article, which he actually incorporated in the article, is called The Systems Reply. If you already know of this reply you can skip to the next paragraph... now. The systems reply says that "understanding is not being ascribed to the mere individual; rather it is being ascribed to this whole system of which he is a part". In other words, *you* didn't understand Chinese, but you, the rulebook and everything else you needed in order to do your input-output mappings, taken together, as a system, *did*.

Somewhere in discussing this reply, he says the following, which I would like to quote here in full:

"If strong AI is to be a branch of psychology, then it must be able to distinguish those systems that are genuinely mental from those that are not. It must be able to distinguish the principles on which the mind works from those on which nonmental systems work; otherwise it will offer us no explanations of what is specifically mental about the mental. And the mental-nonmental distinction cannot be just in the eye of the beholder but it must be intrinsic to the systems; otherwise it would be up to any beholder to treat people as nonmental and, for example, hurricanes as mental if he likes. But quite often in the AI literature the distinction is blurred in ways that would in the long run prove disastrous to the claim that AI is a cognitive inquiry. McCarthy, for example, writes. "Machines as simple as thermostats can be said to have beliefs, and having beliefs seems to be a characteristic of most machines capable of problem solving performance" (McCarthy 1979). Anyone who thinks strong AI has a chance as a theory of the mind ought to ponder the implications of that remark. We are asked to accept it as a discovery of strong AI that the hunk of metal on the wall that we use to regulate the temperature has beliefs in exactly the same sense that we, our spouses, and our children have beliefs, and furthermore that "most" of the other machines in the room—telephone, tape recorder, adding machine, electric fight switch—also have beliefs in this literal sense. It is not the aim of this article to argue against McCarthy's point, so I will simply assert the following without argument. The study of the mind starts with such facts as that humans have beliefs, while thermostats, telephones, and adding machines don't. If you get a theory that denies this point you have produced a counterexample to the theory and the theory is false. One gets the impression that people in AI who write this sort of thing think they can get away with it because they don't really take it seriously, and they don't think anyone else will either. I propose, for a moment at least, to take it seriously. Think hard for one minute about what would be necessary to establish that that hunk of metal on the wall over there had real beliefs, beliefs with direction of fit, propositional content, and conditions of satisfaction; beliefs that had the possibility of being strong beliefs or weak beliefs; nervous, anxious, or secure beliefs; dogmatic, rational, or superstitious beliefs; blind faiths or hesitant cogitations; any kind of beliefs. The thermostat is not a candidate. Neither is stomach, liver, adding machine, or telephone. However, since we are taking the idea seriously, notice that its truth would be fatal to strong AI's claim to be a science of the mind. For now the mind is everywhere. What we wanted to know is what distinguishes the mind from thermostats and livers. And if McCarthy were right, strong AI wouldn't have a hope of telling us that."

Now, although I disagree with Searle on many a thing (in fact, I still think The Systems Reply holds, and Searle does not succeed in discarding it successfully, and in the above quote he is being very rhetorical, as always), I think he has a valid point on this 'side issue'. The point, in my words, is that "computation" simply has two meanings. One meaning of the word computation refers to the idea that all of reality can be described as a dynamic system in which variables are coupled in one way or another, and if some variable 'maps' onto another variable in some reliable way, we could say that a transformation, aka a computation, has 'occurred'. But this kind of 'computation' has nothing to say about cognition or mental processes. It doesn't even by necessity say anything about the reality of these computational processes, because any physicist who is just a little bit of an instrumentalist at heart will tell you that such dynamic systems and the computations that go on in these systems are *models* of reality, not the real stuff. Nobody would claim that the computer model of an atom, rotating on your PC, *is* actually an atom. The mathematical talk of systems and computations is a *language* in which we communicate scientific ideas. It is not the real thing. (But you could just as well hold that this language is actually referring to something very real that is in one way or another 'just like' that which the language describes). But the realism/instrumentalism distinction is really not at issue here anyway.

What's at issue here is that there is another meaning of the word computation that refers to cognitive processes exclusively. It states that cognitive processes are 'computational', and by that is meant computational in a very special sense. This theory tells us that the state of the world (if there is such a thing as 'the state' of 'the world') is 'encoded' by the perceptual system as a perceptual 'representation', and that this representation is 'processed' as a set of symbols internal to the cognitive system. To be precise, it is the brain that physically 'instantiates' these symbols. The activity of the brain represents them, and these symbols interact with one another via a set of 'procedures' (rules, computations), in such a way that the sensory representation of the world is reliably coupled to some 'intelligent' behavioral response.
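To pin down what this 'special sense' amounts to, here is a deliberately crude caricature in code - a toy of my own making, not anyone's actual model: the world is encoded into symbols, internal rules operate over those symbols, and the outcome is coupled to a behavioral response.

```python
# Caricature of Strong Computationalism (an invented toy, not a real model):
# 1. the perceptual system 'encodes' the world into a symbolic representation,
# 2. internal rules (procedures) operate over those symbols,
# 3. the result is reliably coupled to a behavioral response.

def encode(world):
    """Perception as symbol formation."""
    return {("DOG", world["animal"] == "dog"),
            ("NEARBY", world["distance"] < 2.0)}

RULES = [  # condition over symbols -> behavioral response
    (lambda s: ("DOG", True) in s and ("NEARBY", True) in s, "shout: hey, a dog!"),
    (lambda s: ("DOG", True) in s, "keep watching"),
]

def respond(world):
    symbols = encode(world)              # the internal representation
    for condition, action in RULES:      # rule-governed symbol manipulation
        if condition(symbols):
            return action
    return "do nothing"

print(respond({"animal": "dog", "distance": 1.0}))  # shout: hey, a dog!
print(respond({"animal": "cat", "distance": 1.0}))  # do nothing
```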

Saying that all of nature is computational proves nothing about what I call, in reference to Searle's strong AI, Strong Computationalism, as described above. As Searle says above, "the mental-nonmental distinction cannot be just in the eye of the beholder but it must be intrinsic to the systems; otherwise it would be up to any beholder to treat people as nonmental and, for example, hurricanes as mental if he likes. But quite often in the AI literature the distinction is blurred". In my view, Lambert Schomaker was talking about computation in the 'universal' sense and thereby blurring the distinction between mental and nonmental processes. What I was claiming then, as I still do now, is that mental processes are not 'computational' in the Strong sense.
...

Hey, did you all fall asleep there!?? Wasn't anybody listening to what I was saying??? Ok then, let's take a coffee break!

-=-=-=-

Searle's famous article Minds Brains and Programs

