Tuesday, May 08, 2007

Window of opportunity

I'd like to try and formulate precisely an idea that is, at this moment, only a vague notion swirling somewhere between my brain and the keyboard...

First where it came from:
1. In neural theory there is the idea that groups of neurons can communicate best if their firing is synchronized to the same frequency. 40 Hz (40 spikes per second of the famous neuronal discharge that runs wavelike along the membrane of the axon, away from the cell body toward the axon terminals) seems to be a popular 'channel' in the human cortex. When the cells are synchronized, their 'windows for communication' are open. (source > FCDC)

2. According to our idea (Pim, Iris, Roel, Jelle) the brain and the physical environment interact and from this interaction dynamic cognitive structure is formed.

My idea is that the process in 2. is not unlike the process in 1., viz. that successful cognitive structure can only be formed if the activity in the brain and the coupled activity in the environment (i.e. the physical changes in the environment) are 'synchronized'. Only then is there a window of opportunity in which a stability (attractor) can be formed. You might also say that when environment and brain have the same rhythm, the channel between them is open for communication.
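As a toy illustration of the synchronization idea in 1. (my own sketch, not from any source I cite; all parameter values are arbitrary), here are two Kuramoto phase oscillators with slightly different natural frequencies. With strong enough coupling they phase-lock (the 'window opens'); without coupling their phases drift apart:

```python
import math

def kuramoto_pair(w1, w2, K, dt=0.001, steps=20000):
    """Two coupled phase oscillators (the standard Kuramoto model):

        d(theta_i)/dt = w_i + K * sin(theta_j - theta_i)

    Returns the final phase difference, wrapped to [-pi, pi].
    """
    th1, th2 = 0.0, 2.0  # arbitrary initial phases
    for _ in range(steps):
        d1 = w1 + K * math.sin(th2 - th1)
        d2 = w2 + K * math.sin(th1 - th2)
        th1 += d1 * dt
        th2 += d2 * dt
    return (th2 - th1 + math.pi) % (2 * math.pi) - math.pi

# Two oscillators near 40 Hz (natural frequencies in rad/s):
w1, w2 = 2 * math.pi * 40.0, 2 * math.pi * 41.0
locked = kuramoto_pair(w1, w2, K=20.0)   # strong coupling: phase-locked
drifting = kuramoto_pair(w1, w2, K=0.0)  # no coupling: phases drift freely
```

With K = 20.0 the phase difference settles near arcsin(Δω/2K) ≈ 0.16 rad, a stable phase relation; with K = 0.0 no such stable relation forms. The analogy to my point would be: only in the locked regime is there a reliable 'channel' between the two processes.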

EEC and PD

I teach two courses on participatory design. This is the method of involving the end-users in the actual process of designing new products or services.

Theoretically I am interested in embodied, and even more in embedded, cognition. Embedded cognition states that thinking is a process that emerges out of the interaction between brain, body and world. Parts of the world get recruited to perform important functions in a cognitive process. They serve as external memory aids, external representations and visual cues; they constrain possible behaviors (reducing choice options) and present automatic orderings. The physical world is also the medium by which previous behavior of the agent itself, via the traces it leaves in the physical world, gets to influence subsequent patterns of behavior.

Yesterday I read a paper by Hollan, Hutchins and Kirsh. I already knew Hutchins and Kirsh as representatives of an embedded cognition perspective. As it turns out, Hollan works in the same dept. at UCSD (San Diego), and he is a representative of the participatory design movement (or, to be more precise, of ethnographic methods, which is somewhere in the same ballpark). The paper presents an argument for why, on the basis of an EEC perspective on cognition, one should do research and design using ethnographic methods.

The argument runs as follows: since parts of the local environment become part of the cognitive strategy that users employ in dealing with a technology (think of an airline pilot in a cockpit), their expert knowledge is very 'personal', and the meaning of the interactions that the user engages in, and the function that certain parts of the artefact come to play in this interaction, is highly personal as well. This means that an objective, third-person investigation, especially in a sterile psychological lab, is not going to give you any insight into what is really going on in this user's real-life activities. EEC patterns of behavior grow in a historical process in which chance events and the personal histories of users can have big influences on the resulting roles of the interface components in the activity of the user. So we have to follow an empathic perspective on research and design, doing lots of observation and interviews 'on site', and perhaps even 'participatory design': letting the user be part of the design team, as an expert in a knowledge domain that other people (not being the user) simply have no access to.

This is a nice article for me because it combines two of my interests that I hadn't really linked so explicitly. Apparently the various activities I engage in have some inner logic that I wasn't aware of yet. I wonder how Salsa dancing is going to fit in the picture...

Thursday, May 03, 2007

Embedded neuromodulation

This post gives a summary of an article I've just read. It is mainly a note for myself.
The article can be downloaded here

The article describes experiments with an autonomous Khepera robot. Sporns et al. have modeled a neuromodulatory system resembling the dopamine reward system in the brain. This dopamine reward system influences the plasticity of the robot. Whenever unexpected reward takes place, the dopamine system becomes active; this leads to 'value-dependent learning': both the sensorimotor connections are directly affected (normal stimulus-response associations are formed) and the neuromodulatory system itself is affected.

QUOTE from a related article
Value signals combine temporal specificity (they are phasic and short-lasting) with spatial uniformity (they affect widespread projection regions and act as a single global signal). Value enters into traditional Hebbian-type synaptic rules as a third factor, in addition to factors representing pre- and postsynaptic activity. Because of their phasic nature, value signals effectively gate plasticity, in addition to influencing its magnitude and direction (see below). Value affects plasticity more or less uniformly throughout the widespread cortical and subcortical regions to which value systems project.
ENDQUOTE
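To make the 'third factor' idea concrete, here is a toy sketch of my own (not the authors' code; learning rate and activity values are made up for illustration) of a value-gated Hebbian update:

```python
def three_factor_update(w, pre, post, value, lr=0.1):
    """Value-gated Hebbian update: dw = lr * value * pre * post.

    'value' stands for the phasic neuromodulatory (dopamine-like)
    signal. When value == 0 the Hebbian term is gated off entirely;
    a negative value would flip the direction of plasticity.
    """
    return w + lr * value * pre * post

w = 0.0
# Correlated pre/post activity without a value signal: no learning.
for _ in range(100):
    w = three_factor_update(w, pre=1.0, post=1.0, value=0.0)
no_value_w = w

# The same activity paired with a phasic reward signal: the weight grows.
for _ in range(100):
    w = three_factor_update(w, pre=1.0, post=1.0, value=1.0)
rewarded_w = w
```

The gating is visible directly: with value = 0 the correlated activity leaves the weight untouched, while the very same pre/post activity paired with a phasic value signal drives the weight up.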

The interesting thing is that they put this system (which I do not completely grasp at the moment, but which at any rate resembles something like a sensorimotor system that gets 'laden' with internal, bodily based 'value' depending on reward, which is like Damasio in a way, i.e.: embodiment) in a real environment with objects. The behavior of the robot in the world influenced the subsequent inputs of the robot: at the beginning, reward-giving objects (the red objects) were dispersed quite homogeneously in the environment, but the behavior of the robot led to the formation of clusters of red objects. The result was that at first there was a quite predictable timing/rhythm in which the robot would first detect a red object visually, then grab it (feeling it with a touch sensor, thereby receiving the reward, which was supposed to model 'tasting/eating' it). But later on, all red objects were clustered, so after an initial delay the robot would suddenly get massive amounts of reward in short time intervals.

QUOTE from the article:
Our experiments document a progressive alteration of an environmental variable (the spatial distribution of reward throughout the environment) due to the behavioral activity of the robot. This alteration, in turn, has consequences on synaptic patterns encoding predictions about the occurrence of future rewards.
It is especially noteworthy that the differences between early and late phases in experiments with high object densities are neither the result of purposeful rearrangements of the environment by either robot or experimenter, nor are they due to the adjustment of “internal” variables over time such as learning rates, cell response functions, or motor variables. Instead they are the outcome of the coupling between brain, body and environment. This coupling is strongly reciprocal. Behavior affects the statistics of reward timings which drive synaptic plasticity through activation of a neuromodulatory system. In turn, synaptic changes alter the coupling between visual and motor units which affects behavior.
ENDQUOTE

Here they even suggest a possible role for embeddedness (i.e. reshaping your own environment) in the emergence of addiction:

QUOTE from the article
The experiments discussed in this paper may shed light on the activity and functional role of neuromodulatory systems (in particular, dopamine) in the course of “natural”, self-guided behavior. The “attractive force” exerted by clusters of rewarding objects, resulting in restricted trajectories of robot movement and navigation as well as repeated “rapid-fire” sequences of reward encounters are especially intriguing. Disruptions of the neurobiological bases of reward processing are thought to form a major cause for lasting behavioral changes and, eventually, chronic disease (addiction) in humans. Our results suggest the hypothesis that a pattern of persistent reward-seeking behavior may in part be generated as a result of a progressive reshaping of the environment coupled with long-lasting synaptic changes in specific neural structures. Future experiments will investigate this hypothesis in detail.
ENDQUOTE

For me this article shows that neuromodulation (embodiment) and embeddedness can be part of a larger perspective in which brain, body and world form a tightly coupled system, where the causal work depends on the interaction between world-events (behavior that reshapes the environment) and internal modulatory signals (reward leading to changes in synaptic connectivity, and hence in the speed/ease of learning). The article shows how this could work out in practice.

Tuesday, May 01, 2007

Time

As of this week I am in a philosophy group at work. I get one day per week to write philosophical papers - to be published in professional journals.

The first paper will have to be a follow-up to the paper that Iris, Pim, Roel and I wrote (and which will be published in Theory and Psychology). This paper deals with the question of what the new 'role' of the brain is when one starts thinking about intelligent behavior as emerging out of an interplay between brain, body and environment. In contrast, of course, stands the cognitivist idea of the brain as a sort of computer that takes in sense data, deciphers from these sense data a message in the language of thought (i.e.: 'meaning'), and then, on the basis of internally stored knowledge of the world, produces a response in the form of a behavior/act.

The new way of seeing the brain, we venture, is that whenever the brain comes into action, *there is already behavior going on*. Lots of behavior emerges out of the interplay between our body and the world. On a low level, parts of our nervous system help in forming structural couplings between the organism and its environment, leading to adaptive behaviors. This is not based on representing the environment internally, but on forming a stable 'relation' between brain and aspects of the environment. Once such a low-level sensorimotor relation has been formed, higher parts of the brain can (but don't always) put a *bias* onto this lower-level system. This bias works as an internal control parameter in a dynamical system. Increasing the value of the bias gradually can lead to dramatic qualitative changes in behavior. But the bias itself did not cause the behavior; it is just a bias. You need a fully operational sensorimotor loop for the higher brain activation to be able to effect its causal work. Just like you cannot steer by just having a steering wheel in your hands; you also need a car that the steering wheel is attached to. But perhaps this is a bad analogy.
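The bias-as-control-parameter claim can be made concrete with a toy bistable system (my own illustration; the equation and all numbers are arbitrary): gradually raising the bias past a critical value makes one attractor disappear, producing a sudden qualitative change in where the state ends up.

```python
def settle(bias, x0, dt=0.01, steps=5000):
    """Relax x under dx/dt = bias + x - x**3, a bistable system.

    For small |bias| there are two attractors. Past a critical value
    (|bias| > 2 / (3 * sqrt(3)) ~ 0.385) one attractor disappears and
    the state jumps to the other branch: a qualitative change in
    'behavior' driven by a merely quantitative change in the bias.
    """
    x = x0
    for _ in range(steps):
        x += (bias + x - x**3) * dt
    return x

low = settle(bias=0.2, x0=-1.0)   # stays on the negative attractor
high = settle(bias=0.6, x0=-1.0)  # negative attractor gone: jumps positive
```

Note that the bias does not 'cause' the resulting state on its own; it only reshapes the landscape in which the ongoing dynamics (the sensorimotor loop, in the analogy) settle.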

The next step is to take this starting point and write a new paper which has to involve, in some way, the subject of TIME, since time is the main theme of our philosophy group. I proposed that 'timing' is essential for the formation of structural couplings between organism and environment. I will be working out this idea in the coming weeks here in this weblog.

Monday, February 26, 2007

Abstractions

This blog (I mean: this post) is meant mainly for a discussion between Iris, Pim, Roel and me. But everybody is happily invited to join. This post is about the concept of 'abstraction' in relation to 'representational/non-representational theories of mind'. It is based on my reading of the following paper: Shimon Edelman (2003) But will it scale up? Not without representations.... [snap].

Ok here goes.

First let’s be clear on what it means to have (or to be) a representation.

Back to square one.
“A Rep is something that is used by a system as a stand-in for something else” Haugeland

My question would be this one: Does abstraction automatically mean ‘representations present’?

Let’s consider a few kinds of processes that could or could not be said to be representational.

Type 1: Symbols of classes
Suppose I see a cow in a picture in a children’s book of animals. Next, I see a real cow in a field. I ‘know’ that both these ‘cows’ have ‘something’ in common. I say: these are both the same (these are both cows). And someone understanding English would agree, perhaps after adding ‘in some way they are the same, yes’ (implying that in some other ways they are not!).

Now because these things are the same, I am allowed to call each of them by the same name: “cow”. The word cow is thus a representation of that which makes both these objects ‘the same’. But it is not precisely defined what the word ‘cow’ actually refers to. So one can have ‘stand-ins’ for ‘somethings’ even when the something itself is ill-defined or even unknown (even nonexistent, as with unicorns). When I equate the picture and the real cow, what I am in fact performing is an abstraction. I can make an abstraction from the real cow, and from the picture, and in the abstraction these two items become one. Or at least they become items that belong to the same ‘abstract class’ of things.

But of course, the cow in the picture is not a real cow (ceci n’est pas une vache). The cow in the picture is a representation of a cow, some cow, a general cow, the idea of a cow, or what have you. The picture, like the word ‘cow’ is more a ‘symbol’ of the ‘concept cow’, whereas the real cow is an exemplar of a real cow (or: a token of the type).

Still, I could also have performed an abstraction on two real cows in saying ‘these are both the same’. I would still be performing an abstraction, but without there being an external representation involved. Or is that impossible? The question is whether one needs internal representations in order to perform abstractions.

Type 2: Representations of particulars
Suppose now I own a cow. One day I decide to make a painting of this cow on canvas. The next year, my cow dies. I have paintings of all the cows I once owned in my living room, since I am a real cow-lover. Someone asks me: which of the cows did you love most? I immediately point to the third painting from the right: that cow I loved most. Now here’s your classical representation. The painting serves as a stand-in for the real cow, in order to ... (whatever). In this case: I use it to communicate about my feelings and to differentiate a particular cow from other cows.

Type 3: Signals in a communication channel
Suppose now I see a cow on a television screen. The broadcast is live, so when the cow moves (out there) I see it making the same movement (here on the screen), realtime (i.e. semi-realtime, there is always a lag). The cow on the screen, that is: the pattern of electronic bursts on the tube, or the illuminated pixels on my TFT, is, of course, not the real cow. But the cow that I see (on the screen) is the real cow. How can this be? The real cow is, one could say, ‘delivered to me’ via an artificial channel. But it doesn’t matter how it is delivered; what matters is what is being delivered. In this case: the real cow. Even if the footage was not live but a replay of happenings hours ago, it would still be the real cow, as it existed hours ago. There is only a technological distance between me and the cow, but this is no different from seeing the cow ‘real-life’. For consider me standing in the field with a pair of binoculars. Would one say that what I see in the binoculars is not the real cow? In the latter case the technological distance would be short, perhaps only as thick as the bifocal glasses on my nose or the binoculars in my hand. But whether it is glasses or TV equipment does not make the fundamental difference between real and fake cows. Only the physics of the communication channel are a bit different.

Are the electronic bursts on the tube a ‘representation’ of the cow? I believe not. I am not ‘using’ these electronic bursts at all. Or rather, I am ‘using’ the electronic bursts on the TV tube, but I am using them in precisely the same way that I am using the electronic bursts inside my brain, i.e. the patterns of activity in my retinae, in my optical tract, in my thalamus, in my visual cortex. To say that all of these patterns are representations of the cow is just a matter of definition. I could also say that they are not representations of the cow, merely ‘channels of communication’. All of these signals, inside or outside my brain, are part of the communication channel that connects the cow to ‘me’. If you would care to call these channels ‘representational’, fine. But note that these kinds of representations are qualitatively different from types 1 and 2, since in this case the cow is ‘right there’ with us. The signals are not a ‘stand-in’; they are the connecting medium between something that is ‘still there’ and the cognitive agent.

[Right now I am not discussing what all this means for the definition of “I”/“me”/agent]

Type 4: Biasing signals originating in other channels
Suppose now I drive my car in rural country. It is raining; I can’t see a thing. Or, to be precise, I can see something: big dark blots of somethings seem to be blocking the road. I cannot see ‘what they are’, though. People? Cars? Then I hear a big MOOOOOOH. Upon second inspection I now see tails wagging, heads turning: several cows are stumbling about in front of my car. Here we have a multichannel situation. The first channel contains a lot of noise. I cannot see the cows, only ‘vague images of somethings’. The second channel, via the auditory nerve, provides me with a signal that biases my visual perception of the world. Suddenly I now do see the cows, because the auditory signal helps me. In terms of Haugeland, the sound of the cow is not (necessarily) a representation of a cow: the animals are right there, I can see them, but I couldn’t have recognized them without the biasing signal. So it is an addition of some sort, not a replacement. In dynamical terms one would say that a dynamical process that is seeking (but not finding) its attractor is pushed over the edge into an attractor valley by an additional forcing variable.
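That last sentence can be taken literally with a toy double-well system (my own sketch; the equation and numbers are arbitrary): a state sitting on the ridge between two attractor valleys gets tipped into one of them by a small additional forcing term.

```python
def perceive(forcing, dt=0.01, steps=3000):
    """Double-well dynamics dx/dt = x - x**3 + forcing.

    x = 0 is the ridge between the two attractor valleys (roughly:
    'not yet recognized'). Without forcing, the deterministic state
    sits on the ridge; a small extra signal from another channel
    tips it into a valley ('cows!').
    """
    x = 0.0
    for _ in range(steps):
        x += (x - x**3 + forcing) * dt
    return x

ambiguous = perceive(forcing=0.0)   # stuck on the ridge, no resolution
recognized = perceive(forcing=0.1)  # the MOOOOOOH tips it into a valley
```

The forcing term is an addition, not a replacement, exactly as in the example: it does not carry the percept itself, it only pushes the already-running visual process over the edge.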

Afterthoughts:

Just the other day I was thinking: my biggest problem with cognitive science is the problem associated with abstraction. Consider the cognitive process that underlies our utterance: “the picture in the book and the animal in the field are the same thing; they are both cows”. Now, people can do this. And in many circumstances it is a useful, functional, intelligent, smart, coherent, appropriate thing to do. But that does not mean that there exists a phenomenon called ‘dealing with cows’, or that the behavior associated with watching a picture in a children’s book is ‘in some way the same as’ the behavior associated with watching a cow in the field. Especially not if, for instance, there happens to be no fence between you and the real cow in the field. Then, suddenly, one realises that real cows are in a lot of ways not the same as pictures of cows in children’s books. Suddenly, one realises that cows are very big animals. Suddenly it becomes extremely important to differentiate between a cow and a bull. In other words: the interaction between you and a real cow is qualitatively different from the interaction between you and a picture of a cow. So although the capacity to abstract away from real cows and pictures to the abstract class ‘cow’ is an important cognitive achievement, it is not true that, in order to understand how we deal with either real cows or pictures of cows, we need to take this abstraction process as the basic cognitive operation that explains our behavior. We do not generally deal with cows on the basis of an abstract concept of cow to which both pictures and real cows belong. We can (e.g. in school, in an intelligence test, in reading this text), but just as often we don’t. The real cognitive structure that explains our behavior in the case of the children’s book is, I think, completely different from the real cognitive structure that explains my behavior when I’m out there in the field facing a real cow.
(To elaborate on that just a little bit: when a child is reading a book with a picture of a cow, I guess the child is not doing anything that is even remotely connected to a real cow. What the child is doing is connecting words, behaviors, and pictures in a learning process. The first connection between the real cow and the picture of the cow is made because other people are connecting the word ‘cow’ to both of these experiences. Thus the connection is highly artificial, explicitly trained by an external tutor and linguistically mediated. In that sense the picture is more a symbol than a realistic ‘image’ of the thing it refers to. This can hardly be called a ‘fundamental property’; it is more like an advanced specialisation of our cognitive system. The fact that in medieval times pictures of things were attributed all kinds of psychological or magical properties is another case in point: it is apparently only a recent discovery that a picture of something is ‘just a picture of something’.)

Friday, January 05, 2007

EEC versus Cognitivism

There has been a debate going on for some time now regarding two conceptual frameworks in cognitive science. The standard, mainstream framework is usually called cognitivism or "the information processing view" of cognition. Bluntly speaking, it states that the human mind is a piece of software, consisting of computational structure and stored knowledge (representations), that is running on (or, as they say somewhat less bluntly, physically instantiated by) a piece of hardware aka the brain. The other conceptual framework is called situated/embodied cognition, which is actually two frameworks that sort of go hand in hand and the total of which I will henceforth call "embodied embedded cognition" or EEC. EEC states that the human mind is an emergent property of the physical body and its characteristics, the physical local environment and its structure, and the processes of the brain. I will concentrate on embeddedness/situatedness in what follows.

Embeddedness/situatedness refers to the idea that 'the world itself' can do some (or all?) of the computations that we would usually hold to be a product of the processes of the brain. A famous example, cited by Lave, is about someone who is asked to make a cake for 3 people when the original recipe is for 4 people and involves 500 grams of dough. What this person does is simply make the dough for 4 people, squeeze the dough into a pancake, and then take out a quarter of the circle. The remainder is just about 500 * 3/4, which is the amount that was needed. In this case, the brain of the person in question did not have to calculate the assignment 'in his head'; instead, the physical properties of the world were used in concordance with our perceptual abilities (the visual ones), which happen to be such that they allow us to quickly and quite directly perceive and cut out a quarter of a circle. (It is actually not unlike explicitly, physically, calculating the assignment (500 / 4) and then the result [* 3], which is what we learn if we have to multiply by ratios by heart, since "division is multiplying by the reverse", as they say in Dutch schools.)
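Just to literalize the arithmetic in the example, the 'in the head' route and the 'cut a quarter off the pancake' route land on the same amount:

```python
dough_for_4 = 500  # grams, from the original recipe

# 'In the head': scale the recipe by the ratio 3/4.
in_head = dough_for_4 * 3 / 4

# 'In the world': make the full pancake, then cut away one quarter.
pancake = dough_for_4
in_world = pancake - pancake / 4

# Both routes yield 375 grams.
```

The interesting difference is of course not in the result but in where the work is done: the second route offloads the division onto perception and a knife.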

Anyway.

Some people are die-hard cognitivists, other people are hard-core EECers. Practically, I would like to pursue EEC empirically, give it a chance, and see where it ends. If the framework is no good, it will eventually die out. If it is viable, it will live and we will learn from it. But this is not what this blog is about. This blog is about the fundamental theoretical issue of which framework is the preferred framework if we would have to choose *now*, based on what we know at this moment.

To me, the most interesting *theoretical* question of the debate between the opponents would be: which of the two frameworks constitutes the ultimate 'ground' on which the other framework is built? Because, as anyone would acknowledge, the typical processes that are taken as fundamental in each of the frameworks *do happen*, at least descriptively, or in our conscious experience. So if one of the frameworks turns out to be false, the processes that this framework took as fundamental would still need to be explained in the vocabulary of the *other* framework. To be precise: if cognitivism is false, EEC would need to be able to explain how people can represent facts, store knowledge, 'reason', and follow procedures (by heart!). For example, EEC would have to explain how it is possible that people *can* solve the assignment 500 * 3/4 by heart if they need to, without the help of pancakes. Conversely, if EEC is false, cognitivism needs to explain how this dependency of human cognizers on the external world can be sustained by a cognitivist system. Using 'knowledge in the world', as Donald Norman calls it, is a process that exists, but if cognitivism is the game, then it must explain how to do it using representations and computations.

Let us see what both camps would argue. Since cognitivism is mainstream, I probably need a bit more words to argue for EEC being the ultimate ground.

For a cognitivist, all this talk about situatedness is just a 'complex' version of cognitivism all along. The cognitivist would say: yes, of course the computational-representational system could *use* the physical world as a means for storing information, or as a tool for quickly computing stuff that would be more demanding to compute internally. There are machines, road signs, etc. Nobody denies that. A cognitivist view of the mind does not mean that the cognitivist brain isn't allowed to use 'smart tricks' if it knows of them. But the 'currency' in which information is traded between brain and world consists of hard cognitivist coins nevertheless. Whenever you 'use' the world in order to let it calculate for you, you need to make contact with the world, and when you do, you do so in your guise of being a computational-representational system. There is no other way of interacting with the world cognitively than in terms of information processing. Meaningful signs need to go in, and behavior needs to come out. In the Lave example, you need to be able to perceive, encode and store the information that came from the visual system that was observing the pancake. You need to understand that there is a smart way of calculating 500 * 3/4 without using your inner resources. You need the knowledge in order to know what to do to implement this smart plan of yours. You need to devise a motor plan that actually lets you cut a quarter off a pancake. You need, in sum, a complete 'psychological system' in order to do the 'embedded stuff' that these EEC-ers brag about.

For an EEC-er, things are completely the other way around. People like Merleau Ponty try to completely build up a new way of looking at the same old psychological phenomena, such that embodiment and embeddedness are now more fundamental, and 'prior' to processes like computation and storage of knowledge. This is somewhat counterintuitive because we have learned to talk about our experience in a cognitivist way, but phenomenologists say we should 'bracket out' this talk since it is artificial and return to the core experience itself. Anyhow, our intuition and conscious inner speech about what we are and how we do the things we do can of course not be trusted, even in a materialist empirical science. So, what EEC-ers say, e.g., is that in low level motor planning as well as in low level perception, EEC-like processes already exist. Perception and action are tightly coupled systems, such that, even the most low-level perception of objects is already 'aided' by certain specific actions undertaken by the agent, in the world. Perception is dependent on action! For example, the perception of objects is not just based on a passively received pattern of excitations on the retina, but it is also dependent on the pattern of *actions* taken by the agent as it saccades with its eyes in the scene of interest. That is: movement of the eyes *constructs* a series of inputs that contains the sort and type of information it contains precisely because this series was generated by movement of the eyes. In other words: part of what we 'see' is our own actions reflected in the effect these actions have on the world. And this is *low level*, in the order of milliseconds. All the rest of cognition needs to be built from these basic perceptual building blocks. E.g., take our perception of objects. Objects, as we perceive them, are not objective 'things' in the world, but already a blend of 'what is out there' and 'how we, with our physical capacities approach that what is out there'. 
Perception is already embedded and embodied. See also James Gibson's 'affordances' in this respect. Now if EEC is something that is already there in low-level processes, then the question is: can we push this conceptual framework higher up (and why not?)? Does it (why wouldn't it?) scale up all the way to centre-court psychological processes? Even if everyday cognitive phenomena, like memory processes or planning and reasoning, need some 'extra' explanatory apparatus involving talk of computations and representations, it could still be argued that this 'extra' is just a 'complex version' of EEC processes. EEC is the fundamental basis, and only somewhere upstream in the complexity of things are we able to perform 'computations' and 'store knowledge'.

As it stands, I think that cognitivism *appears* to have a better, detailed story about how EEC-like processes could be implemented by a classic information-processing system. I say appears, because it is my strong feeling that a lot of the *words* that cognitivism uses actually need a lot more explaining once one comes right down to asking what these words mean, precisely. Cognitivism has been very smart in assuming this Cartesian split between 'software' and 'hardware', which makes it such that practically anything can be 'assumed to exist' on the software level without the cognitivist having to explain exactly how it is 'implemented', as long as the process in question can "in principle" be implemented on an information-processing system. And what process could not? (Noncomputable processes? Nonrepresentational processes? But what are they?) So cognitivists have the easy way out: as a framework, cognitivism can account for any type of process that researchers come up with. But it also makes the cognitivist coin rather valueless. For if everything is possible in an information-processing system, why are people not 'everything'? Why are we only the bunch of phenomena that we actually are?

EEC has a problem in that the question of how EEC processes 'generate' full-blown cognition (planning a hike in the mountains, having a conversation with your partner about something that happened yesterday, remembering, by heart or via reasoning/recalling, where you left your socks the day before, immediately recognizing a person, knowing his name, being able to recall your relation to this person, and so on) simply cannot even be *articulated* in an EEC way. The cognitivist response (oh well, it is all just computational procedures, even the EEC stuff) is not available to the EEC-er. Suppose I sit in a train and think about what to have for dinner and what to remember to buy at the grocer's that evening. How would the train's interior, the lights, the other people present, the music on my iPod, how would those physical actualities possibly contribute to this cognitive process that is actually taking place right there and then? Yes, if I had paper and pencil, I could come up with an EEC story. But I have no paper and pencil. I am thinking, creating and remembering my meal for the evening *in my head*. On my own. No environment present. I could saccade my eyes for all I'm worth, but it wouldn't help me in remembering the recipe, would it?

In conclusion, both frameworks have problems. Cognitivism superficially is stronger, but I think this strength is artificial: in everyday life we already talk in cognitivist vocabulary, so it easily seems as if cognitivism has 'explained' something where in fact it has explained nothing; it has only resonated with our everyday way of naming and talking about things (which in itself is not an explanation of anything, at least not scientifically). The problems that AI systems have in dealing with 'common sense' behavior are a case in point here. We are easily seduced by cognitivism. But EEC offers no real alternative either - yet. It is up to the EEC-ers (us, that is) to explain how cognition (real cognition) is possible in an EEC system without secretly, implicitly assuming a cognitivist system that *does* all these fancy embedded, embodied tricks.

Afterthought:
Merleau-Ponty and his phenomenology could be a viable starting point, if not for the Very Complex and Incomprehensible Writings of these French guys. The problem with these kinds of writings is that once you understand them, you can, for yourself, use MP's ideas to 'explain' the EEC fundamentals all right, but the rest of the academic world still does NOT understand MP, and so you have put yourself on an island together with the other two people who understood what he was trying to say.

Thursday, December 14, 2006

What's inside?




One-and-something-year-old Jonas is playing games. Self-designed, highly entertaining games - at least to his two biggest fans. The semantics of the games are sometimes difficult to assess, but the behavior has a clear, observable, repeatable structure, and it is clearly something new that was not witnessed before.

For instance, Jonas will walk to his mother, slap her knees with both hands, laugh with excitement, turn round, walk to his father, who sits at the other side of the table, slap his knees with both hands, laugh with excitement, and repeat the procedure again and again, walking back and forth between his two parents. I had not seen this kind of structured, repetitive behavior before; it emerged somewhere in the last week or weeks. The behavior is marked by some arbitrary sequence of acts that is repeated over and over.

And now, the little man is asleep. As fatherly pride slowly fades, philosophical reflection appears on the scene.

If we try to describe what we, observers, experience while watching the child, we would quickly choose some appropriate words for what we experience to be 'a new behavioral capacity'. In the example above the words are 'playing games'. Cognitive science excels in the next step, which is to replace these words with other words that supposedly capture the 'essence' of the observed phenomenon. We can expect analyses such as this one: playing games is "essentially" the ability to express and follow a rule-structure.

At the same time, neuroscientists are carefully investigating what has changed in the neural organisation of the brain, since it is clear, at least to these scientists, that new behavioral capacities must be caused by significant changes in brain organisation.

Suppose the brain-scientist indeed discovers a structural brain change that is reliably associated with the onset of 'game playing' behavior in small children. Would that be proof of the theory that the new neural organisation is responsible for "the ability to express and follow a rule structure"?

I'm afraid not.

What we have, as a fact, is that there is a brain change that correlates with a behavioral change. The behavior is complex, it is real, it is located in space and in time, it emerges within a physical and social environment. The 'game' is played using a physical body with physical properties. It is played in a context of social relations between father, mother and child. It emerged out of a situation in which the mother and father were sitting opposite each other at a table from which the child was just leaving after having had dinner. Within this *practical, real* situation, a brain change had its effect. Game playing was the result.

Abstracting away from the observed phenomenon to the underlying 'essence' is a dangerous activity. It is often grounded in values, beliefs and perceptions of the observer. It is also constrained by the language in which the abstraction is expressed.

But even if the abstraction is a valid one, there is no proof whatsoever that the brain change in itself corresponds to (represents) this 'essence' that is described by the abstraction. It could be that the brain change, in itself, is something very different. Something that, in itself, has nothing to do with 'following a rule-structure'. The newly observed behavior of game playing, and its associated 'essence' - following rules - might emerge only when the 'updated' brain is operating in an appropriate physical and an appropriate social environment, preceded by an appropriate history of actions.

Cognitive science has a strong history of what Churchland calls 'vertical analysis', in which behavior is broken down into several vertical 'columns', corresponding to the classic 'faculties' of mind and their modern heirs, the computational-representational 'modules'. "Rule-following" is such a module. But in the process of breaking down the observed phenomenon into meaningful abstractions (essences), we can also choose to cut reality in horizontal slices. A horizontal slice corresponds to a full-blown, functional, complete sensorimotor loop in which parts of the brain, the body and the environment take part. Structurally new behaviors that clearly mark cognitive developmental phases, such as the emergence of 'game playing', might be explained by the development of a new second-order influence within the brain upon such a horizontal, existing, and operational, sensorimotor cycle. The brain change thus consists not of a new module that represents the new 'capacity', but of a new *bias* upon the existing agent-environment interaction.
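To make the idea a little more concrete, here is a toy sketch in Python. Everything in it is my own illustrative invention (the function names, the numbers, the simple linear rule): an agent runs a fixed sensorimotor loop toward a target, and a second-order *bias* parameter modulates that very same loop, shifting the stable point of the agent-environment interaction without adding any new 'module'.

```python
# Toy sketch, purely illustrative: a fixed sensorimotor loop whose
# behavior changes not by adding a new module, but by a second-order
# bias that modulates the existing loop.

def sensorimotor_step(position, target, bias=0.0):
    """One pass through the loop: sense the target, act toward it.
    `bias` is the second-order influence: the same rule, skewed."""
    sensed = target - position        # sensing the environment
    action = 0.5 * sensed + bias      # acting: same loop, biased
    return position + action          # the environment changes

def run_loop(steps, bias=0.0):
    position, target = 0.0, 10.0
    trajectory = [position]
    for _ in range(steps):
        position = sensorimotor_step(position, target, bias)
        trajectory.append(position)
    return trajectory

unbiased = run_loop(10)           # settles on the target (10.0)
biased = run_loop(10, bias=2.0)   # same loop, new attractor (14.0)
```

The point of the sketch is that the biased run shows qualitatively new behavior (it stabilizes somewhere else) even though no new mechanism was added; only the existing agent-environment cycle was modulated.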



As they say in pop-music: the band and the audience make the show.

Friday, November 24, 2006

Interactions (3)

I would like to add a bit to my ideas about the concept of interaction. This is very abstract; these are just first thoughts that have to be worked out later.

We discussed interaction between things, between organisms and things, and between organisms (specifically: between humans). There is another division that I would like to introduce here, which divides the concept of interaction into two further forms. This division is not based on the kinds of objects that interact; rather, it is a typology of the kind of effect that the interactional process itself may have.

Consider two machines interacting. That is: each of the machines performs acts that have an effect in its respective environment. Both machines are part of each other's environments. Moreover, there are reliable relations between the actions of the one machine and the effect such an action produces in the pattern of actions of the other machine. In common terms we would say that each machine 're-acts' to the other machine's actions. Another way of saying this is that the two machines 'inter-act'.

Although each machine influences the pattern of actions of the other machine, the rules that govern such interactions are fixed. That is, the interaction changes the behavior of both machines, but it does not change the pattern of interaction itself. I call these rules, or patterns if you wish, the 'structure' of the interaction.

Now the big divide I want to introduce is between systems that can, or cannot, change the structure of the interaction, by interacting.

The first category of interaction I call 'fixed interaction'. The latter case I call 'developmental interaction'.

Interacting machines are generally fixed systems. It is a technological-empirical question whether we will one day come to know of machines that are able to develop, through their interactions, their own interactional structure. The current examples in ALife and AI do not convince me, yet. [cf. arguments of, among others, Tom Ziemke]

Organisms, however, interacting with their passive environments or with other organisms, constitute active interactive systems. (I just state this as a fact; it is of course very well possible to have a discussion about the validity of this claim.) Such systems change their interactional structure by interacting. This means that the rules that govern the interaction change. The psychological interpretation would be that such a system is able to learn from experience.
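The divide can be sketched in a few lines of toy Python. All names and numbers here are my own, purely illustrative: in the 'fixed' case two coupled agents react to each other under an unchanging coupling rule; in the 'developmental' case the very act of interacting rewrites the rule, so the structure of the interaction itself develops.

```python
# Toy sketch, purely illustrative: fixed vs. developmental interaction.

def interact(a, b, gain):
    """One round of inter-action: each agent reacts to the other,
    coupled by `gain` (the 'structure' of the interaction)."""
    return b * gain, a * gain

def fixed_interaction(rounds, gain=0.5):
    a, b = 1.0, 2.0
    for _ in range(rounds):
        a, b = interact(a, b, gain)
    return gain   # the structure of the interaction never changed

def developmental_interaction(rounds, gain=0.5):
    a, b = 1.0, 2.0
    for _ in range(rounds):
        a, b = interact(a, b, gain)
        gain += 0.1 * abs(a - b)   # interacting rewrites the rule itself
    return gain   # the interactional structure has developed
```

In the fixed case the agents' behaviors change but the coupling rule comes out exactly as it went in; in the developmental case the rule itself is different after the interaction, which is the toy analogue of a system learning from experience.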

I want to end this discussion for the moment, but not before sharing with you a glimpse of where all this is leading. If we ask ourselves: what is an organism? What is the essence that makes something alive, and what makes a system an active, behaving system? It is my belief that such a system is a developmental interactive system, and that most of what we call 'the organism' is in fact interactional structure that has developed, on both phylogenetic and ontogenetic timescales, in so-called 'layers' (I will explain this in a later blog). Because the newly developed interactional structure has a stability, we often forget that this structure is *interactional*: it is part of the interactional system, not merely 'part of the organism'. In fact, there is no 'organism' if we do not consider it in the context of its environment. But this is for tomorrow!