This blog (I mean: this post) is meant mainly as a discussion between Iris, Pim, Roel and me, but everybody is warmly invited to join. This post is about the concept of 'abstraction' in relation to 'representational/non-representational theories of mind'. It is based on my reading of the following paper: Shimon Edelman (2003), "But will it scale up? Not without representations."
Ok here goes.
First let’s be clear on what it means to have (or to be) a representation.
Back to square one.
“A representation is something that is used by a system as a stand-in for something else” (Haugeland).
My question would be this one: Does abstraction automatically mean ‘representations present’?
Let’s consider a few kinds of processes that could or could not be said to be representational.
Type 1: Symbols of classes
Suppose I see a cow in a picture in a children’s book of animals. Next, I see a real cow in the field. I ‘know’ that both these ‘cows’ have ‘something’ in common. I say: these are both the same (these are both cows). And someone understanding English would agree, perhaps after adding ‘in some way they are the same, yes’ (implying that in some other ways they are not!).
Now because these things are the same I am allowed to call each of them by the same name: “cow”. The word cow is thus a representation of ‘that which makes both these objects the same’. But it is not precisely defined what the word ‘cow’ actually refers to. So one can have ‘stand-ins’ for ‘somethings’ even when the something itself is ill-defined or even unknown (even nonexistent, as with unicorns). When I equate the picture and the real cow, what I am in fact performing is an abstraction. I can make an abstraction from the real cow, and from the picture, and in the abstraction these two items become one. Or at least they become items that belong to the same ‘abstract class’ of things.
But of course, the cow in the picture is not a real cow (ceci n’est pas une vache: “this is not a cow”). The cow in the picture is a representation of a cow, some cow, a general cow, the idea of a cow, or what have you. The picture, like the word ‘cow’, is more a ‘symbol’ of the ‘concept cow’, whereas the real cow is an exemplar (or: a token of the type).
Still, I could also have performed an abstraction on two real cows in saying ‘these are both the same’. I would still be performing an abstraction, but without there being an external representation involved. Or is that impossible? The question is whether one needs internal representations in order to perform abstractions.
Type 2: Representations of particulars
Suppose now I own a cow. One day I decide to make a painting of this cow on canvas. The next year, my cow dies. I have paintings of all the cows I once owned in my living room, since I am a real cow-lover. Someone asks me: which of the cows did you love most? I immediately point to the third painting from the right: that cow I loved most. Now here’s your classical representation. The painting serves as a stand-in for the real cow, in order to ... (whatever). In this case: I use it to communicate about my feelings and to differentiate a particular cow from other cows.
Type 3: Signals in a communication channel
Suppose now I see a cow on a television screen. The broadcast is live, so when the cow moves (out there) I see it making the same movement (here on the screen), in real time (i.e. semi-real time; there is always a lag). The cow on the screen, that is: the pattern of electronic bursts on the tube, or the illuminated pixels on my TFT, is, of course, not the real cow. But the cow that I see (on the screen) is the real cow. How can this be? The real cow is, one could say, ‘delivered to me’ via an artificial channel. But it doesn’t matter how it is delivered; what matters is what is being delivered. In this case: the real cow. Even if the footage were not live but a replay of happenings hours ago, it would still be the real cow, as it existed hours ago. There is only a technological distance between me and the cow, but this is no different from seeing the cow in real life. Consider me standing in the field with a pair of binoculars. Would one say that what I see in the binoculars is not the real cow? In the latter case the technological distance would be short, perhaps only as thick as the bifocal glasses on my nose or the binoculars in my hand. But whether it is glasses or TV equipment does not make the fundamental difference between real and fake cows. Only the physics of the communication channel is a bit different.
Are the electronic bursts on the tube a ‘representation’ of the cow? I believe not. I am not ‘using’ these electronic bursts at all. Or rather, I am ‘using’ the electronic bursts on the TV tube, but I am using them in precisely the same way that I am using the electronic bursts inside my brain, i.e. the patterns of activity in my retinae, in my optical tract, in my thalamus, in my visual cortex. To say that all of these patterns are representations of the cow is just a matter of definition. I could also say that they are not representations of the cow, merely ‘channels of communication’. All of these signals, inside or outside my brain, are part of the communication channel that connects the cow to ‘me’. If you would care to call these channels ‘representational’, fine. But note that these kinds of representations are qualitatively different from types 1 and 2, since in this case the cow is ‘right there’ with us. The signals are not a ‘stand-in’; they are the connecting medium between something that is ‘still there’ and the cognitive agent.
[Right now I am not discussing what all this means for the definition of “I”/“me”/agent]
Type 4: Biasing signals originating in other channels
Suppose now I drive my car in rural country. It is raining; I can’t see a thing. Or, to be precise, I can see something. Big dark blots of somethings seem to be blocking the road. I cannot see ‘what they are’, though. People? Cars? Then I hear a big MOOOOOOH. Upon second inspection I now see tails wagging, heads turning: several cows are stumbling about in front of my car. Here we have a multichannel situation. The first channel contains a lot of noise. I cannot see the cows, only ‘vague images of somethings’. The second channel, via the auditory nerve, provides me with a signal that biases my visual perception of the world. Suddenly I now do see the cows, because the auditory signal helps me. In terms of Haugeland, the sound of the cow is not (necessarily) a representation of a cow: the animals are right there, I can see them, but I couldn’t have recognized them without the biasing signal. So it is an addition of some sort, not a replacement. In dynamical terms one would say that a dynamical process that is seeking (but not finding) its attractor is pushed over the edge into an attractor valley by an additional forcing variable.
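The attractor image above can be made concrete with a toy simulation. This is my own minimal sketch, not anything from Edelman's paper: a gradient system on a double-well potential, where the two wells stand in for the two percepts ('cows' vs. 'not cows'). Starting exactly on the ridge between the wells, the system cannot settle; a small forcing term (the biasing auditory signal) pushes it into one attractor valley.

```python
def settle(x0: float, bias: float, dt: float = 0.01, steps: int = 20000) -> float:
    """Euler integration of dx/dt = -(x**3 - x) + bias, i.e. gradient flow
    on the double-well potential V(x) = x**4/4 - x**2/2 plus a forcing term.
    The wells at x = -1 and x = +1 are the attractors."""
    x = x0
    for _ in range(steps):
        x += dt * (-(x**3 - x) + bias)
    return x

# On the ridge (x = 0) with no bias, the gradient vanishes: the system is stuck.
print(settle(0.0, 0.0))
# A small biasing signal pushes it over the edge into the x = +1 valley.
print(settle(0.0, 0.1))
```

The noisy visual input alone leaves the process hovering between interpretations; the moo is the small `bias` term that tips it into one basin.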
Afterthoughts:
Just the other day I was thinking: my biggest problem with cognitive science is the problem associated with abstraction. Consider the cognitive process that underlies our utterance: “the picture in the book and the animal in the field are the same thing; they are both cows”. Now people can do this. And in many circumstances it is a useful, functional, intelligent, smart, coherent, appropriate thing to do. But that does not mean that there exists a phenomenon called ‘dealing with cows’, and that the behavior associated with watching a picture in a children’s book is ‘in some way the same as’ the behavior associated with watching a cow in the field. Especially not if, for instance, there happens to be no fence between you and the real cow in the field. Then, suddenly, one realises that real cows are in a lot of ways not the same as pictures of cows in children’s books. Suddenly, one realises that cows are very big animals. Suddenly it becomes extremely important to differentiate between a cow and a bull. In other words: the interaction between you and a real cow is qualitatively different from the interaction between you and a picture of a cow. So although the capacity to abstract away from real cows and pictures onto the abstract class of ‘cow’ is an important cognitive achievement, it is not true that in order to understand how we deal with either real cows or pictures of cows we need to take this abstraction process as the basic cognitive operation that explains our behavior. We do not generally deal with cows on the basis of an abstract concept of cow to which both pictures and real cows belong. We can (e.g. in school, in an intelligence test, in reading this text), but just as often we don’t. The real cognitive structure that explains our behavior in the case of the children’s book is, I think, completely different from the real cognitive structure that explains my behavior when I’m out there in the field facing a real cow.
(To elaborate on that just a little bit: when a child is reading a book with a picture of a cow, I guess the child is not doing anything that is even remotely connected to a real cow. What the child is doing is connecting words, behaviors, and pictures in a learning process. The first connection between the real cow and the picture of the cow is made because other people are connecting the word ‘cow’ to both of these experiences. Thus, the connection is highly artificial, explicitly trained by an external tutor and linguistically mediated. In that sense the picture is more a symbol than a realistic ‘image’ of the thing it refers to. This can hardly be called a ‘fundamental property’; it is more like an advanced specialisation of our cognitive system. The fact that in medieval times pictures of things were attributed all kinds of psychological or magical properties is another case in point: it is apparently only a recent discovery that a picture of something is ‘just a picture of something’.)
Monday, February 26, 2007