Cognitive psychology is often introduced by asking questions like: “How do we perceive the objects in our environment?” This question is then translated into slightly more technical terms as: “How does the brain create a meaningful picture on the basis of the light-rays that stimulate our retina?” The process that makes pictures from light-rays is called the process of visual perception. Instead of discussing the various theories that try to explain what the mechanism behind this process might be, I would like to question the question itself.
In my view, the question itself is seriously flawed, resting on three errors of thought. I will call these errors the observer error, the system error and the metaphorical error. (Perhaps better names can be found, or have already been found elsewhere, since I have not made the effort of doing a literature search on this topic.)
The observer error
The first thing to note about asking questions like the one above is that it involves asking for an explanation of something that might not even exist in the first place. We ask: how do we create meaningful pictures in our head? Well, perhaps we don’t create such pictures, and then the entire question becomes empty. Let’s think about the question from this point of view and try to analyse how we come to take for granted that ‘making a picture in your head’ is a valid starting point for investigation. I seem to know that I create pictures in my head (hey, I do see, right?). But how do I know that *you* create these meaningful pictures? For an intuitive observer, it might seem an obvious fact that human beings create a picture of the world inside their heads. But there is no observable evidence for this on the outside. Note that all research on visual perception is *based* on first asking the above question. The assumption that visual perception is some kind of information processing mechanism that should produce, as output, *a picture* (an image, a pattern, "the thing that we see", or what have you), is taken for granted. It is a starting point. This means that the further empirical evidence coming from research on visual perception cannot be counted as evidence for the existence of the phenomenon-to-be-explained. The evidence merely has something to say on the question of what the mechanism for visual perception is, *assuming* that it is something that produces a "picture" in our heads.
Now why is it so difficult to believe that we are *not* creating pictures in our head? This has to do with the fact that researchers are always human beings; we ourselves are in the business of visual perception, and we are consciously aware of this ‘fact’. Our personal reflection on how visual perception *is experienced* gets conflated with the scientific, observable definition of the phenomenon that needs to be explained. Now, there is nothing wrong with taking a conscious experience as a phenomenon to be explained, but we should be clear about the status of the phenomenon. So what we should ask instead is: how is the conscious experience of a meaningful, visual picture of the world created? This is a different question. All options are open. We could, for instance, claim that the meaningful picture is just an illusion of our consciousness and that the sensory input on our retina does nothing to create such a picture in reality. We dream our way through life, one could say. We could also claim that the visual input does cause coherent pictures to arise in consciousness, but that this is not very *interesting*, because the function of visual input is mainly to directly constrain behavioral output, and the coherent pictures and patterns, i.e., the "seeing" that we experience, are only an after-effect, a side issue.
Whatever the answer, the observer error states that we should not conflate our personal experience of the phenomenon with the scientific (observable) definition of the phenomenon. In the scientific definition, all we have is either a physical process (light falling on the retina) or a conscious living being *reporting* that he "sees things". There is an intuitive but dangerous habit of automatically assuming some information-processing device (a ‘machine’ that processes the light on our retina to produce the consciously experienced picture in our head) posited in between the physical and the conscious, phenomenal levels. But this device is not a real phenomenon at all; it is a theory in itself, and it is a theory that could be false.
The system error
This one has to do with time, and with assumptions about the direction of the flow of events in the phenomenon to be explained. When we ask how the light on our retina gets to be transformed into a meaningful picture of the world, it is automatically assumed that *first* there is light on the retina, and only then is this light transformed, step by step, into a meaningful picture. The flow of events, through time, is linear (sequential, procedural), starting with the light out in the world (what is sometimes called one of the 'sensory qualities') and ending with the conscious visual experience. The metaphorical comparison with a machine that receives input and produces output, going through a series of sequential steps, is easily made. This picture of process flow is ubiquitous in college textbook explanations of the structure and processes of the brain. It is how I tell it to my students: light hits a dog, reflects, hits our retina, excites nerve cells, which excite further nerve cells, which set in motion a train of nerve impulses that travel from the eye through the optic chiasm, into the lower brain areas, into the cortex, starting in V1, splitting up into the dorsal (towards the top of your head) "where system" (where is the dog?) and the ventral (along the sides of your head) "what system" (it's a dog!). Higher up in the system, somehow all visual information gets to be "integrated", producing a coherent picture of a dog. Then you get to shout "hey, it's a dog over there", or something like that. But this picture is a caricature of what is really happening, and I’ll tell you why.
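Just to make the caricature explicit: if you write the textbook story down as a procedure, you get something like the following toy sketch (the stage names are just the textbook labels; the functions are placeholders I made up, nothing more):

```python
# A deliberately naive sketch of the textbook story: a strictly sequential
# pipeline in which each stage only runs after the previous one has finished.
# The stages are placeholders, not models of anything.

def retina(light):
    return {"excitation": light}               # light excites retinal cells

def v1(signal):
    return {"features": signal["excitation"]}  # early cortex extracts 'features'

def dorsal_where(features):
    return "over there"                        # 'where' stream

def ventral_what(features):
    return "dog"                               # 'what' stream

def integrate(where, what):
    return f"Hey, it's a {what} {where}!"      # the 'coherent picture'

def perceive(light):
    signal = retina(light)
    features = v1(signal)
    return integrate(dorsal_where(features), ventral_what(features))

print(perceive("light reflected off a dog"))
```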
For starters, in the brain, activity runs in more than one direction, since every nerve cell is heavily connected through both feedforward and feedback connections. So, no matter which nerve cell in the ‘stream’ we take as a starting point, activity then spreads both upwards and downwards at the same time. It is one thing to acknowledge that parallel processing takes place in a system (as everybody does); it is another thing to really think through the causal consequences. Consider that the phenomenon of me observing a dog takes some time in itself. I see something that will turn out to ‘be’ a dog, I continue to look at it, and in the continuous process of seeing this thing I see more and more of the dog. I might at first see that it is a brown living animal, even before I see that it is a dog. But seeing that it is a brown living animal suggests that the complete visual stream is already activated (producing the brown-living-animal experience, right?). That means that before we actually recognize the dog as a dog, massively parallel processing is already going on. So even before we ‘recognize’ the dog as being a dog, thousands and thousands of neural impulses are already shooting upwards and downwards all over the system. What does it then mean to say that we “process the sensory input and thereby recognize the dog”? It means nothing, because the recognition of the dog might just as well be attributed to the feedback activity from the cortex back to the eye as to the activity that goes from the eye to the cortex. We might even consider the possibility that the observer just wanted to see the dog, which places the causal origin of the perception of the dog more or less completely inside the observer’s head. The whole process didn’t start with light falling on the dog at all; it started with my own mental activity!
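Contrast that with an equally minimal sketch of what massive bidirectional connectivity implies (again a toy, not a model of any real circuit; the stages and weights are invented). Inject activity at the 'retina' end and within a couple of steps every stage is sending activity both upwards and downwards at once, so there is no clean moment at which 'the input has been processed':

```python
import numpy as np

# Toy chain of stages with feedforward *and* feedback connections.
n_stages = 5                           # hypothetical stages: retina ... cortex
activity = np.zeros(n_stages)
activity[0] = 1.0                      # light arrives at the first stage

w_self, w_ff, w_fb = 0.2, 0.5, 0.3     # invented self, feedforward, feedback weights

for step in range(6):
    new = np.zeros(n_stages)
    for i in range(n_stages):
        new[i] += w_self * activity[i]              # local persistence
        if i > 0:
            new[i] += w_ff * activity[i - 1]        # feedforward from the stage below
        if i < n_stages - 1:
            new[i] += w_fb * activity[i + 1]        # feedback from the stage above
    activity = new
    print(step, np.round(activity, 3))              # every stage active, in both directions
```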
I do not wish to claim that visual perception is a kind of dreaming that has nothing to do with what happens outside of us. But what I do want to conclude here is that visual perception is not, at least not necessarily, a sequential process that starts out there and ends in here. In the brain, multiple parallel processes run upwards, downwards and sideways through a massively connected network. What a gross simplification, then, to tell our students that recognition of a dog is a five-step procedure that hops from the eye to the temporal lobe, and that that's all there is to it.
The metaphorical error
I will try to be quick about this one because my space is running out here. The metaphorical error is related to the errors above; it encompasses them, I guess. It refers to the idea that in asking the question “How does our brain create a meaningful picture on the basis of visual input?”, we automatically and implicitly apply a metaphor to human cognition that might be false. The metaphor is of course the metaphor of the information-processing machine. A real, physical machine, that is, comparable in the strong sense to other mechanical devices such as telephone-communication systems, bicycles, trains, and computers. (So I do not mean a machine in the general sense, in which all processes are by definition ‘machines’.) If visual perception is a process that is executed by a machine, then this machine must automatically have some input, and what better input to take than the light that falls on the retina? And if this machine is to have some output, then what better candidate might there be than our ‘visual thoughts’, the visual experiences we have when we look at, see, observe the world out there? But there is no proof whatsoever that visual perception is such a machine, or is explained by such a machine. In fact, it is a non-machine in the first place, since it has physical stuff as input and meaningful information as output. Such a machine cannot exist. The metaphor is a smokescreen that obscures the fact that between light on our retina and meaningful pictures in our head lies a complete mind-body problem (how to get from stuff to ideas), a problem that has not been solved. You cannot just imagine a machine to solve that problem; the problem is fundamental. We probably need a completely different conception of cognition in general in order to solve it.
Ok
So, Jelle, if this is all wrong then, what is your own idea, what is visual perception if it is not the machine you just discarded?
Well, I don’t know, of course. But I do have some hints, some questions to ask. I will pose them here as my final thoughts:
• What comes first, seeing, or acting? When I move my eye, is that considered to be a response, or the active search for a new stimulus?
• What is the goal, the utility, of visual perception? Surely a device as complex as the visual system (the physical stuff, that is: eye, brain, and so on) did not evolve in order to create pictures inside our head. Could it be that the adaptive value of visual perception has more to do with sustaining a satisfactory relation with our environment than with just seeing the environment?
• Is there really a difference between the inner eye (imagination/dreaming, and so on) and ‘real seeing’? Or is the perceived difference between the two phenomena itself an illusion we create ourselves?
• Who is doing the seeing? We automatically ask: how do people... (followed by some cognitive function). But what is this ‘people’ we speak of? Is it my brain that sees? Is it the person Jelle? Is it the Jelle that other people speak about in their native language? Is it the Jelle you just imagined writing this short essay? As long as we haven’t solved the problem of what a human being is (and at which explanatory level(s) he ‘exists’), how can we answer such a question?
• Should we explain behavior, or the mind? Is the mind really a phenomenon, or is it already in itself a theory (as Churchland would have it) that might be false?
Although I didn’t set out to end up where I am now, all these questions seem to run directly towards the good old mind-body problem again. Time to stop, I guess! If you want to read more about this famous problem in philosophy, you can do so here. See you next time!
Tuesday, September 26, 2006
Wednesday, September 20, 2006
Monsieur Tan
So everybody please meet Monsieur Tan:
(The below text is from a comment I wrote to Sander in my other blog)
"...aphasia is a neuropsychological disorder where the patient specifically loses the ability to speak (there are various forms, of course). The most famous example is "Monsieur Tan", a patient of a doctor called Paul Broca, back in 18-something. This patient could not say anything but the word 'tan' (with full intonation suggesting sentences with different words and meanings, but the output was just that: tan tan tan tan). Broca hypothesized that the disorder was due to the fact that a specific area of the brain was damaged. When opening up mr tan after his death, Broca indeed found that a special area ("Broca's area") was damaged.
Well, this discovery just about started the whole of cognitive neuroscience and neuropsychology.
Broca's original article can be found here"
Tuesday, September 19, 2006
The systems reply
Tomorrow a student of mine will present a talk on Searle's famous article Minds, Brains, and Programs. This student and I will probably be among the few in the classroom interested in this kind of thing. It is very theoretical, it is philosophy, it doesn't get you anywhere, it doesn't help you make money. It is just a discussion about the question of whether computers can really *understand*, or whether computers are, at best, merely a *model* of the process of understanding. Fell asleep already, did you? Well, be glad you're at home, and not sitting in this classroom tomorrow, having to listen to it and even having to 'form an opinion' and 'discuss the topic with your neighbour'. But just in case you *are* interested in this question, here are some thoughts...
Rereading Searle, I was reminded of the discussions Iris and I have been having about the word "computation". This discussion didn't start with Iris; I've had similar discussions with lots of people. I clearly remember my Big Talk in Nijmegen, where I presented the results of my internship to my fellow students and teachers. I was claiming (hell, why not?) that cognition is not computational and that it is, instead, best explained by the workings of a nonlinear dynamical system. One of the teachers (Lambert Schomaker) put it to me emphatically: but Jelle, *everything* is computational. How can you say that cognition is not computational? Even the rotation of the earth around the sun is computational! And furthermore, *everything* is a dynamical system! How can you say that cognition is a dynamical system? Even the moon in its orbit around the earth is a dynamical system. So what are you claiming here!?
Let's go back a few steps in time and start up this discussion by looking inside Pim's old copy of "The Mind's I". Embedded within this book is the famous article by Searle, called Minds, Brains, and Programs. If you already know this article you can skip to the next paragraph.. now. In this article, Searle argues against the possibility of Strong Artificial Intelligence (AI). Strong AI claims that computers can be used not merely as 'models' of cognitive processes; rather, a computer program that performs some function comparable to human competence *is*, as a matter of fact, a cognitive system. To put it simply: if it talks like a duck and walks like a duck, it's a duck. So if you build a computer that can process stories and give responses to them the same way that I process stories and respond to them, then this computer can be said to really *understand* these stories, just as I *understand* them. To be sure: Searle is against this idea. He says (here follows the Chinese Room thought experiment): suppose I sit in a closed room with a large book of rules, and these rules tell me how to create an appropriate response to some linguistic input that is given to me through a small window; then I wouldn't necessarily, by performing these rule-based mappings, come to *understand* what I was doing. But the people outside the room would quickly come to believe that I really understood the input, since I was giving sensible replies. (Searle takes the example of Chinese: suppose you have a book that gives you the procedure for writing an appropriate Chinese response to some Chinese input; you could, by using the book, fool any native Chinese speaker without actually understanding anything of Chinese.)
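The rulebook itself can be caricatured in a few lines of code (a toy sketch; the 'rules' below are ones I made up, not anything from Searle): the program maps input strings to output strings by pure lookup, and nothing in it could sensibly be said to understand the exchange.

```python
# A toy 'Chinese room' rulebook: purely syntactic input-to-output mappings.
# The rules are invented for illustration; the point is only that producing a
# sensible-looking reply requires no understanding whatsoever.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "天气很好。",    # "What's the weather like?" -> "The weather is nice."
}

def chinese_room(input_symbols: str) -> str:
    # Look the symbols up; fall back to a stock reply for unknown input.
    return RULEBOOK.get(input_symbols, "对不起，请再说一遍。")  # "Sorry, please say that again."

print(chinese_room("你好吗？"))
```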
One of the responses to his article, which he actually incorporated into the article, is called The Systems Reply. If you already know of this reply you can skip to the next paragraph... now. The systems reply says that "understanding is not being ascribed to the mere individual; rather it is being ascribed to this whole system of which he is a part". In other words, *you* didn't understand Chinese, but you, the rulebook, and everything else you needed in order to do your input-output mappings, taken together as a system, *did*.
Somewhere in discussing this reply, he says the following, which I would like to quote here in full:
"If strong AI is to be a branch of psychology, then it must be able to distinguish those systems that are genuinely mental from those that are not. It must be able to distinguish the principles on which the mind works from those on which nonmental systems work; otherwise it will offer us no explanations of what is specifically mental about the mental. And the mental-nonmental distinction cannot be just in the eye of the beholder but it must be intrinsic to the systems; otherwise it would be up to any beholder to treat people as nonmental and, for example, hurricanes as mental if he likes. But quite often in the AI literature the distinction is blurred in ways that would in the long run prove disastrous to the claim that AI is a cognitive inquiry. McCarthy, for example, writes. "Machines as simple as thermostats can be said to have beliefs, and having beliefs seems to be a characteristic of most machines capable of problem solving performance" (McCarthy 1979). Anyone who thinks strong AI has a chance as a theory of the mind ought to ponder the implications of that remark. We are asked to accept it as a discovery of strong AI that the hunk of metal on the wall that we use to regulate the temperature has beliefs in exactly the same sense that we, our spouses, and our children have beliefs, and furthermore that "most" of the other machines in the room—telephone, tape recorder, adding machine, electric fight switch—also have beliefs in this literal sense. It is not the aim of this article to argue against McCarthy's point, so I will simply assert the following without argument. The study of the mind starts with such facts as that humans have beliefs, while thermostats, telephones, and adding machines don't. If you get a theory that denies this point you have produced a counterexample to the theory and the theory is false. One gets the impression that people in AI who write this sort of thing think they can get away with it because they don't really take it seriously, and they don't think anyone else will either. I propose, for a moment at least, to take it seriously. Think hard for one minute about what would be necessary to establish that that hunk of metal on the wall over there had real beliefs, beliefs with direction of fit, propositional content, and conditions of satisfaction; beliefs that had the possibility of being strong beliefs or weak beliefs; nervous, anxious, or secure beliefs; dogmatic, rational, or superstitious beliefs; blind faiths or hesitant cogitations; any kind of beliefs. The thermostat is not a candidate. Neither is stomach, liver, adding machine, or telephone. However, since we are taking the idea seriously, notice that its truth would be fatal to strong AI's claim to be a science of the mind. For now the mind is everywhere. What we wanted to know is what distinguishes the mind from thermostats and livers. And if McCarthy were right, strong AI wouldn't have a hope of telling us that."
Now, although I disagree with Searle on many a thing (in fact, I still think The Systems Reply holds and that Searle does not succeed in refuting it, and in the above quote he is being very rhetorical, as always), I think he has a valid point on this 'side issue'. The point, in my words, is that "computation" simply has two meanings. One meaning of the word refers to the idea that all of reality can be described as a dynamic system in which variables are coupled in one way or another, and if some variable 'maps' onto another variable in some reliable way, we can say that a transformation, i.e. a computation, has 'occurred'. But this kind of 'computation' has nothing to say about cognition or mental processes. It doesn't even necessarily say anything about the reality of these computational processes, because any physicist who is just a little bit of an instrumentalist at heart will tell you that such dynamic systems, and the computations that go on in them, are *models* of reality, not the real stuff. Nobody would claim that the computer model of an atom, rotating on your PC, *is* actually an atom. The mathematical talk of systems and computations is a *language* in which we communicate scientific ideas. It is not the real thing. (But you could just as well hold that this language actually refers to something very real that is in one way or another 'just like' that which the language describes.) The realism/instrumentalism distinction is really not at issue here anyway.
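To make the 'universal' sense concrete (a sketch under obvious simplifications, with rounded constants): the moon's orbit can be written down as a coupled dynamical system and stepped forward numerically, and in that trivial sense the moon 'computes' its next position. Nobody takes this to be a claim about the moon's mental life.

```python
import math

# Crude Euler integration of the moon around the earth (rounded constants, sketch only).
GM = 3.986e14            # gravitational parameter of the earth, m^3/s^2
x, y = 3.844e8, 0.0      # initial position, m
vx, vy = 0.0, 1022.0     # initial velocity, m/s
dt = 3600.0              # time step: one hour, in seconds

for _ in range(24):      # step through one day
    r = math.hypot(x, y)
    ax, ay = -GM * x / r**3, -GM * y / r**3   # position 'maps onto' acceleration
    vx, vy = vx + ax * dt, vy + ay * dt       # acceleration 'maps onto' velocity
    x, y = x + vx * dt, y + vy * dt           # velocity 'maps onto' position

print(round(x / 1e8, 3), round(y / 1e8, 3))   # the new state: a 'computation' only in the trivial sense
```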
What's at issue here is that there is another meaning of the word computation that refers to cognitive processes exclusively. It states that cognitive processes are 'computational', and by that is meant computational in a very specific sense. This theory tells us that the state of the world (if there is such a thing as 'the state' of 'the world') is 'encoded' by the perceptual system as a perceptual 'representation', and that this representation is 'processed' as a set of symbols internal to the cognitive system. To be precise, it is the brain that physically 'instantiates' these symbols. The activity of the brain represents them, and these symbols interact with one another via a set of 'procedures' (rules, computations), in such a way that the sensory representation of the world is reliably coupled to some 'intelligent' behavioral response.
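Written out as a sketch, Strong Computationalism is the claim that something like the following toy pipeline (the symbols and rules are invented, purely for illustration) is literally what cognition is: encode the world into internal symbols, manipulate the symbols by rules, output behavior.

```python
# Toy encode -> process -> act pipeline; the symbols and rules are invented.

def encode(world_state: str) -> list:
    # The 'perceptual system' turns the world into internal symbols.
    return world_state.upper().split()

def process(symbols: list) -> str:
    # 'Procedures' (rules) operating over the symbols.
    if "DOG" in symbols and "APPROACHING" in symbols:
        return "RETREAT"
    return "CONTINUE"

def act(command: str) -> str:
    # The symbols get reliably coupled to a behavioral response.
    return f"behavioral response: {command.lower()}"

print(act(process(encode("dog approaching fast"))))
```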
Saying that all of nature is computational proves nothing about what I call, in reference to Searle's strong AI, Strong Computationalism, as described above. As Searle says above, "the mental-nonmental distinction cannot be just in the eye of the beholder but it must be intrinsic to the systems; otherwise it would be up to any beholder to treat people as nonmental and, for example, hurricanes as mental if he likes. But quite often in the AI literature the distinction is blurred". In my view, Lambert Schomaker was talking about computation in the 'universal' sense and thereby blurring the distinction between mental and nonmental processes. What I was claiming then, and still claim now, is that mental processes are not 'computational' in the Strong sense.
...
Hey, did you all fall asleep there!?? Wasn't anybody listening to what I was saying??? Ok then, let's take a coffee break!
-=-=-=-
Searle's famous article Minds Brains and Programs
..