Tomorrow a student of mine will present a talk on Searle's famous article Minds, Brains, and Programs. This student and I will probably be among the few in the classroom interested in those kinds of things. It is very theoretical, it is philosophy, it doesn't get you anywhere, it doesn't help you make money. It is just a discussion about the question of whether computers can really *understand* or whether computers are, at best, merely a *model* of the process of understanding. Fell asleep already, did you? Well, be glad you're at home, and not sitting in this classroom tomorrow, having to listen to it and even having to 'form an opinion' and 'discuss the topic with your neighbour'. But just in case you *are* interested in this question, here are some thoughts...
Rereading Searle I was reminded of the discussions that Iris and I have been having about the word "computation". I didn't start this discussion with Iris; I've had similar discussions with lots of people. I clearly remember my Big Talk in Nijmegen, where I presented the results of my internship to my fellow students and teachers. I was claiming (hell, why not?) that cognition is not computational and that it is, instead, best explained by the workings of a nonlinear dynamical system. One of the teachers (Lambert Schomaker) put it to me emphatically: But Jelle, *everything* is computational. How can you say that cognition is not computational? Even the rotation of the earth around the sun is computational! And furthermore, *everything* is a dynamical system! How can you say that cognition is a dynamical system? Even the moon in its orbit around the earth is a dynamical system. So what are you claiming here!?
Let's go back a few steps in time, and start up this discussion by looking inside Pim's old copy of "The Mind's I". Within this book is embedded the famous article by Searle, called Minds, Brains, and Programs. If you already know this article you can skip to the next paragraph.. now. In this article, Searle argues against the possibility of Strong Artificial Intelligence (AI). Strong AI claims that computers can not merely be used as 'models' of cognitive processes; instead, a computer program that performs some function comparable to human competence *is*, as a matter of fact, a cognitive system. To put it simply: if it talks like a duck and walks like a duck, it's a duck. So if you build a computer that can process stories, and give responses to them the same way that I process stories and respond to them, then this computer can be said to really *understand* these stories, just like I *understand* them. To be sure: Searle is against this idea. He says (there follows the Chinese Room thought experiment): Suppose I sit in a closed room, with a large book of rules, and these rules tell me how to create an appropriate response to some linguistic input that is given to me via a small window; then I wouldn't necessarily, by performing these rule-based mappings, come to *understand* what I was doing. But the people outside the room would quickly come to believe that I really understood the input, since I was giving sensible replies. (Searle takes the example of Chinese: suppose you have a book that gives you the procedure for writing an appropriate Chinese response to some Chinese input; you could, by use of the book, fool any native Chinese speaker, without actually understanding anything of Chinese.)
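(If you like code more than thought experiments: here's a toy sketch of the room in Python, entirely my own illustration and not anything Searle wrote. The phrases in the rule book are made up; the point is only that the 'person in the room' is nothing but a lookup, and nothing in the code ever needs to know what the symbols mean.)

```python
# Toy "Chinese Room": incoming symbols are mapped to outgoing symbols by a
# rule book. The procedure never knows what any of the symbols mean.

RULE_BOOK = {
    # hypothetical rules; to the room these are just opaque squiggles
    "ni hao": "ni hao, ni hao ma?",
    "xie xie": "bu ke qi",
}

def room(message: str) -> str:
    """Look the incoming squiggles up and hand back whatever the book prescribes."""
    return RULE_BOOK.get(message, "qing zai shuo yi bian")  # fallback squiggles

print(room("ni hao"))   # looks like a sensible reply to the people outside...
print(room("xie xie"))  # ...but nothing in here *understands* Chinese
```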
One of the responses to his article, which he actually incorporated in the article, is called The Systems Reply. If you already know of this reply you can skip to the next paragraph... now. The Systems Reply says that "understanding is not being ascribed to the mere individual; rather it is being ascribed to this whole system of which he is a part". In other words, *you* didn't understand Chinese, but you, the rule book and everything else you needed in order to do your input-output mappings, taken together, as a system, *did*.
Somewhere in discussing this reply, he says the following, which I would like to quote here in full:
"If strong AI is to be a branch of psychology, then it must be able to distinguish those systems that are genuinely mental from those that are not. It must be able to distinguish the principles on which the mind works from those on which nonmental systems work; otherwise it will offer us no explanations of what is specifically mental about the mental. And the mental-nonmental distinction cannot be just in the eye of the beholder but it must be intrinsic to the systems; otherwise it would be up to any beholder to treat people as nonmental and, for example, hurricanes as mental if he likes. But quite often in the AI literature the distinction is blurred in ways that would in the long run prove disastrous to the claim that AI is a cognitive inquiry. McCarthy, for example, writes. "Machines as simple as thermostats can be said to have beliefs, and having beliefs seems to be a characteristic of most machines capable of problem solving performance" (McCarthy 1979). Anyone who thinks strong AI has a chance as a theory of the mind ought to ponder the implications of that remark. We are asked to accept it as a discovery of strong AI that the hunk of metal on the wall that we use to regulate the temperature has beliefs in exactly the same sense that we, our spouses, and our children have beliefs, and furthermore that "most" of the other machines in the room—telephone, tape recorder, adding machine, electric fight switch—also have beliefs in this literal sense. It is not the aim of this article to argue against McCarthy's point, so I will simply assert the following without argument. The study of the mind starts with such facts as that humans have beliefs, while thermostats, telephones, and adding machines don't. If you get a theory that denies this point you have produced a counterexample to the theory and the theory is false. One gets the impression that people in AI who write this sort of thing think they can get away with it because they don't really take it seriously, and they don't think anyone else will either. I propose, for a moment at least, to take it seriously. Think hard for one minute about what would be necessary to establish that that hunk of metal on the wall over there had real beliefs, beliefs with direction of fit, propositional content, and conditions of satisfaction; beliefs that had the possibility of being strong beliefs or weak beliefs; nervous, anxious, or secure beliefs; dogmatic, rational, or superstitious beliefs; blind faiths or hesitant cogitations; any kind of beliefs. The thermostat is not a candidate. Neither is stomach, liver, adding machine, or telephone. However, since we are taking the idea seriously, notice that its truth would be fatal to strong AI's claim to be a science of the mind. For now the mind is everywhere. What we wanted to know is what distinguishes the mind from thermostats and livers. And if McCarthy were right, strong AI wouldn't have a hope of telling us that."
Now, although I disagree with Searle on many a thing (in fact, I still think The Systems Reply holds, and Searle does not succeed in refuting it, and in the above quote he is being very rhetorical, as always), I think he has a valid point on this 'side issue'. The point, in my words, is that "computation" has two meanings. One meaning of the word computation refers to the idea that all of reality can be described as a dynamical system in which variables are coupled in one way or another, and if some variable 'maps' onto another variable in some reliable way, we could say that a transformation, a.k.a. a computation, has 'occurred'. But this kind of 'computation' has nothing to say about cognition or mental processes. It doesn't even by necessity say anything about the reality of these computational processes, because any physicist who is just a little bit of an instrumentalist at heart will tell you that such dynamical systems and the computations that go on in these systems are *models* of reality, not the real stuff. Nobody would claim that the computer model of an atom, rotating on your PC, *is* actually an atom. The mathematical talk of systems and computations is a *language* in which we communicate scientific ideas. It is not the real thing. (But you could just as well hold that this language is actually referring to something very real that is in one way or another 'just like' that which the language describes.) But the realism/instrumentalism distinction is really not at issue here anyway.
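(To make this first, 'universal' sense concrete before moving on: here's a toy dynamical system in Python, again my own illustration and nothing more than a crude Euler integration of a harmonic oscillator. One state reliably 'maps onto' the next, so in the universal sense a computation has occurred; no mind required.)

```python
# "Computation" in the universal sense: any lawful mapping from one state to
# the next. Here: a crude Euler integration of a harmonic oscillator.

def step(x: float, v: float, dt: float = 0.01, k: float = 1.0) -> tuple[float, float]:
    """One Euler step of x'' = -k * x: the state (x, v) maps onto the next state."""
    return x + v * dt, v - k * x * dt

x, v = 1.0, 0.0
for _ in range(500):
    x, v = step(x, v)
print(x, v)  # a 'computed' state of the model; nothing mental has happened
```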
What's at issue here is that there is another meaning of the word computation, one that refers to cognitive processes exclusively. It states that cognitive processes are 'computational', and by that is meant computational in a very specific sense. This theory tells us that the state of the world (if there is such a thing as 'the state' of 'the world') is 'encoded' by the perceptual system as a perceptual 'representation', and that this representation is 'processed' as a set of symbols internal to the cognitive system. To be precise, it is the brain that physically 'instantiates' these symbols: the activity of the brain represents them, and these symbols interact with one another via a set of 'procedures' (rules, computations), in such a way that the sensory representation of the world is reliably coupled to some 'intelligent' behavioral response.
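(Cartooned in code, once more my own sketch and a hopeless caricature of any real theory, the Strong Computationalist story goes something like this: perception 'encodes' the world into internal symbols, procedures shuffle the symbols around, and an 'intelligent' behavioural response falls out at the other end.)

```python
# Cartoon of Strong Computationalism: perceive -> symbolic representation ->
# rule-based procedures -> behavioural response. (A caricature, of course.)

def perceive(world: dict) -> set:
    """'Encode' the state of the world as a set of internal symbols."""
    symbols = set()
    if world.get("temperature", 20) < 15:
        symbols.add("COLD")
    if world.get("kettle_on"):
        symbols.add("KETTLE_ON")
    return symbols

# procedures that couple perceptual symbols to a behavioural symbol
PROCEDURES = [
    (lambda s: "COLD" in s and "KETTLE_ON" not in s, "SWITCH_ON_KETTLE"),
    (lambda s: "KETTLE_ON" in s, "WAIT_FOR_TEA"),
]

def respond(symbols: set) -> str:
    """Run the procedures over the representation and emit a response."""
    for condition, action in PROCEDURES:
        if condition(symbols):
            return action
    return "DO_NOTHING"

print(respond(perceive({"temperature": 10, "kettle_on": False})))  # SWITCH_ON_KETTLE
```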
Saying that all of nature is computational proves nothing about what I call (in reference to Searle's strong AI) Strong Computationalism, as described above. As Searle says above, "the mental-nonmental distinction cannot be just in the eye of the beholder but it must be intrinsic to the systems; otherwise it would be up to any beholder to treat people as nonmental and, for example, hurricanes as mental if he likes. But quite often in the AI literature the distinction is blurred". In my view, Lambert Schomaker was talking about computation in the 'universal' sense and thereby blurring the distinction between mental and nonmental processes. What I was claiming then, as I still do now, is that mental processes are not 'computational' in the Strong sense.
...
Hey, did you all fall asleep there!?? Wasn't anybody listening to what I was saying??? Ok then, let's take a coffee break!
-=-=-=-
Tuesday, September 19, 2006
3 comments:
I am definitely not an expert. I studied computer science, always interested in AI, but never got closer than some stupid program that could deduce which thing you were thinking about.
Anyway. It is kind of confusing to have the word "computational" carry two meanings: first, getting some result (decision/value/answer) through computations, and second, that even our truth, the things we see, is derived by our own computations (because we ourselves, to capture and comprehend reality, do some transforming: that is a tree, the tree is "green", ...).
And I understand that we want to prove, somehow, whether the mind, the way "we" get results, can be described or modelled in a system that is purely "computational", and by that I mean whether it could be broken down into elementary, basic operations, like those that a computer can execute. Very interesting indeed. Especially because it is actually a very abstract discussion (or seems to be).
But supposing we could prove that it is possible, we could then be looking for the perfect AI.
The question Iris posed I didn't understand at all. Way over my head. According to me you were just getting started; I am interested in the next part. Or did I miss the ball completely?
Okay. Re-read it. If you say, concluding, that cognition is not computational in the *strong* sense, you lose me. I have no idea which kind of computational (of the two you mentioned before) you are then referring to.
Suppose it could be either; then you mean:
1- there is no model that could capture cognition completely, or
2- cognition itself is not a computational process (inside our heads); we get results/answers/comprehension another way. Intuition jumps to mind.
But of course in our trying to describe "cognition", or simulate "cognition", we have to break it down into computational steps.
Then your conclusion means we can't recreate cognition in a computational way. Is there another way? Neural networks? Following connections or something?
Waaaaaah, I am completely out on a limb here. Hope I didn't make a fool of myself. I hope you got a good laugh out of it then :-) Anyway, really interesting, thank you for getting me to think ...
Thank you, Iris! That helps a lot.
I don't know much about neural networks, just that they make decisions or calculate results in jumps, by "connecting neurons". They are computational in the sense that they are executed by a computer (the weak sense), but I thought of something else too.
In the Systems Reply, somebody thinks up a set of rules so that, given some input, the wanted output is achieved. While the "computational" (non-mental) part merely follows this set of rules, the system will seem to "understand". But somebody has to devise the rules. Whereas with neural networks a self-learning system is created: nobody has to define a limiting set of rules, and through the learning it should or could make the same type of jumps our mind can. Maybe I give it too much credit, maybe it is not all that sophisticated :-)
Somehow the conclusion that cognition is not computational, so it can't be reproduced in a non-mental process, seems too simple.
Although, reversing it, assuming we could "make" something mechanical with the same "cognitive abilities" as a human seems equally far-fetched; but that shouldn't mean it should be discarded too easily, no?
On the other hand: who needs mechanical versions of men? Although now my manga history comes into play. If I could put my brain, or a mechanical version of my brain, my "ghost" as they so aptly call it, inside a machine, I could become nearly immortal.
Otherwise we wouldn't be looking for copies, with emotions, feelings, chaos, ... which are only likely to introduce errors. Like empathy in a device that guards the entrance. "Poor thing, forgot your entrance card again?"
I am afraid I am digressing :-)