When I was teaching at MIT in the 1960s, students from the Artificial Intelligence Laboratory would come to my Heidegger course and say in effect: *‘‘You philosophers have been reflecting in your armchairs for over 2000 years and you still don’t understand intelligence. We in the AI Lab have taken over and are succeeding where you philosophers have failed.’’* But in 1963, when I was invited to evaluate the work of Allen Newell and Herbert Simon on physical symbol systems, I found to my surprise that, far from replacing philosophy, these pioneering researchers had learned a lot, directly and indirectly, from us philosophers: ==e.g., Hobbes’ claim that reasoning was calculating, Descartes’ mental representations, Leibniz’s idea of a ‘universal characteristic’ (a set of primitives in which all knowledge could be expressed), Kant’s claim that concepts were rules, Frege’s formalization of such rules, and Wittgenstein’s postulation of logical atoms in his *Tractatus*.==

> In short, without realizing it, AI researchers were hard at work turning rationalist philosophy into a research program.

But I began to suspect that the insights formulated in existentialist armchairs, especially Heidegger’s and Merleau-Ponty’s, were bad news for those working in AI laboratories—that, by combining representationalism, conceptualism, formalism, and logical atomism into a research program, AI researchers had condemned their enterprise to reenact a failure.

---

Using Heidegger as a guide, I began looking for signs that the whole AI research program was degenerating. I was particularly struck by the fact that, among other troubles, researchers were running up against the problem of representing significance and relevance—a problem that Heidegger saw was implicit in Descartes’ understanding of the world as a set of meaningless facts to which the mind assigned values, which John Searle now calls function predicates. Heidegger warned that values are just more meaningless facts. To say a hammer has the function of hammering leaves out the defining relation of hammers to nails and other equipment, to the point of building things, to the skill required in actually using a hammer, etc.—all of which Heidegger called ‘‘readiness-to-hand’’—so attributing functions to brute facts couldn’t capture the meaningful organization of the everyday world, and so missed the way of being of equipment. ‘‘By taking refuge in ‘value’-characteristics,’’ Heidegger said, ‘‘we are...far from even catching a glimpse of being as readiness-to-hand’’ (Heidegger, 1962, pp. 132–133).

The head of MIT’s AI Lab, Marvin Minsky, unaware of Heidegger’s critique, was convinced that representing a few million facts about objects, including their functions, would solve what had come to be called the commonsense knowledge problem. It seemed to me, however, that the real problem wasn’t storing millions of facts; it was knowing which facts were relevant in any given situation. One version of this relevance problem is called the ‘frame problem.’ If the computer is running a representation of the current state of the world and something in the world changes, how does the program determine which of its represented facts can be assumed to have stayed the same, and which might have to be updated?
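To make the difficulty concrete, here is a minimal sketch, not drawn from the text, of the predicament a GOFAI-style program faces: the world is stored as a set of facts, and after any change the program itself must decide which stored facts still hold. The `World` alias and the `update`, `might_be_affected`, and `recompute` functions are hypothetical illustrations, not any actual system.

```python
# Hypothetical illustration of the frame problem: a program that keeps an
# internal set of facts about the world must decide, after every change,
# which facts still hold and which must be revised.

World = set  # here a "world model" is just a set of fact strings

facts: World = {
    "cup is on the table",
    "table is in the kitchen",
    "kitchen light is on",
    "hammer is in the drawer",
}

def might_be_affected(fact: str, event: str) -> bool:
    # Placeholder relevance test: the fact shares a word with the event.
    # Any such fixed test either misses real changes or flags nearly
    # everything; this is the relevance problem in miniature.
    return bool(set(fact.split()) & set(event.split()))

def recompute(fact: str, event: str) -> World:
    # Placeholder: a real system would consult yet more stored rules
    # about how events change facts. Here we simply drop the fact.
    return set()

def update(facts: World, event: str) -> World:
    """After an event, decide which stored facts to keep or revise.

    The hard part is not storage but relevance: nothing in the data
    structure itself says which facts an event can possibly affect,
    so in the worst case every fact must be re-examined.
    """
    revised: World = set()
    for fact in facts:
        if might_be_affected(fact, event):      # which facts matter here?
            revised |= recompute(fact, event)   # re-derive or drop the fact
        else:
            revised.add(fact)                   # assume it stayed the same
    return revised

facts = update(facts, "the table is moved to the hallway")
```

The only point of the sketch is that the relevance test has to be written by the programmer in advance, and the rules it relies on are themselves just more stored facts, which is the shape of the difficulty described above.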
As Michael Wheeler puts it in *Reconstructing the Cognitive World*:

> Given a dynamically changing world, how is a nonmagical system...to take account of those state changes in that world...that matter, and those unchanged states in that world that matter, while ignoring those that do not? And how is that system to retrieve and (if necessary) to revise, out of all the beliefs that it possesses, just those beliefs that are relevant in some particular context of action? (Wheeler, 2005, p. 179)

Minsky suggested as a solution that AI programmers could use descriptions of typical situations, like going to a birthday party, to list and organize those, and only those, facts that were normally relevant. He suggested a structure of essential features and default assignments—a structure Husserl had already proposed and called a ‘‘frame’’ (Husserl, 1973, p. 38).^1 But a system of frames isn’t *in* a situation, so in order to identify the possibly relevant facts in the current situation one would need a frame for recognizing that situation, and so on. It thus seemed to me obvious that any AI program using frames was going to be caught in a regress of frames for recognizing relevant frames for recognizing relevant facts, and that, therefore, the commonsense knowledge storage and retrieval problem wasn’t just a problem; it was a sign that something was seriously wrong with the whole approach.

---

Unfortunately, what has always distinguished AI research from a science is its failure to face up to, and learn from, its failures. To avoid the relevance problem, AI programmers in the 1960s and early 1970s limited their programs to what they called ‘micro-worlds’—artificial situations in which the small number of features that were possibly relevant was determined beforehand. It was assumed that the techniques used to construct these micro-worlds could be made more realistic and generalized to cover commonsense knowledge—but there were no successful follow-ups, and the frame problem remains unsolved. John Haugeland argues that symbolic AI has failed and refers to it as ‘‘Good Old Fashioned AI’’ (GOFAI). That name has been widely accepted as capturing symbolic AI’s current status. Michael Wheeler goes further, arguing that a new paradigm is already taking shape: ‘‘A Heideggerian cognitive science is...emerging right now, in the laboratories and offices around the world where embodied-embedded thinking is under active investigation and development’’ (Wheeler, 2005, p. 285).

Wheeler’s well-informed book could not have been more timely, since there are now at least three versions of supposedly Heideggerian AI that might be thought of as articulating a new paradigm for the field: Rodney Brooks’ behaviorist approach at MIT, Phil Agre’s pragmatist model, and Walter Freeman’s dynamic neural model. All three approaches accept Heidegger’s critique of Cartesian internalist representationalism and instead embrace John Haugeland’s slogan that cognition is ‘‘embedded and embodied’’ (Haugeland, 1998).

## Heideggerian AI, Stage One: Eliminating Representations by Building Behavior-Based Robots

Winograd (1989) notes the irony in the MIT AI Lab’s becoming a cradle of ‘‘Heideggerian AI’’ after its initial hostility to my presentation of these ideas (as cited in Dreyfus, 1992, p. xxxi). Here’s how it happened.
In March 1986, the MIT AI Lab under its new director, Patrick Winston, reversed Minsky’s attitude toward me and allowed, if not encouraged, several graduate students to invite me to give a talk I called ‘‘Why AI Researchers should study *Being and Time*.’’ There I repeated the Heideggerian message of my *What Computers Can’t Do*: ‘‘The meaningful objects...among which we live are not a *model* of the world stored in our mind or brain; *they are the world itself*’’ (Dreyfus, 1972, pp. 265–266).

The year of my talk, Rodney Brooks published a paper criticizing the GOFAI robots that used representations of the world and problem-solving techniques to plan their movements. He reported that, based on the idea that ‘‘the best model of the world is the world itself,’’ he had ‘‘developed a different approach in which a mobile robot uses the world itself as its own representation—continually referring to its sensors rather than to an internal world model’’ (Brooks, 1997b, p. 416). Looking back at the frame problem, he says: ‘‘And why could my simulated robot handle it? Because it was using the world as its own model. It never referred to an internal description of the world that would quickly get out of date if anything in the real world moved’’ (Brooks, 2002, p. 42).
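Brooks’ slogan that ‘‘the best model of the world is the world itself’’ can be caricatured in a few lines: instead of maintaining and updating an internal description of the world, a behavior-based controller re-reads its sensors on every cycle and reacts. The sketch below is a hypothetical stand-in for that style of control loop, assuming made-up hardware hooks (`read_sonar`, `turn_away`, `drive_forward`); it is not Brooks’ actual subsumption-architecture code.

```python
# Hypothetical caricature of a behavior-based control loop in the spirit of
# Brooks' robots: no internal world model is stored or updated; the robot
# consults its sensors afresh on every tick and reacts.

import random

def read_sonar() -> float:
    """Stand-in for a sonar reading: distance to the nearest obstacle in meters."""
    return random.uniform(0.1, 5.0)

def turn_away() -> None:
    print("obstacle close: turning away")

def drive_forward() -> None:
    print("path clear: driving forward")

def control_step() -> None:
    # Because the world is re-sensed each cycle, there is no stored
    # description that could get out of date if something in the real
    # world moved; the frame problem is sidestepped rather than solved.
    if read_sonar() < 0.5:   # reactive rule 1: avoid nearby obstacles
        turn_away()
    else:                    # reactive rule 2: otherwise keep moving
        drive_forward()

if __name__ == "__main__":
    for _ in range(5):       # a few control cycles
        control_step()
```

The design choice the sketch illustrates is simply the replacement of an updatable internal description with fresh sensing on each cycle, which is what lets Brooks claim that nothing stored can get out of date.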