Saturday, July 8, 2023

The Search For A Logical Robot: Critical Remarks On Artificial Intelligence

The recent history of technological development has raised significant questions about the possibility of genuine artificial intelligence. Many scientists and philosophers (along with the general public) have become increasingly convinced that it will someday be possible to construct a being capable of thinking and reasoning in the same ways that human beings do. Beyond the familiar worries about the practical risks and metaphysical implications of such a being, these speculations have also led many to reflect on how such a being could be identified, were one to emerge.

These reflections have received further encouragement from the advent of large language models such as ChatGPT, which emulate human speech patterns with astonishing accuracy. Indeed, some computer scientists are already persuaded that ChatGPT is a genuine rational subject that deserves to be treated as such. But to what extent are these convictions justified? To answer this question, we must first investigate what the appropriate criteria for identifying the emergence of genuine artificial intelligence could be. The success of this investigation, in turn, will depend on a proper conception of intelligence as such.

In a recent article, Jensen Suther surveys contemporary philosophical thought on artificial intelligence and highlights several plausible requirements that any intelligent being must satisfy. One of these requirements, famously defended by Hubert Dreyfus, holds that any genuine intelligence must be embodied. One core element of genuinely intelligent behavior is the ability to modify one's behavior in light of success or failure. If a being has no capacity to receive data, act in light of that data, and adjust its behavior in light of the results of its activity, then in what sense can it be said to be reasoning at all? If we accept that this capacity for informed behavioral modification is a real requirement for intelligence, there are two important implications. First, any genuinely intelligent being must have some sort of body capable of behaving in certain ways and having that behavior modified in light of incoming data. Second, that being must have some determinate goal that informs how it modifies its behavior in light of incoming data. This latter feature is just as necessary as the former: in the absence of a goal, there would be nothing to guide the being's behavioral modification and, consequently, no basis for distinguishing intelligent behavioral modification from unintelligent modification. Together, we can call these two requirements "the embodiment criterion".

However, even if we accept the embodiment criterion for genuine intelligence, there is another key feature that any intelligent being must possess: intelligent beings, by thinking, must be capable of determining their own activity as embodied beings. We can call this the "self-determination criterion". On the face of it, this criterion seems straightforward enough: by thinking and reasoning, intelligent creatures can decide what they should do in light of what is the case and act accordingly. However, in what sense can an agent be genuinely self-determining if it cannot determine the principles that govern its own acts of thinking and reasoning? If an artificial being is pre-programmed with rules for thinking, or with methods for forming such rules, then its "rational" procedures are determined by a source wholly external to it. If it is not pre-programmed with rules or rule-forming methods, then it is hard to see how it can be said to have rule-governed behavior at all. In the former case, the agent is governed by external constraints in a way that is incompatible with genuine intelligence; in the latter case, it is unconstrained in a way that is equally incompatible.

For human beings, a solution to this dilemma is possible. When human beings think, they place themselves under a shared set of rules that characterize the thought of the intellectual community to which they belong. By taking on the responsibility to think in accordance with these rules, thinkers allow the rules to govern their particular acts of thinking without those rules having been pre-programmed into them by any external source. To the extent that the rules governing human thinking are established by a community, human thinking can be said to be a socially constituted phenomenon whose norms are self-imposed by individual thinkers. Furthermore, human thinkers remain genuinely self-determining insofar as they collectively shape the public rules that govern them by reasoning with one another.

However, this solution to the dilemma posed by self-determination is not obviously available in the case of artificial intelligence. Human beings can defer to socially instituted rules of thought because they recognize one another as potential thinkers who can be treated as rule-governed subjects. It is because they are recognized as potential thinkers that they can place themselves under the rules of their intellectual community and be held responsible to them. But in the case of artificial agents, this potential for intelligence is precisely what is at issue. Insofar as their potential for intelligence is not recognized, they will not be recognized as potentially rule-governed subjects. Consequently, they will not be able to place themselves under the rules that govern our intellectual community and thereby satisfy the self-determination criterion.

The moral of the above considerations is as follows: genuine intelligence requires a being to be an embodied member of an intellectual community that recognizes it as a potential thinker. This is the only way an agent can satisfy both the embodiment criterion and the self-determination criterion. Human beings recognize other human beings as potential thinkers because they recognize themselves as intelligent creatures of the same sort. They recognize each other as deferring to public rules because they understand what cases of deference look like for creatures of their kind. However, this recognition does not extend to non-human agents. We cannot treat such beings as potential thinkers because we do not know whether and when they are capable of deferring to public rules. To know that, we would have to know something about the nature of their life form. But artificial agents have no life form to know about. They are not alive.

Recent developments in technology have given rise to two questions: Is genuine artificial intelligence possible, and, if so, can we identify when it is present? As this discussion suggests, it is a mistake to view these questions as separate from one another. The possibility of genuine artificial intelligence depends upon the ability of humans, or some other rational creatures, to recognize when it is present, since this is the only way for an artificial agent to satisfy the self-determination criterion. And artificial intelligence cannot be identified in this way, because it bears no recognizable form of life. If the search for a logical robot is to reach its conclusion, it cannot involve the production of artificial intelligence alone. Rather, as Jensen Suther notes, "we can't produce artificial intelligence without also producing artificial life."


"Hegel Against The Machines" by Jensen Suther: https://www.newstatesman.com/ideas/2023/07/hegel-against-machines-ai-philosophy?mibextid=Zxz2cZ
