L&S claim that “mastery of language” is “both a necessary and a sufficient condition for AGI” (p. 217), but “the challenges humans face in understanding language are formidable because of the immense complexity of the signals we receive. How, then do we succeed in the task?” (p. 219). Their answer has three parts:

First, we humans share linguistic capabilities and a common ground of shared knowledge. Second, language itself serves to constrain the space through which a hearer must search to determine the target intended by the speaker . . . . And then third, each speaker is able to actively form and interpret utterances based on his [sic] own intentions of the moment. (p. 89)[11]
And why is this something that humans can do and machines can’t? Because “We
can describe and explain some of what occurs in the course of such interactions;
but we cannot build mathematical models that will enable us to predict what will
occur” (p. 89). But why is prediction necessary for AGI? Wouldn’t it suffice for
a (partial) descriptive or explanatory model to enable an AI to converse and, more
generally, to act in the real world, albeit imperfectly?[12] And why is perfection needed? Maybe imperfect action suffices.
So their argument seems to be that, because we don’t understand how we “succeed” in the task of language use, we cannot build an AI that would succeed. And—
crucially—the reason is that “there is no distribution from which one could sample
adequate training material” (p. 233), although this focuses on just one method of
model construction: statistical modeling for machine learning.
As to whether there could be some other kind of modeling (e.g., symbolic
modeling of common sense, as in Brachman and Levesque 2022), they say this:
Complex systems (including language) “do not meet the conditions needed for the application of any *known* type of mathematical model” (p. 235, my italics).
The missing premise here is that there are no other types of mathematical models
besides those that we (now) know. But short of showing (which they don’t) that any
other type would be logically impossible, there is no reason to believe this missing
premise.
4 Premise 2: The Human Brain Is a Complex System
A “system” is “a totality of dynamically interrelated elements . . . associated with
some process—the system’s behaviour” (p. 117). “Logic” systems can be modeled
[11] Negotiation is involved here; see Rapaport 2003.

[12] Both Lake et al. 2017 and Chomsky et al. 2023 distinguish between explanation via model-building and prediction via neural-network deep-learning programs, arguing that, while both explanation and prediction are necessary, explanation may be more important.