Posted by mhb 13 hours ago
The argument isn’t about whether machines can think, but about whether computation alone can generate understanding.
It shows that syntax (in this case, the formal manipulation of symbols) is insufficient for semantics, or genuine meaning. That means that whether you're a machine or a human being, I can teach you every grammatical and syntactic rule of a language, but that is not enough for you to understand what is being said or for meaning to arise, just as in his thought experiment. From the outside it looks like you understand, but the agent in the room has no clue what meaning is being imparted. You cannot derive semantics from syntax.
Searle is highlighting a limitation of computationalism and the idea of 'Strong AI'. No matter how sophisticated you make your machine, it will never achieve genuine understanding, intentionality, or consciousness, because it operates purely through syntactic processes.
This has implications beyond the thought experiment; the idea has influenced Philosophy of Language, Linguistics, AI and ML, Epistemology, and Cognitive Science. To boil it down, one major implication is that we lack a rock-solid theory of how semantics arises, whether in machines or in humans.
Is the assumption that there is internal state, and that the rulebook is flexible enough to produce the correct output even for things that require learning and internal state?
For example, the input describes the rules of a game and then starts playing it, and the Chinese room is expected to produce the correct moves?
It seems that without learning and state the system would fail to produce the correct output, so it couldn't possibly be said to understand.
With learning and state, at least it can get the right answer, but that still leaves the question of whether that represents understanding or not.
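A minimal sketch of what that kind of purely syntactic rule-follower with internal state might look like (this is just an illustration, not anything from Searle; the rulebook and the running-total "game" are invented):

    # A purely syntactic "room": it pattern-matches input symbols against a
    # rulebook and updates an internal state, with no grasp of what the
    # symbols mean. Correct answers require the state, not understanding.
    def make_room():
        state = {"total": 0}  # internal state the rulebook is allowed to update

        def rulebook(symbol_string):
            # Rule 1: a string shaped like "ADD <n>" updates the state
            # and echoes the new total back.
            if symbol_string.startswith("ADD "):
                state["total"] += int(symbol_string[4:])
                return f"TOTAL {state['total']}"
            # Rule 2: the string "RESET" clears the state.
            if symbol_string == "RESET":
                state["total"] = 0
                return "TOTAL 0"
            # Fallback rule: pure shape-matching, nothing about meaning.
            return "UNRECOGNIZED"

        return rulebook

    room = make_room()
    print(room("ADD 3"))   # -> TOTAL 3
    print(room("ADD 4"))   # -> TOTAL 7  (the right answer needs the state)
    print(room("RESET"))   # -> TOTAL 0

The point of the sketch is only that adding state lets the room keep getting the right answer in an ongoing exchange; whether that counts as understanding is exactly the open question.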
It's like understanding how to bake a cake. I can have a simplistic model, for example making a cake from a boxed mix, or a more complex model, combining the raw ingredients in the right proportions. Both involve some level of understanding of what's necessary to bake a cake.
And I think AI models have this too. When they have some base knowledge of a topic and you ask a question that requires a tool, without asking for a tool directly, they can suggest one to use, which, at least to me, makes it appear that the system as a whole has understanding.
Intelligence without consciousness...