Reading about neuro-symbolic AI has cemented my fatalistic outlook regarding the hard limitations of machines
A laudable mission with inevitable practical results, but haunted by a naive presumption
I’ve seen neuro-symbolic AI mentioned many times as a sort of “new hope,” not only as a way out of the hot mess created by neural networks but also as a way to resolve the grounding problem.
Well.
Today I read "Rethinking Grounding" by Tom Ziemke.
Here’s the damning passage, at least to me:
That means there have to be causal connections, which allow the artificial agent’s internal mechanisms to interact with their environment directly and without being mediated by an external observer.
...That's a nice picture, but there can be no such thing. As soon as you start designing or programming any system, the game is over. You, as the programmer and the designer, ARE the mediator. Such independence is impossible, and the expectation is naive. Imagine something like "programming without programming" or "design without design"... that's just another way of saying it. It's an unintended rewording of a pipe dream, phrased in a way that doesn't sound like one.
Machines are epistemically landlocked. There is no such thing as “an external world” to them. Machines push around loads INSIDE them.
“But-but-but we’re also machines!” some dream-addled kid may scream with teary eyes.
Uh, no. Learn what a machine is.
What’s actual grounding? Let me drop a hint. What’s “non-conscious reasoning”? Can something that’s not conscious “reason”? About what, exactly? No… Don’t just say things like “letters and numbers,” because those are concepts of a conscious mind. That’s you playing a mediator game (**points a coupla paragraphs up, to where the word “mediator” first came up**) …I’ve explained this stuff before:
To the machine, codes and inputs are nothing more than items and sequences to execute. There’s no meaning to this sequencing or execution activity to the machine. To the programmer, there is meaning because he or she conceptualizes and understands variables as representative placeholders of their conscious experiences. The machine doesn’t comprehend concepts such as “variables”, “placeholders”, “items”, “sequences”, “execution”, etc. It just doesn’t comprehend, period. Thus, a machine never truly “knows” what it’s doing and can only take on the operational appearance of comprehension.
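To make that concrete, here is a trivial sketch (my own hypothetical example, not something from the passage above): to the Python interpreter, the “meaningful” version and the “meaningless” version below are the same sequence of operations. Whatever meaning there is lives entirely in the names the programmer chose.

```python
# Two versions of the same computation. The names in the first version mean
# something to the programmer; the names in the second mean nothing to anyone.
# To the interpreter, both are identical: load three values, multiply, return.

def simple_interest(principal, monthly_rate, months):
    """What the programmer reads as 'interest on a loan'."""
    return principal * monthly_rate * months

def f(a, b, c):
    """The same operations with the 'meaning' stripped out."""
    return a * b * c

# The machine produces identical results either way; it never held the concept
# "loan" (or "variable", or "placeholder") in the first place.
assert simple_interest(1000, 0.01, 12) == f(1000, 0.01, 12)
```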
Being epistemically landlocked means there’s no such thing as actual grounding in machines.
Reading about neuro-symbolic AI confirmed my suspicion: the “grounding” of machines is, and can only be, faked through simulations of theoretical mechanisms via programming and design, not through any kind of actual grounding. (That’s at least two layers of “it ain’t so” cake right there: first, it’s a simulation; second, it’s theoretically derived. Read all about what “it’s a theoretical model” in turn means here. If you don’t have the time for that tome, just read the very last line of this article.)
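As a purely hypothetical illustration of what such “grounding by design” tends to look like in practice (the names and thresholds below are mine, not taken from any actual neuro-symbolic system): the mapping from raw readings to symbols is chosen and written down by a programmer, who thereby remains exactly the mediator the quoted passage wanted to remove.

```python
# A toy "symbol grounding" layer: raw sensor values in, symbols out.
# Every piece of it -- the thresholds, the symbol names, the very idea that
# 25.0 means "warm" -- was decided by a designer, not by the machine.

def ground_temperature(celsius: float) -> str:
    """Map a raw reading to a 'grounded' symbol (designer-chosen cut-offs)."""
    if celsius < 5.0:
        return "COLD"
    elif celsius < 25.0:
        return "MILD"
    else:
        return "WARM"

# The machine shuffles numbers into strings exactly as instructed.
# Whatever "WARM" is about, that aboutness was supplied from outside.
print(ground_temperature(30.2))  # -> WARM
```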
Machines will just have to "fake it until they make it" performance-wise, using whatever means are available to increase performance until they get "good enough" for whichever specific application is at hand (unless that application is "full autonomous driving"... forget THAT).
I'm going to conclude with the following quote:
"All models are wrong, but some are useful." -George E. P. Box
'Reasoning and causality fall out of language,' a quote from cognitive scientist John Ball. We all learn our native language, or mother tongue. Studying that, and the world's languages, unlocks the building blocks of language.
With the help of the linguistic framework Role and Reference Grammar (RRG), we can now exploit its linking algorithm, which connects syntax with semantics, for implementation on machines.
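To give a rough idea of what such a linking step might look like in code (a toy sketch of my own, not John Ball's system and not a real RRG implementation): a parsed transitive clause is mapped to a crude logical structure, with the subject linked to the Actor macrorole and the object to the Undergoer.

```python
# Toy syntax-to-semantics linking in the spirit of RRG's linking algorithm.
# Real RRG linking involves the layered structure of the clause, macrorole
# assignment rules, and much more; this only shows the general shape of the idea.

def link_clause(subject: str, verb: str, obj: str) -> dict:
    """Link a simple transitive clause to a crude logical structure."""
    return {
        "predicate": verb,
        "actor": subject,      # RRG macrorole: the doer
        "undergoer": obj,      # RRG macrorole: the affected participant
        "logical_structure": f"do'({subject}, [{verb}'({subject}, {obj})])",
    }

print(link_clause("dog", "chase", "cat"))
# {'predicate': 'chase', 'actor': 'dog', 'undergoer': 'cat',
#  'logical_structure': "do'(dog, [chase'(dog, cat)])"}
```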