23 Comments
Anastasia Leventi-Peetz:

Very well said. This is the so-called OOD (out-of-distribution) data that confuses neural networks when they have to make a decision on it. However, real-world data is exactly that: OOD data. Neural networks produce shortcuts and spurious correlations; they don't produce causal relations or explanations, because they have no knowledge and don't understand context!

SubThought:

The whole purpose of a cognitive system is to deal with information outside of the system. Read. You are very ignorant, unfortunately, and should read up on who Daniel Dennett was. That would be the first thing. "He called me Daniel Dennett Junior." I wonder who Daniel Dennett is? Let me look that up. You cannot be spoon-fed everything in life. Your ignorance is on total display for the world to see....

David Hsing:

Hm. If everything is "read" instead of "present your damned argument," then I can also just tell you to read: https://towardsdatascience.com/artificial-consciousness-is-impossible-c1b2ab0bdc46/ Newsflash: that's not how a discussion works, buddy. You say "its purpose is to deal with external information"; well, no shit... I'm asking exactly HOW. When you actually do bother to do so, I'm going to show you exactly how it's NOT dealing with external information.

I know exactly who Dennett was... he (and his fans) are the ones who handwave an argument away by simply claiming it's an "intuition pump." I don't take him or his non-reply to The Knowledge Argument (why don't you find out about that too?) seriously.

SubThought:

The argument was already presented here: https://zenodo.org/records/15522356

Tell me what you do not agree with in that paper.

David Hsing:

My goodness, do you even know how LinkedIn works? Why did you even tag my name in that reply to Martin? You could have just dissed me without having it appear in my notification stream... Good grief, I'll give you the benefit of the doubt, but you're acting like a dumb little troll if you actually wanted me to see it on purpose.

David Hsing:

LOL, you are a laugh a minute. Now, like many other schoolkids I run into, you're trying to accuse me of being a Luddite. Ah yes, please also accuse Gary Marcus of the same! LOLOLOL, you are completely out of bullets... https://garymarcus.substack.com/

David Hsing:

You yourself basically already laid it out as an information processing system by saying "that's how people work" on LinkedIn. No, that's not how people work. First, people aren't just information processors; see the Knowledge Argument (the Mary's Room thought experiment): https://plato.stanford.edu/entries/qualia-knowledge/ Second, such an information-processing paradigm doesn't match practical reality: https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer tl;dr: the brain is not a darned computer, as explained here: https://www.theguardian.com/science/2020/feb/27/why-your-brain-is-not-a-computer-neuroscience-neural-networks-consciousness

SubThought:

You never read the link to Building Sentient Beings, eh? I see. Nor my response on LinkedIn? Okay, Daniel Dennett Junior. You should do your homework. The same referents people have are the same (analogous) referents that cognitive systems can have.

MM.

David Hsing:

What exactly in that entire thing deals with information outside of the system? Exactly nothing.

I don't get the Dennett reference.

SubThought:

Correct. We need sentient beings that perceive the world...

https://zenodo.org/records/15522356

https://a.co/d/eqYOLBX

David Hsing:

Machines are inevitably locked into signal/load matching. https://davidhsing.substack.com/p/machines-cant-think

Thomas Baudel:

This is the distinction Aristotle makes between deduction and induction. Both are valid methods to acquire knowledge, even though they both have limits.

Yet Aristotle also describes other means of reaching conclusions, such as abduction (A ⇒ B; A is likely; I observe B; so I infer A, e.g., "if I wake up in the morning and see the grass in my garden is wet, I infer it rained during the night").
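As a toy sketch of that abductive schema (the rule base and predicate names here are invented purely for illustration): abduction runs the rules backwards, from an observed effect to its plausible causes.

```python
# Abduction as inference to the best explanation: given rules of the
# form "cause -> effect" and an observed effect, propose the causes
# that could explain it. Note the inference is plausible, not certain.

# Map each observable effect to its known possible causes.
rules = {
    "grass_wet": ["it_rained", "sprinkler_ran"],
    "street_wet": ["it_rained"],
}

def abduce(observation, rules):
    """Return the candidate explanations for an observed fact."""
    return rules.get(observation, [])

print(abduce("grass_wet", rules))  # ['it_rained', 'sprinkler_ran']
```

Seeing wet grass yields two hypotheses, which is exactly why abduction can mislead: the grass being wet does not deductively entail rain.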

Alexander Naumenko:

I agree that statistical methods are not the way to go. I think the same about logical methods. I have gone on to develop a theory of intelligence to be used as the foundation for a novel AI paradigm that could potentially address the issues of current systems.

Consider reading this - https://alexandernaumenko.substack.com/p/intelligence-and-language

Andreas Wandelt:

Saw your post via LinkedIn; I am very much with you on this as far as current practices, overblown expectations for GenAI, etc. are concerned. Just thinking:

When it comes to your distinction between errors due to limited human experience vs. stochastic mechanisms in machines: we are not Vulcans, not logical machines. Looking at Asch's conformity experiments, Kahneman's and Tversky's work, etc., I would argue that the quality of human decision making is influenced quite strongly by all kinds of quasi-stochastic influences, from social conformity, via all our other biases (recency bias etc.), to fatigue and stress at home.

Are you not making an assumption like the classical economists did with their "homo oeconomicus"? I can follow your conclusions if I make that assumption. If I look at real-life situations, I would argue that the effect matters more: looking at the sum of all influences, does the decision making of "AI", in whatever evolved form, produce better results than human decision making?

Looking at the recent publication about the neuronal firing pattern in an LLM when it sees a misspelled word like "rihgt" being very different depending on whether the context is code or an essay: is that not the very rudimentary start of abstraction/contextual understanding?

I am fully with you that current systems are not there yet. I'm just not sure about your "never" claim. I see artificial systems evolving toward other ways of learning (e.g., robotic bodies giving experience of space and time), and with refined architectures. Yet they will still be neural nets, and I see no difference *in principle*.

Can "knowledge", can epistemiologigal categories not emerge from pattern matching? Are you sure this is not what happened in humans?

David Hsing:

"Can "knowledge", can epistemiologigal categories not emerge from pattern matching? Are you sure this is not what happened in humans?"

This thought experiment of mine gives you the answer, the same answer that Searle gave decades ago:

You memorize a whole bunch of shapes. Then, you memorize the order the shapes are supposed to go in so that if you see a bunch of shapes in a certain order, you would “answer” by picking a bunch of shapes in another prescribed order. Now, did you just learn any meaning behind any language?
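That shape-memorization setup can be sketched as a bare lookup table (the symbols and rule pairs below are invented for illustration): the table "answers" by prescribed order alone, with no access to meaning.

```python
# A lookup table mapping memorized input shape sequences to prescribed
# output shape sequences. It produces "correct" answers purely by
# symbol matching; nothing in it represents what any shape means.

RULES = {
    ("△", "□"): ("○", "◇"),
    ("○", "△"): ("□", "□"),
}

def answer(shapes):
    """Return the prescribed output for a memorized input sequence."""
    return RULES.get(tuple(shapes), ("?",))

print(answer(["△", "□"]))  # ('○', '◇')
```

The function succeeds on every memorized sequence and fails on everything else, which is the point of the thought experiment: flawless symbol shuffling, zero semantics.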

Andreas Wandelt:

The Chinese room experiment does not, IMHO, apply to what I proposed. I totally agree that current systems, based on language alone, do not cut it, and I think I wrote as much. I proposed systems which we would evolve, for example by adding learning/experience dimensions beyond language (a robot body). That breaks the logic of that experiment, since any sensorimotor input was deliberately excluded from it. The trained entity would still be a neural net, though, and if I understood you correctly, you said neural nets can in principle not cross that threshold. If you meant that in a narrow way, like "neural nets trained on language alone cannot...", then we are in agreement, but this was not my point.

David Hsing:

"adding learning/experience dimensions beyond language (robot body). " Do you realize that's hand waving? What does that "body" take as input and what does it process? Does it handle physical/electrical loads like every single machine there is and ever will be? You haven't thought this through one bit. It must handle loads, and that locks it in. Machines work by moving loads.

Andreas Wandelt:

Disengaging from this thread as well. I leave it up to you to decide for yourself whether that is because I have realized that *I* don't think things through well enough, or because I have realized that *you* don't. I think every reader of this will be able to make up their own mind on the answer.

Peace

David Hsing:

"human decision making quality is influenced quite strongly by all kinds of quasi-stochastic influences" Influences of X doesn't determine X in any way. The mind is the mind and the influences aren't the mind. Logic 101.

"I would argue that the effect matters more" ...That's behaviorism. Explain why I should give that line of thinking any credit whatsoever.

Andreas Wandelt:

Your statement that influences do not determine the influenced in any way is, to me, nonsensical, and your statement that the mind is the mind and the influences aren't the mind is, to me, trivial and beside the point. Sorry that I was unsuccessful in finding any deeper meaning in them. It is likely not worth the effort to go down the rabbit hole and define the terms, so let us not do that.

Your claim on behaviorism is factually wrong. Skinner argued that *only* the observable effect matters. I deliberately did not say that.

I will disengage from following up on this particular thread. My naturally evolved pattern matching algorithm detects the epistemological category of "dogma" :-)

Peace!

David Hsing:

Influences are not determinants. A speed limit sign may influence me to do something, but it's not the determinant. Do I need to dumb it down more? Look up the dictionary definitions of the words "influence" and "determinant". You're acting like those schoolkids who call me "dogmatic" whenever I point out simple facts.

David Fleming:

I can give you the diagrams of the COVID-19 CRISPR recipe sent by one single brilliant, wise, well-meaning UNC scientist six months prior to its receipt by an equally brilliant, well-intended Wuhan virologist. No, we don't make a mistake with flawed zero-tolerance ethics. 5MM dead, 15MM permanently impaired globally. You didn't read that these two culprits got executed for their crime, right? Because their motives were good. They still work today. Understand? It is no longer evil intent by a bad actor or rogue nation; it is the nice folks that must be contained. Feel free to disagree.
