

For reference, the open-source project appears to be OpenCog, founded by Ben Goertzel, who at least until 2010 held the title of "Director of Research" at SIAI, the relationship ending because he wasn't a true believer in doom.


I’ll gladly endorse most of what the author is saying.
This isn’t really a debate club, and I’m not really trying to change your mind. I will just end on a note that:
I’ll start with the topline findings, as it were: I think the idea of a so-called “Artificial General Intelligence” is a pipe dream that does not realistically or plausibly extend from any currently existing computer technology. Indeed, my strong suspicion is that AGI is wholly impossible for computers as we presently understand them.
Neither the author nor I really suggest that it is impossible for machines to think (indeed, humans are biological machines), only that it is likely, though nothing so stark as inherent, that Turing machines cannot. “Computable” in the essay means something specific.
Simulation != Simulacrum.
And because I can’t resist, I’ll just clarify that when I said:
Even if you (or anyone) can’t design a statistical test that can detect the difference in a sequence of heads or tails, that doesn’t mean one doesn’t exist.
I meant that the test does (or at least could) exist; it’s just not achievable by humans. [Although I will also note that for methods that don’t rely on measuring the physical world (pseudo-random number generators), the tests designed by humans are more than adequate to discriminate the generated list from the real thing.]
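To make that bracketed aside concrete, here is a minimal sketch (my own illustration, not from the essay) of a human-designed test catching a weak pseudo-random generator. The low bit of a linear congruential generator with a power-of-two modulus (the classic glibc-style constants are used below) strictly alternates 0, 1, 0, 1, so a simple runs count separates it from a fair coin immediately, while a better generator passes this particular check:

```python
import random

def count_runs(bits):
    """Count runs: maximal blocks of identical consecutive symbols."""
    return 1 + sum(1 for a, b in zip(bits, bits[1:]) if a != b)

def lcg_low_bits(n, seed=1, a=1103515245, c=12345, m=2**31):
    """Low bit of a power-of-two-modulus LCG. Since a and c are odd,
    the low bit satisfies x' = (x + 1) mod 2, i.e. it strictly alternates."""
    out, x = [], seed
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x & 1)
    return out

n = 10_000
weak = lcg_low_bits(n)
good = [random.getrandbits(1) for _ in range(n)]

# A fair coin yields about n/2 runs; the alternating low bit yields exactly n.
print(count_runs(weak))  # 10000 for the LCG's low bit
print(count_runs(good))  # ≈ 5000
```

This is of course only one test; the point is just that such tests exist and are easy to write for sufficiently weak generators.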


Even if true, why couldn’t the electrochemical processes be simulated too?
But even if it is, it’s “just” a matter of scale.
I do know how to write a program that produces results indistinguishable from a real coin for a simulation.
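For what it’s worth, a minimal version of such a program (my sketch, not necessarily what the commenter had in mind) just draws bits from a cryptographically secure generator, whose design goal is precisely that no feasible statistical test can tell its output from true randomness:

```python
import secrets

def coin_flips(n):
    """Simulate n fair coin flips using a cryptographically secure
    generator (secrets wraps the OS entropy source). The relevant
    standard here is computational indistinguishability: no efficient
    test should separate this sequence from real coin flips."""
    return ["H" if secrets.randbits(1) else "T" for _ in range(n)]

print("".join(coin_flips(20)))
```

Whether that counts as “indistinguishable” in the essay’s stronger, non-computational sense is exactly the point under dispute.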
As a summary,


That’s because there’s absolutely reams of writing out there about Sonnet 18—it could draw from thousands of student essays and cheap study guides, which allowed it to remain at least vaguely coherent. But when forced away from a topic for which it has ample data to plagiarize, the illusion disintegrates.
Indeed, any intelligence present is that of the pilfered commons, and that of the reader.
I had the same thought about the few times LLMs appear to be successful in translation (where proper translation requires understanding): the model isn’t exactly doing nothing, but a lot of the work is done by the reader striving to make sense of what he reads. Because humans are clever, they can sometimes glimpse the meaning through the filter of the AI mapping one set of words onto another, given enough context (until they really can’t, or the subtleties of language completely reverse the meaning when not handled with the proper care).
Also, I realize the word often gets used fuzzily that way even in general, but I suspect what they mean is epistemology, not ontology.