

Proof that global warming is not real!!! Read your science… if something gets HOTTER it EXPANDS!!! Those scientist cucks have cucked themselves good this time!!!
/s (in case it’s needed)


The AI in the article you linked was Open Claw, which is an open-source version of one of these persistent AIs, so you’re right. It links to LLMs like Claude, but Anthropic haven’t actually released their own version yet, which is why it was showing up in the original files as ‘built but not yet shipped’.


Currently the AI only runs for a short time after you provide a prompt. Say you ask it to ‘draft a letter to my congressman demanding an end to the war’: the AI reads what you wrote, outputs its interpretation of what you want, and then stops.
What they’re talking about here is something very different: something that can keep processing inputs all the time. It would be ‘aware’ (depending on what you give it access to) of incoming emails, what you’re working on in other programs, calendar events, etc. The idea is that it could potentially interrupt you with suggestions, maybe even anticipate what you’ll want and do it for you.
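To make the distinction concrete, here’s a rough sketch in Python. All the names and the fake “model call” are made up for illustration; this is just the shape of the two patterns, not any real product’s API.

```python
import queue

def one_shot(prompt: str) -> str:
    """Today's pattern: run once per prompt, return output, then stop."""
    # Placeholder standing in for a real model call.
    return f"[model output for: {prompt}]"

def persistent_assistant(events: "queue.Queue[dict]") -> list[str]:
    """The 'always-on' pattern: keep consuming events (emails, calendar
    changes, file edits...) and decide whether to act on each one."""
    responses = []
    while True:
        event = events.get()
        if event["type"] == "shutdown":
            break
        # A real agent would call a model here and might interrupt
        # the user with a suggestion, or act on their behalf.
        responses.append(one_shot(f"{event['type']}: {event['payload']}"))
    return responses

q = queue.Queue()
q.put({"type": "email", "payload": "meeting moved to 3pm"})
q.put({"type": "shutdown", "payload": None})
print(persistent_assistant(q))
```

The point is the `while True`: the one-shot function exits after a single answer, while the persistent version never stops watching its inputs unless told to.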
Obviously this is going to be risky at first. We’ve already seen stories of AIs deleting entire projects; what could they get up to if they’re allowed to be your online stand-in with access to everything on your device?
What’s interesting, to me, is that’s exactly how people hedge in the fringe UFO community too.
Ha! True. Very true. I find this scenario compelling, but it rests on a series of assumptions that individually seem plausible and that I have no way to evaluate all together. It’s like the Drake Equation: because the probabilities are multiplicative, even tiny adjustments to a few of them make a huge difference to the final answer.
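You can see the multiplicative effect with a few lines of Python. The probabilities below are completely made up; the only point is how much the product moves when you nudge just three of the factors.

```python
from math import prod

# Hypothetical chain of assumptions, each with a made-up probability.
optimistic = [0.9, 0.8, 0.9, 0.7, 0.8, 0.9]

# Same chain, but three of the six factors nudged downward.
pessimistic = [0.9, 0.5, 0.9, 0.4, 0.8, 0.5]

print(prod(optimistic))   # ≈ 0.33
print(prod(pessimistic))  # ≈ 0.065
```

Changing half the estimates by a modest amount shrinks the overall probability roughly fivefold, which is why arguing over any single factor in a Drake-style calculation rarely settles anything.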
The thing is, though: if there really is even a tiny chance of the ultimate outcome of this thought experiment coming true (i.e. the end of humanity), then we should probably address it. And what that would look like is stopping the AI companies from doing any more research until they can prove their models are safe, which should make people who are more concerned about AI slop happy too. Everybody wins by hitting the brakes. (Edit: well, Sam Altman doesn’t, but I’m not going to lose sleep over that.)
It’s not meant to be a specific prediction; it’s just a plausible (for when it was written) scenario. Don’t worry about the actual years, which could be off by an order of magnitude. Just decide for yourself whether any of the assumptions are completely wrong.


Nobody is programming those laws because it’s not possible with the way LLMs are currently built and trained. Instead of the Three Laws, which are inviolable but in certain edge cases insufficient, we have Anthropic’s Constitution: 23,000 words’ worth of good intentions that Claude should keep in the back of its mind while it does whatever it wants to do.
If the UK Government is so eager for sovereign AI capability, why is it relying on a US company to design, build, and presumably run it?