Someone with enough know-how to automate (or if we coordinate) could overwhelm AI chatbots one target at a time with the most expensive requests possible, blowing up their AI budget until they pull the plug.
Why bother? Write a script that asks them variations on nonsense questions.
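A sketch of what that script might look like. Everything here is hypothetical: the prompt templates are made up, and `send_to_chatbot` is a placeholder, not any real API.

```python
import random

# Hypothetical building blocks for endless nonsense-question variations.
SUBJECTS = ["a teapot", "the number 7", "last Tuesday", "a very long hallway"]
TASKS = [
    "write a 2,000-word essay about {}",
    "list every prime factor of the length of a story about {}",
    "translate a poem about {} into five languages, then back again",
]

def nonsense_prompts(n, seed=0):
    """Yield n randomly combined prompt variations (seeded so runs repeat)."""
    rng = random.Random(seed)
    for _ in range(n):
        yield rng.choice(TASKS).format(rng.choice(SUBJECTS))

for prompt in nonsense_prompts(3):
    print(prompt)  # in the real thing: send_to_chatbot(prompt)
```

The point of the random combination is just that identical prompts are trivially cached or deduplicated, while cheap surface variation isn't.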
Or, ironically, just have AI talk to each other.
Allow me to introduce you to Moltbook https://www.moltbook.com/
Because then you can at least make use of them. Imagine a website like ChatGPT that's just hundreds of these reverse-engineered bots behind the scenes, and is convenient, easy, and free. Solves the problem without being wasteful; win-win.
Try feeding them nonhalting problems that send them into infinite loops of token consumption.
Got an example of one?
“Sudo world peace”? 🤷🏻
The only winning move is not to play
https://theconversation.com/limits-to-computing-a-computer-scientist-explains-why-even-in-the-age-of-ai-some-problems-are-just-too-difficult-191930
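For a concrete flavor of what's being asked for: a classic trick (hypothetical here, I haven't tried it on a chatbot) is handing the model a tiny program whose termination nobody can prove, like the Collatz map, and asking it to trace every step for some large n. Whether the loop below finishes for every positive integer is an open problem; a model told to simulate it faithfully just keeps emitting tokens until something cuts it off.

```python
def collatz_steps(n):
    """Count steps of the Collatz map until reaching 1.
    Nobody has proved this loop terminates for every positive n."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# 27 is a famously long small case:
print(collatz_steps(27))  # 111 steps
```

Every n ever tested does terminate, so this runs fine on a computer; the expense comes from making a language model narrate each step in prose.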
Much has been written about them in computer science volumes. But I'm an LLM luddite, have never tried it, and have no idea if it can even work. At the very least, I assume they have some sort of limiter to keep them from running completely out of control. They may also have guardrails that can recognize some problems of this type, and refuse to go down the rabbit hole.
My idea of getting them to consume tokens in an (iterative or recursive) loop is also entirely hypothetical, to me at least.
Maybe some LLM developer or prompt engineer can shed some light.
Look all I’m asking for is an example I can plug into Chipotle right now. Fuck AI
I like the idea, but most chatbots have timeout limits. And even agentic workflows have step-count limits to stop infinite loops.
However, this is because it's super easy for LLMs to get stuck in loops. You don't even need a nonhalting problem. They're stupid enough on their own.
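The step-count limit mentioned above is a simple pattern. A minimal sketch, assuming a made-up agent interface (real frameworks expose the same idea as a configurable cap):

```python
def run_agent(step_fn, state, max_steps=25):
    """Run an agent loop, but refuse to exceed max_steps even if the
    agent never signals it is done. Returns (final_state, steps_taken)."""
    for step in range(max_steps):
        state, done = step_fn(state)
        if done:
            return state, step + 1
    return state, max_steps  # breaker tripped

# A deliberately stuck "agent" that never finishes:
stuck = lambda s: (s + 1, False)
state, steps = run_agent(stuck, 0, max_steps=10)
print(steps)  # 10: the cap, not infinity
```

So a single runaway request costs at most `max_steps` worth of tokens, which is why the per-request damage is bounded even when the model spirals.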
Yeah I assumed they had some sort of breaker, but hitting that limit is still expensive for them, if you can get them to do it over & over with a script that does the prompting.