

Ah. You speak of the ancient magics. Tell me…have you ever been UNCOMFORTABLY ENERGETIC?


The lesser of two evils is still…evil. Anthropic’s hands aren’t clean either…they’re just minimally less caked in blood.
BUT
One can hope that this is the ‘turn towards the light side’. If ‘don’t be evil’ can finally be made profitable, well, self-interest might actually be a lever for good. Ha.
I wish there were a clearly, unambiguously good guy in the cloud AI space. I don’t know how to make that work with economies of scale being what they are. Yes, that includes Lumo - though one has faint hope on that end too.


I’ll take “what’s a tautology” for $100, thanks Alex


…Science?
A WITCH! A WITCH!
BURN THE WITCH!
Your memory is not bad :)
The Middle East thing was likely ADCC. The Japan thing was likely PRIDE, NHB or Lumax Cup. There were many different promotions back then, before the UFC ate ‘em up.
You’d love these I think
https://en.wikipedia.org/wiki/Born_a_Champion
https://en.wikipedia.org/wiki/The_Smashing_Machine_(2025_film)


Possible. I do hope they take the more principled approach of solving the global problem for that class of question (I tried to) rather than gaming the local maximum. That’s the actual useful lever to pull.
You want generalisability, not parroting.


That’s the thing. It’s not that the LLMs can’t solve the problem…it’s the way they’re optimized.
To give the crude analogy: if most LLMs are set up for the equivalent of typing BOOBS on a calculator (the big players are happy to keep it that way; more engagement, smoother vibes, etc.), a constraints-first approach is what happens when you use the calculator to do actual maths.
2+2=4 (always, unless shrooms are in play).
I said this before, so pardon me for being gauche and quoting myself
Every reasoning system needs premises - you, me, a 4yr old. You cannot deduce conclusions from nothing. Demanding that a reasoner perform without premises (note: constraints) isn’t a test of reasoning, it’s a demand for magic. Premise-dependence isn’t a bug, it’s the definition.
People see things like Le Chat fall over and go “Ha ha. Auto-complete go brrr”. That’s lazy framing. A calculator is “just” voltage differentials on silicon. That description is true and also tells you nothing useful about whether it’s doing arithmetic.
My argument is this: the question of whether something is or isn’t reasoning IS NOT answered by describing what it runs on; it’s answered by looking at whether it exhibits the structural properties of reasoning. I think LLMs can do that…they’re just borked (…intentionally?). Case in point - see my top post.
I literally “Tony Stanked” my way to it. Now imagine if someone with resources and a budget did it.


Exactly.
“The machines tell elegant lies. Don’t trust them.”
Ok, maybe not elegant. Stupid.
TL;DR: A “knowledge tool” that can’t distinguish truth from performance, provenance from vibes, or knowledge from improvisation is not just imperfect (I can live with imperfect), it’s downright disrespectful (of both the task and the user).
I’m not having that. No one should.
PS: Or maybe they do think we’re three-toed sloths?


We even lie to our machines, eh?
https://www.youtube.com/watch?v=ORzNZUeUHAM
Qwen’s an Alibaba cook (though the router works with anything). Irrespective of that, yeah…I dunno why they tend to default to “walk”.
I mean, I can probably figure it out, but LLMs are black boxes (and I’m not a fan of that), so who can tell for sure what went into the training data.


Still…1 in 3. Woof.
A “charitable” read might be:
At the same time, I think it’s fair that, if we’re willing to do that for people, we extend a soupçon of it to the clankers. At least a bit. Like I said, I think there’s some interesting stuff going on under the hood.
Having been accused of being a clanker myself (as recently as yesterday), I’m aware that having anything positive to say about AI (even a bespoke, free-range, home-cooked LLM) is “stunning and brave”. But hey, sometimes you just have to tilt at windmills.


Thanks. Dunno why it does that. I post via Voyager and/or web. Probably I fat fingered something.
EDIT: Bah, I need to sync the code base. Fat fingers, see? Gimme 10 mins before kicking the tires.
EDIT2: Done. Fucking .toml


Sorry; brain fart. That could have been clearer. I’ll go edit it. For sake of clarity -
On a single call, only 11 out of 53 LLMs got it right (~20%)
Of those 11, 5 got it right every time across multiple tests: Claude Opus 4.6, Gemini 2.0 Flash Lite, Gemini 3 Flash, Gemini 3 Pro, Grok-4
Humans: about 71.5% got it right (so, almost 1 in 3 gave the incorrect answer)
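Since the percentages invite double-checking, here’s a quick back-of-the-envelope in Python. The counts are the ones quoted in this post (11/53 LLMs, 71.5% of humans); nothing re-measured:

```python
# Counts as quoted above; this is purely a sanity check of the arithmetic.
llm_correct, llm_total = 11, 53      # LLMs right on a single call
human_correct_rate = 0.715           # reported human accuracy

llm_rate = llm_correct / llm_total
human_wrong_rate = 1 - human_correct_rate

print(f"LLMs correct on a single call: {llm_rate:.1%}")   # -> 20.8%
print(f"Humans incorrect: {human_wrong_rate:.1%}")        # -> 28.5%
```

So the “~20%” figure checks out, and 28.5% of humans wrong is indeed a bit under 1 in 3.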


Having said that…let’s see how it shakes out. Sometimes, good things happen for good reasons.


…because every now and again, for the briefest of moments, one of them shows themselves not to be run by entirely evil, lecherous humps?
Blink and you (or the shareholders) might miss it.


I think Beyond Sunset came out first but dunno.
Yes, those TCs are definite standouts. Think of them (quite literally) as “What if Fallout but Doom?” and “What if CP2077 but Doom?”. If you like either of those, you should like the TCs. There’s a good Wolfenstein one (I know, I know…very meta) called Blade of Agony that is also astonishingly great.
Shame about the GZDoom thing. People are ridiculously over-sensitive to AI anything at the moment. C’est la vie


Can you run GZDoom? Because if you can (and didn’t already know about this; sorry if ass-u-me), you just became one of the lucky 10,000.


Are… are you the fabled walrus? Goo goo g’joob?
I specifically bought mine for this very reason!
Roborock S5 Max.
It’s an older model - and it has all the connectivity/WiFi crap in it - but if you never sync it to the app in the first place (or allow it WiFi access), it works fine with local LIDAR in offline mode.
Good vac; recommended.
PS: I can’t believe they called that app Valetudo (I get it - Hygieia). Just…that word has a very different meaning in many folks’ brains.
I’m not sure they still teach the FANBOYS system - at least not as I learned it: a “use this, not that” prescription for tightening sentence structure.
A quick DuckDuckGo search suggests they are now, and perhaps always have been, used in conjunction with commas. Which, frankly, makes my skin crawl.
“She was tired, and she needed to eat.”
“It was the best of times, and it was the worst of times.”
Evil. Great Evil.
Perhaps I’m caviling against flabby sentences rather than flawed punctuation, but I maintain that the construction reliably signals the former.