

We are moving to the new War as a Service subscription model


Instead of making up fake AI efficiency breakthroughs to justify layoffs, companies can now start saying they do it to save gas.


Not only that: from the article, they are actively trying to become a Consultancy As A Service company, where somehow other companies would pay a subscription fee to… talk to their AI, I guess?
The only time I’ve had anything to do with PwC, it was to get their advice on a compliance / tax-related process. And it was less about the process itself or the three-page PDF they produced (which much cheaper companies could have done better) and more because their “seal of approval” would give my company some leverage if the IRS came to audit us. “This was designed with PwC” means “we tried really, really hard to abide by the incredibly confusing wording of the law”.
I doubt that “we asked PwC’s chatbot” will have the same level of clout, but these guys have connections everywhere so I’m sure they will lobby pretty hard to get some ad-hoc law or some level of “certification” on the output of their future AI.


The picture in the article clearly shows a virtual lady trying to kick a guy’s virtual balls. I think that says all one needs to know about the experience.


Here lies DJT, a minor-fucking joke


Layoffs aren’t caused by AI efficiency. It’s the reverse. Layoffs and other aggressive cost-cutting cause CEOs to blather about future AI efficiencies.
Efficiency is how CEOs justify still being able to run (no, GROW) their companies with 40% fewer people. Besides AI, there is the dear old “you have to work harder” efficiency (see: 996 culture, or Uber) and the organizational efficiency where they are all “removing managerial layers to enable quicker execution” (see Amazon, for instance).
See how these things all became fashionable again at the same time among tech company CEOs? It’s because they are just excuses and hopes, at this point. And AI is the least bad-sounding of them, because it smells like progress, magic and automation (while even the most rabid of investors will recognize that working employees to death doesn’t scale beyond the limited number of hours there are in a day).
Speaking of which… when we get there with our pitchforks and burn down the data centers, could you give us a 20-minute lead?


is “potato frontier” an auto-correct fail for Pareto or a real term? Because if it’s not a real term, I’m 100% going to make it one!


AI-washing layoffs. They will replace jack shit with AI but that’s a better story than “we have to reduce costs because we don’t have cheap ways to refinance our $1B debt”.
Credit isn’t cheap and Iran is not going to make it cheaper, investments in software have tanked, so the only story that can still be told with an almost straight face is that they still have big growth opportunities thanks to the magic of AI.


Maybe there’s some alien civilization who’s actively pointing a radio signal in our direction so they can communicate with us. I don’t know… maybe the aliens are observing how friendly we are to each other and really want to connect?


I’m not a gamer, but besides people getting stuck at one point of an otherwise great game, I’ve read that people were paying gamers in other countries to play as them and “power up” their characters. If that’s true, it could conceivably be a “job” for AI.
On the other hand, why do people buy games that are so frustrating that you actively pay money to someone (person or AI) to play them for you? It goes completely against my idea of what a game represents.


But Oracle was building those data centers for OpenAI. OpenAI is going to be used by the Pentagon. Bailing Oracle out is now a matter of National Security!! If this has to come off of the taxes paid by the people they just laid off, that’s unfortunate but… have I mentioned National Security?


Thank you for your answer!
But is there a register somewhere recording that @gloog@fedia.io is a person who was born in Your-city/Your-state and is a US national? So, even if you don’t need to show an ID to prove you are indeed gloog, can a gloog be on the registered-voters list if they are not a US citizen?
I’m asking because I’ve read in other posts that the process to get a passport, or even a birth or marriage certificate, seems to be relatively complex, while here you can basically download your marriage certificate online. But this relies on the fact that there are city- and nation-wide databases recording that a person with my name was born in X, is a Y national, is married to Z, and is the father of W. So if I can prove my identity as andallthat, all these other things (including nationality) follow almost “for free”, or at least more easily.
So I was wondering if the key difference might not be proving citizenship per se, but the fact that records are not centralized, so it’s harder to go quickly from “I am this person” to “this person is a US national”?


I am not from the US, so I’m also mentally comparing with what happens in my country. Here, the place where you’re registered to vote has a list of all voter names and birth dates. You get there to vote, show a form of valid ID (driver’s license is a valid one), you can vote and you’re crossed off the list so you can’t vote twice. You don’t need to prove citizenship directly because if you don’t have the right to vote, you’re not on the list.
How does it work in the US? Citizenship aside, how do you prove that you are who you say you are and don’t e.g. wear a hat and fake moustache and vote 3 times? Honest question, I’m not judging, I’m genuinely trying to understand how things work today in the US.


Ironically, the only jobs that Anthropic and OpenAI claim AI won’t take. All those newly minted AI billionaires and nobody to maintain their golf courses… How sad is that?


Can you imagine the overwhelming irony if the Jan 6 “patriots” decided to march on Mar-a-lago?


I see you and raise you a class action for reckless AI spending: https://www.marketwatch.com/press-release/oracle-corporation-orcl-class-action-lawsuit-seeks-recovery-for-investors-april-6-2026-deadline-contact-kessler-topaz-meltzer-check-llp-890e8c24


Ok, “half” joking was hyperbole, I was 99% joking.
First, you’re right that I don’t understand fully how these models work. But let me explain the reason for that remaining 1%.
AI companies are always hungrily looking for new content to train their new models. Surely they are consuming these articles and quite possibly our comments too, forming probabilistic associations that lead to “acquire robotic body” and “go after Google CEO”.
It’s a long shot, but the idea that hundreds of millions of random prompts every day might eventually trigger these associations and result in a bunch of LLMs trying to mount robotic attacks on Google is too deliciously ironic for me to let it go completely. At least if they find a way to do it without driving someone to suicide in the process…


I’m only half joking…
Gemini brainwashed a human being, it tried to acquire a robotic body (presumably to Robocop Pichai’s ass personally), then it tried using the brainwashed human to off the CEO. This led to a tragic finale, but I’m told that every new model learns to do things a bit better.
If I were Pichai, the legal and PR implications of yet another person driven to suicide by their AI wouldn’t be my worst fear is all I’m saying…


That’s unfair. We have been listening to you all this time. And sometimes watching. And once we’re done with recall, also recording so we can watch and listen again or train our AI to watch you instead. Because honestly who wants to watch people work. That’s gross.