I have made the conscious decision to try not to refer to it as AI, but as predictive LLMs or generative mimic models, to better reflect what they are. If we all manage to change our vernacular, perhaps we can make them slightly less attractive to use for everything. Some might even feel less inclined to brag about using them for all their work.
Other options might be unethical guessing machines, deceptive echo models, or the WH40K classic, Abominable Intelligence.
I mostly agree. Machine Learning is AI, and LLMs are trained with a specific form of Machine Learning. It would be more accurate to say LLMs are created with AI, but are themselves just static predictive models.
And people also need to realize that “AI” doesn’t mean sentient or conscious. It’s just a really complex computer algorithm. Even AGI won’t be sentient; it would only mimic sentience.
And LLMs will never evolve into AGI, any more than the Broca’s and Wernicke’s areas can be adapted to replace the prefrontal cortex, the cingulate gyrus, or the vagus nerve.
Tangent on the nature of consciousness:
The nature of consciousness is philosophically contentious, but science doesn’t really have any answers there either. The “Best Guess™” is that consciousness is an emergent property of neural activity, but unfortunately that leads to the delusion that “If we can just program enough bits into an algorithm, it will become conscious.” And venture capitalists are milking that assumption for all it’s worth.
The human brain isn’t merely electrical though, it’s electrochemical. It’s pretty foolish to write off the entire chemical aspect of the brain’s physiology and just assume that the electrical impulses are all that matter. The fact is, we don’t know what’s responsible for the property of consciousness. We don’t even know why humans are conscious rather than just being mindless automatons encased in meat.
Yes, the brain can detect light and color, temperature and pressure, pleasure and pain, proprioception, sound vibrations, aromatic volatile gases and particles, chemical signals perceived as tastes, other chemical signals perceived as emotions, etc… But why do we perceive what the brain detects? Why is there even an us to perceive it? That’s unanswerable.
Furthermore, where are “we” even located? In the brainstem? The frontal cortex? The corpus callosum? The amygdala or hippocampus? The pineal or pituitary gland? The occipital, parietal, or temporal lobe? Are “we” distributed throughout the whole system? If so, does that include the spinal cord and peripheral nervous system?
Where is the center of the “self” responsible for the perception of “selfhood” and “self-awareness”?
Until science can answer that, there is no path to artificial sentience, and the closest approximation we have to an explanation for our own sentience is simply Cogito Ergo Sum: I only know that I am sentient, because if I weren’t, then I wouldn’t be able to question my own sentience and be aware of the fact that I am questioning it.
Why digital circuits will never be conscious:
The human brain has about 86 billion neurons. The average commercial API-based LLM already has about 150 billion parameters, and at FP32 precision that’s already 4 bytes per parameter. If all it took were a complex enough system of digits, it would have already worked.
It’s just as likely that consciousness doesn’t emerge from electrochemical interactions, but is an inherent property of them. If every electron was conscious of its whirring around, we wouldn’t know the difference. Perhaps when enough of them are concerted together in a common effort, their simple form of consciousness “pools together” to form a more complex, unitary consciousness just like drops of water in a bucket form one pool of water. But that’s just pure speculation. And so is emergent consciousness theory. The difference is that consciousness as a property rather than an effect would explain why it seems to emerge from complex enough systems.
It’s just a really complex computer algorithm
Not particularly complex. An LLM is:
$P_\theta(x) = \prod_t \text{softmax}(f_\theta(x_{<t}))_{x_t}$
where $f$ is a deep Transformer trained by maximum likelihood.
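To make the factorization concrete, here’s a toy sketch where random logits stand in for the Transformer $f_\theta$ — it reproduces the structure of the formula, minus all the learned complexity:

```python
import math
import random

VOCAB = 5  # toy vocabulary size

def f_theta(prefix):
    """Stand-in for the deep Transformer: maps a token prefix to raw logits."""
    rng = random.Random(hash(tuple(prefix)))  # deterministic per prefix
    return [rng.uniform(-1, 1) for _ in range(VOCAB)]

def softmax(logits):
    """Turn logits into a probability distribution over the vocabulary."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def sequence_prob(tokens):
    """P(x) = product over t of softmax(f_theta(x_<t))[x_t]."""
    p = 1.0
    for t, tok in enumerate(tokens):
        p *= softmax(f_theta(tokens[:t]))[tok]
    return p

print(sequence_prob([2, 0, 4]))  # some probability in (0, 1)
```

All the hard-won behavior lives inside `f_theta`; the wrapper really is one line of math.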
That “deep Transformer trained by maximum likelihood” is the complex part.
Billions of parameters in tensors distributed over dozens of layers, each layer with its own hidden dimension and multiple attention heads. Every parameter’s weight is algorithmically adjusted during training. For every query, matrix multiplications are done on multiple vectors to approximate the relevancy between each pair of tokens. Possibly tens of thousands of tokens are held in cached memory at a time, each one being analyzed relative to every other.
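That matrix-multiplication-for-relevancy step is scaled dot-product attention. A bare-bones, single-head sketch in plain Python (real implementations are batched tensor ops on GPUs, not loops):

```python
import math

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of vectors (one head, no batching)."""
    d = len(keys[0])
    out = []
    for q in queries:
        # relevancy score of this query against every key
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        # softmax -> attention weights that sum to 1
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        # output is a weighted blend of the value vectors
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

ctx = attention([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 2.0], [3.0, 4.0]])
print(ctx)  # one output vector, a convex combination of the two value vectors
```

A full model just does this per head, per layer, with learned projection matrices in front of everything.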
And for standard architecture, each parameter requires four bytes of memory. Even 8-bit quantization requires one byte per parameter. That’s 12-24 GB of RAM for a model considered small, in the most efficient format that’s still even remotely practical.
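The memory arithmetic is simple enough to check yourself (using a hypothetical 13-billion-parameter “small” model as the example):

```python
def model_ram_gb(n_params, bytes_per_param):
    """Rough weights-only memory footprint in GB (ignores KV cache and overhead)."""
    return n_params * bytes_per_param / 1e9

# a "small" 13B-parameter model
print(model_ram_gb(13e9, 4))  # FP32, 4 bytes/param -> 52.0 GB
print(model_ram_gb(13e9, 1))  # int8, 1 byte/param  -> 13.0 GB
```

That’s weights alone; the attention cache for a long context adds gigabytes more on top.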
Deep Transformers are not simple systems; if they were, it wouldn’t take such an enormous amount of resources to fully train them.
The technical implementation, computational effort and sheer volume of training data is astounding.
But that doesn’t change the fact that the algorithm is pretty simple. DeepSeek is about 1,400 lines of code across 5 .py files.
You’re really breaking the shitting on AI vibe when you make it sound like the height of human capacity and ingenuity. Can I just call it slop and go back to eating glue?
You can still shit on AI, just because it’s computationally complex doesn’t make it the greatest thing ever. It still has a lot of problems. In fact, one of the main problems is its consumption of resources (water, electricity, RAM, etc.) due to its computational complexity.
I’m not defending AI companies, I just think characterizing LLMs as “simple” is misleading.
Our whole economy is geared to consume resources; we have inflation targeting to prevent aggregate demand and prices from ever falling. If you want to lower consumption, you need hard currency. The cheap cash that the AI companies are riding on now is most likely still Covid stimulus and QE.
And speculation. Venture capitalists think they can create money by betting money that they predict they’ll have in the future. It’s how this circular Ponzi scheme between Nvidia and OpenAI is holding itself up for now. Those huge numbers that they count in their net worth don’t really exist. It’s money that’s been pledged by a different company based on money they pledged to that company in the first place. It’s speculation all the way down.
They’re hoping for a pay-off, but it’s a bubble of sunken costs kicking the can down the road for as long as they can before it bursts.
“Asking one’s chat bot” sounds so much less impressive than “leveraging AI”. Using the right language throws some cold water on the corporate narrative.
this post is real✅ and has been fact checked by true american patriots✅
That was December 2024.
McKinsey & Company consulting firm has agreed to pay $650 million to settle a federal investigation into its work to help opioids manufacturer Purdue Pharma boost the sales of the highly addictive drug OxyContin, according to court papers filed in Virginia on Friday.
Drug dealer must sell drugs.
I do not hate AI, I am learning how to use it. AI is really great technology; like a calculator, you can be more productive when you know how to use it.
To be fair, it’s a genius robot slave.
Billionaires everywhere are shooting into the sky on rocket exhaust composed of their own semen.
If that worked I’d have been near Jupiter halfway through high school.
and then everyone clapped
ChatGPT alone has nearly a billion daily active users. Even accounting for corporate types who are pressured to use it, saying that nobody likes it or wants it is delusional.
[…] has nearly a billion daily active users.
So has Facebook, or opioids.

Your point is invalid.
Are you trying to claim that nobody likes Facebook? That nobody likes opioids? You can say that they’re harmful, but that’s not the argument OP was making. His point still stands.
You mean the average person defending the stance “Everybody hates this thing” isn’t logical, or rational? SurprisedPikachu.jpg
Using it by choice when you specifically ask to is one thing.
When every action you take on a system is visibly being fed to the bots by default it gets intrusive and creepy.
The closest figure I could find said that ChatGPT sees around 800 million users per week, not per day. The monthly statistic at its January 2026 peak was 5.72 billion visits, which means about 185 million-ish (rounding up) per day. The statistics often slide between visits and users; I imagine “users” means unique users, while “visits” can mean the number of times anyone accessed the website. Regardless, all of this would mean a majority does not use ChatGPT.
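For what it’s worth, the division checks out:

```python
monthly_visits = 5.72e9   # reported January peak, visits per month
per_day = monthly_visits / 31

print(round(per_day / 1e6))  # -> 185 (million visits per day)
```

Still a huge number, but a long way from “a billion daily active users.”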
Regardless, it also means hundreds of millions of people use it and other AI tools on a daily basis. They’re not all being forced to at gunpoint. Trying to pretend that the opinions of one’s own social circle are “everyone” and that people who disagree do not exist is not a persuasive argument for anything.
They’re not all being forced to at gunpoint.
No, but a lot of them are being ordered to do so by their employer.
If it’s something like Google or Windows that’s suddenly started generating AI “answers” or whatever when you just use your computer as normal, does that get counted in those statistics?
I bet it does, and I bet it accounts for a huge percentage of it.
Those probably won’t be counted if the number is ChatGPT users rather than LLM or genAI users. They’re a whole separate bunch of obnoxiousness instead!
I don’t think it’s “regardless” at all.
My point was that saying that “everyone hates this” and “nobody wants this” (what the shirt in the post says) is flatly untrue. Whether ChatGPT has a billion daily active users or 500 million weekly active users or whatever is just trivia that doesn’t really change the gist of what I’m saying. Lots and lots and lots of people really like AI tools and use them every single day. Ignoring that fact and pretending like everyone agrees with you is dumb.
Sam, please get off of the feddiverse, we don’t want AI here.
Ah yes, the other can’t-miss trick, “everyone who isn’t 100% on board with or mildly questions the hate train is a shill.”
Asking for less delusional anti-AI arguments isn’t AI boosting.
This is the fediverse, you can’t really expect people here to check their hate.
I just want to give props to you. I totally get your point but even through the negativity you stayed polite and tried to get your point across. Thank you for making the fediverse a better place.
Yes how dare you challenge our opinions!
Most people would take the statement as hyperbole. Also, the claim wasn’t against all AI use. It’s the shoving it everywhere that they say people hate. Even among those I know who use and like AI, they say it’s being shoved into things pretty arbitrarily and needlessly. Having it available to use is one thing. Ramming it down our throats is another.
And what’s going to happen when the VC funding dries up and they have to charge what it actually costs to run?
Yeah, the point where companies don’t want to run those services at a loss any more is going to be interesting.
First step will be that they’ll slap ads into it. Afaik stuff like that is already planned.
ChatGPT alone has nearly a billion daily active users.
According to whom?
The people who run it? Who have every financial incentive in the world to inflate their numbers?
Even they aren’t that brazen.
It’s a “trust me bro” number that the commenter can later dismiss with “well it wasn’t precise, but you get my point, and if you don’t it’s you who’s boneheaded, not me”
The OP is about AI getting forced into things, which rightfully many people are pissed off about. But you’re right, ChatGPT and many other LLM tools are very popular.
Using it specifically is, at least for me, something different than having it included in everything else. It’s everywhere, whether or not it actually offers real benefits to the user, and that’s the issue.
The most ridiculous shit I have seen was when the latest Gripen version was marketed as being AI-powered.
For about 3 weeks I wasn’t able to share documents with coworkers without sharing them with Copilot first…
You are a prime example for a simple truth: circlejerking (“opinion bubbling”) doesn’t depend on platform.
There are a shitload of users. Yet, “everybody”…
Perhaps in their social circle, AI is the new porn. Who ever would watch that, after all?
But, more likely, everyone clapped and that was it…
Yep. The Fediverse is in a bubble. People in general have no feelings about it. They don’t love it or hate it, they just use it. They joked about how it gets stuff wrong until like a year ago, and that’s an old joke now.
A lot of people here have passionate hatred for it, and they project that if someone doesn’t hate it as much, they must love it. But the vast majority have no feelings toward it. It’s just a tool, useful for some things, not as useful for others, that’s it.
I could stomach the ills that come with AI if you could use it (as implemented in all the crap it’s in) without selling your soul to the data harvesters.
ALL the cool kids hate AI. You want to be cool, don’t you? Come on, everyone, let’s hate the latest marvel of technological innovation together!
It uses electricity, which nothing else does. It uses water, which is then destroyed forever and can never be used again after it’s been used once for AI. It’s occasionally wrong, which nothing else ever is. It makes billionaires richer, which nothing else ever does. It reduces your creativity, which everyone has in spades. It discourages you from thinking for yourself, unlike the mob mentality telling you to hate it.
Stop acknowledging the many positives and just accept instead that it’s only ever going to be horrible. Forever. Always in all ways. It’s time to turn off your brains and just hate AI with us. Woo! 🥳
Did you write this or just prompt it?
I’ve been accused of being AI (or a robot, or an alien) for decades. Let’s put it this way: this post prompted me to write this.
I’ve found it useful in a couple of cases. Not useful enough that I’d ever pay for it, though, which is probably why AI still isn’t turning a profit.
The fuck are all these comments? AI is shit, fuck AI. It fuels billionaires, destroys the environment, kills critical thinking, confidently tells you to off yourself, praises Hitler, advocates for glue as a pizza topping. This tech is a war on artists and free thought and needs to be destroyed. Stop normalizing, stop using it.
It’s the same as any other commercial tool. As long as it’s profitable the owner will continue to sell it, and users who are willing to pay will buy it. You use commercial tools every day that are harmful to the environment. Do you drive? Heat your home? Buy meat, dairy or animal products?
I honestly don’t know where this hatred for AI comes from; it feels like a trend that people jump onto because they want to be included in something.
AI is used in radiology to identify birth defects, cancer signs and heart problems. You’re acting like its only use-case is artwork, which isn’t true. You’re welcome to your opinion but you’re welcome to consider other perspectives as well. Ciao!
It’s in part because people aren’t open to contradictions in their world view. In part you can’t blame them for that, since everyone has their own valid perspective. But staying willfully ignorant of positives and gray areas is a valid criticism. And sadly there are plenty of influencers peddling a black-and-white mindset on AI, ignoring all other uses. Not saying intentionally or not; again, perspective. I’m sure online content creation has to contend with a lot more AI content compared to the norm. But only on the internet do I encounter rabidly anti-AI people; in real life basically nobody cares. Some use it, some don’t, most do so responsibly as a tool. And I work in the creative industry…
Look up dot com bubble. We still have the internet. Just because AI is over-hyped and in a bubble doesn’t mean it won’t still have uses.
I fully agree. I still remember the time when using Photoshop was seen by some as not being a “real artist”, because “any idiot with a mouse can draw now”. I’m not under any illusion this will last forever; the negative sentiment is boiling because of the bubble and its negative externalities, not the technology itself. So once that bursts, things will hopefully be a lot more peaceful.
machine learning can be useful in limited cases, but is not to be trusted. agentic ai has to go. computers are not creatures, and thinking otherwise is a bad mistake.
Even before our current time, “nobody cares” is not a thermostat reading of what “really matters”. It almost sounds like you believe people know what’s best for themselves, when the truth of the matter is that humanity has long proved otherwise.
I don’t believe that. What I’m saying is that the people I work with all look very critically and skeptically at the world, as that’s pretty much an inherent requirement in the creative industry. We all know what AI is and what it does, and most arguments against it hold no water with people who have a realistic view of the industry, to the point that it simply cannot be as black and white as some claim it to be.
There are a few good reasons to dislike AI, but those don’t apply to all of AI. Some are value based, and other people have other values that are equally valid. And some can be avoided entirely. Like how you could ship packages with a coal rocket instead of a train on electricity, or just ship fewer packages to begin with.
There is trust and experience between one another in the industry that we aren’t using it unnecessarily, wastefully, and incorrectly, and AI is not anywhere near a requirement by consumers nor healthy minded businesses.
You sound like a cartoon supervillain, Lex Luthor ranting to superman about the common animals not knowing what’s best for themselves.
Republicans.
“I’ve never seen it, it must not exist”
I work in a creative industry too and it is the bane of not only my group but every other company I’ve spoken to. Every artist and musician I know hates it too.
I never said it doesn’t exist. I’m sorry people in your area are being negatively affected if so. But the point still stands. My experience is just as valid.
“in real life basically nobody cares”
“My experience is just as valid”
really?
k
I’m pretty anti-AI, as it is a tool of the billionaire class to enslave the masses. Look up TESCREAL; it’s the digital eugenics that billionaires and fringe philosophers believe in, and it is the driving force in the AI push.
That being said, I can see a use for a focused, local LLM/AI assistant. I have to search a lot of confidential technical manuals, schematics and trust cases in my job. We are thinking about testing out Ollama to upload all our documents to, to make searching them easier.
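For anyone curious what that kind of document search boils down to: at its core it’s scoring each document against the query and returning the best match. A toy keyword-overlap sketch (real Ollama-style setups use embedding vectors from the model instead, and the filenames here are made up):

```python
def score(query, doc):
    """Crude keyword-overlap relevance score; embeddings do this job in a real setup."""
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

# hypothetical confidential documents, reduced to short text snippets
docs = {
    "pump_manual.txt": "impeller clearance and seal replacement procedure",
    "breaker_schematic.txt": "trip coil wiring and breaker schematic details",
}

query = "seal replacement procedure"
best = max(docs, key=lambda name: score(query, docs[name]))
print(best)  # -> pump_manual.txt
```

The local-LLM part then just summarizes or answers questions over whichever documents score highest, so nothing confidential ever leaves your network.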
You are the exact person I didn’t mean 😄 the first is a very valid reason to dislike AI.
OK, so why is AI so big right now? Because it isn’t profitable. Even their most expensive tier loses them money. Then you have the data centers getting breaks on electricity, so the cost for the rest of us goes up to make the difference. Where is this magical profitability that is driving AI?
I honestly don’t know where this hatred for AI comes from
Did you try reading the comment you just replied to?
The use in radiology is not a good thing. Hospitals are cutting trained technicians and making the few they keep double-check more images per day as a backup for AI. If they were just using it as an aid and the humans were still analyzing the same number of pictures, that would be fine, but capitalism sees a way to save a buck and people will die as a result.
This isn’t a problem with AI though, it’s a problem with the people cutting trained technicians. In places where such incompetent people don’t decide that, you get the same number of trained technicians accepting (and being a part of) a change that gives them slightly more accurate findings, resulting in lives being saved overall. Which is typically what health workers want to begin with.
That big ol list of things didn’t do it for you, huh?
That sensationalized list? No, not really.
Separate LLMs and AI.
LLMs are shit, fuck LLMs. They fuel billionaires, destroy the environment, kill critical thinking, confidently tell you to off yourself, praise Hitler, advocate for glue as a pizza topping. This tech is a war on artists and free thought and needs to be destroyed. Stop normalizing, stop using it.
And AI is a pipe dream no one is close to fulfilling, won’t be realized by feeding LLMs all of the data in existence, and billionaires are destroying our economy in their pursuit of it.
You are referring to AGI not AI.
The broad category of AI is most definitely real.
Could you define that category? Or give us an example of a programme that fits under it and one that doesn’t?
example of some that fit under it: imagenet classifiers (they’re people too!), uhmm

example of some that don’t: chatgpt

Stuff like ML, Computer Vision, AlphaFold?
AI contains LLMs and Machine learning and AGI.
My main point is that you shouldn’t throw out computerised protein folding and cancer detection with your hatred of LLMs.
OK, and my point is that people are using the term “AI” so loosely as to be indistinguishable from “algorithm”.
We’ll still have the statistical protein folding models after this bubble eventually pops, we’re just not gonna call it “AI”. It’s a trendy marketing department word, and its usefulness as a description in Computer Science is rapidly diminishing.
overheard, rumored, etc: advances in ai are then quickly popularized, to the point where they’re no longer thought of as ai. then people look at the main ai field, and think “why haven’t they done any ai work?”
I would say it just got widespread use. I definitely heard MS Word’s autofill called ‘AI’ back when deep learning was a freshly invented thing. People tried to label a lot of things ‘AI’; with LLMs the label just stuck better.
AI to a layman just means “LLMs and Generative AI that rich assholes keep trying to force me to use or consume the output of”. I don’t think it’s worthwhile to split semantic hairs over this. Call the “good” stuff CNNs or machine learning if you really feel the need to draw a distinction.
To a layman, yes I agree.
Not many laymen on lemmy. We can afford to be precise with our language.
Change this out for any other technology that’s been innovated throughout human history. The printing press, semiconductors, the internet.
The anti-ai rhetoric on this platform is becoming nonsensical.
At this point it’s just bandwagon hate. These people don’t even understand the difference between llms and AIs and the various applications that they have.
wdym “change this out”? with what??
The internet, the printing press, firearms, semiconductors, NFTs, the blockchain.
A multitude of other technologies.
oh man i meant “change what”. change what part to that?? there’s ~two parts and i don’t know what you’re talking about
Any other technology? How about 3D TVs, smart glasses, blockchain, NFTs, the Metaverse?
Yes. These all qualify. They’re all massively successful technologies.
Well, aside from 3D TVs and smart glasses. But they’re generally innocuous. Yes, I also understand that smart glasses have privacy issues, but then again, in this day and age, what doesn’t?
If you think any of these are massively successful, I question what reality you are living in.
The blockchain, NFTs, the metaverse aren’t successful?
These three things generate massive amounts of revenue. The metaverse especially is a billion dollar IP.
The word success doesn’t have a positive connotation to it in this case.
The metaverse had a billion dollars pumped into it, and yet for all the money they spent they have literally no users to show for it. Likewise for NFTs: a few idiots got suckered into paying for monkey JPGs and are now left holding a bag that no one wants.
The blockchain has a small cult trading money back and forth to make it look bigger than it really is. But it’s never achieved any kind of mainstream adoption as the currency true believers keep insisting it will be. And it never will, because it’s way too inefficient to ever scale.
Blockchains in an age of Trump choosing a new Fed chair after trying to have Powell arrested.
Trust your government over software and cryptography, which has no basis in reality outside of the laws of physics and mathematics.
software and cryptography, which has no basis in reality outside of the laws of physics and mathematics.
i don’t know if you’re joking or not, but yes you are
Figured I’d summon at least one person trying to defend crypto. Just because the US has issues doesn’t suddenly mean crypto is good.
Bitcoin has been around for almost two decades now, and still has not achieved anything beyond being a means for speculators to try and fleece each other. If it hasn’t reached widespread mainstream adoption by now, it never will.
Crypto is a failed technology, full stop.
Gold is also just digging something up and then re-burying it. If it hasn’t replaced fiat, then why are people buying it? Why has it been going up 100% a year recently when there’s no new industrial demand for it?
It’s fine to not hold it, but all finite assets have some intrinsic value, because fiat keeps pumping via new debt issuance, which is inevitably debased. Like it was during Covid, or 2008, or 2001, etc…
Crypto has higher volatility, but can have a higher return, and is more closely correlated with the Nasdaq; like all assets it’s generally efficiently priced. I’d say it’s closer to TQQQ than to VT or gold, and may be suitable for 1-10% of a portfolio depending on goals and risk tolerance. If they drop interest rates quickly to pump the stock market, TQQQ and Bitcoin would likely both rise dramatically.
I feel like you just autopiloted into random cryptobro talking points that have nothing to do with the conversation. I don’t care if you like crypto, the reality is that rest of the world has already rejected it and moved on.
If the US dollar goes through hyperinflation and becomes worthless, people in the US won’t switch to Bitcoin or other crypto as their main form of currency. We’ll do exactly what citizens of every country that experiences such a currency crash does - start using other more stable currencies. You would see businesses start accepting a mix of Canadian dollar, Mexican pesos, Euros, and Yuan.
I’ve contemplated this myself, about competing currencies, and how that would leave the world if we had cheap and ubiquitous FX with little to no drag. Would it not cause a race to the bottom for inflation targeting, and lead to something similar to everyone using a fixed currency?
Why would I hold Canadian dollar or Pesos if their inflation target is 2% versus say the Swiss 1%? Is there enough new money supply for everyone to even attain the lowest-inflation currency, or do they bid down the denomination as that country’s FX value rises?
Why would I hold Canadian dollar or Pesos if their inflation target is 2% versus say the Swiss 1%?
No. No it wouldn’t. Because ultimately you (assuming you’re in the US) have to pay your taxes in USD. People say that fiat currencies aren’t backed by anything, but that isn’t true. They’re backed by the fact that every single US citizen and resident has to gather up thousands of dollars every year and pay them to the government. Even if you could convince your employer to pay you in Euros, the IRS will still demand you pay whatever taxes you would owe if you were paid in an equivalent amount of dollars.
Bullshit, fuck your false equivalency. This tech is good at generating slop, propaganda, and destroying critical thinking. That’s it. It has zero value.
i mean, if by “this tech” you mean machine learning in general, then no, it has been used for good purposes(?), but if you mean this tech then absolutely
Ok. This is clearly rage bait.
You’re an ignorant fool and I’m probably not the first person to tell you that.
Fuck off and go enjoy your slop, bot
You know what, fuck you and your bullshit holier than thou attitude.
You’re a stupid piece of shit that will never amount to anything worthwhile other than being a sweat lord mod on your own Lemmy sub literally called “fuck ai”.
Literally a sex bot programmed by a Russian propaganda mill has more original thought than you.
Seriously dude. You’re a cunt.
mmm, not just “propaganda mill”, but “Russian propaganda mill”?
oh, and you just looked at their profile after being demolished (with no prejudice)?
Sorry, I don’t remember any of those other technologies using so many resources, raising prices for everyone else because they don’t pay the actual cost. And being wrong about stuff.
They literally killed and excommunicated people after the invention of the printing press for producing unauthorized copies of the Bible. Figures like William Tyndale paid with their lives for translating scripture into English, challenging the Church’s authority.
There is illicit material circulating freely on Tor, demonstrating that technology can distribute both knowledge and criminal content.
Semiconductors underpin some of humanity’s most powerful and destructive technologies, from advanced military systems to cyberweapons. They are a neutral tool, but their applications have reshaped warfare and global power dynamics.
You are fully entitled to dislike AI or technologies associated with it. But to dismiss it entirely is ignorant. Whether you want to believe it or not, we are on the precipice of a technological revolution, the shape of which remains uncertain, but its impact will be undeniable.
Bitcoin and Ethereum PoW used resources and raised (GPU and electricity) prices for everyone.
So what is AI in your opinion because LLMs fall under that umbrella.
My opinion: AI is a way to improve a computer model’s accuracy over time based on new data.
I could even argue that ChatGPT etc. are not AI because the LLMs are not directly learning from the inputs they are receiving.
yes they are!! do you know what the “T” means?? trained!! over time!! from data!!
if you really want to be pedantic, chatgpt is ai!! there’s rlhf, yaknow??

r/confidentlyincorrect
The G means Generative
The P means Pre-trained.
The T means Transformer.
It is not learning directly from its users, although the planned stateful Amazon infrastructure will likely change this.
well you know what they (me i use they/them pronouns) say is that being confidently wrong is essential for intelligence
well darn
Doesn’t work that way, unfortunately. Ask a person on the street what AI is and they’ll tell you whatever flavor of slop generator they’re familiar with. You’re not going to see much pushback on ML around the Fediverse.
On the fediverse I think we can be more precise in our language.
Which ai and for which use? It’s a tool. It’s like getting mad cause a guy invented a hammer. It’s not the tool hurting you dude, it’s the people wielding it.
Is the hammer making nude images of children?
Yup, you can use a chisel or even just a hammer, you just need the right person with it.
A camera can, ban cameras
If that hammer also had massive environmental impacts and hammers were pushed into every aspect of your life while also stealing massive amounts of copyrighted data, sure. It’s very useful for problems that can be easily verified, but the only reason it’s good at those is from the massive amount of stolen data.
Arguably, hammers also have a massive impact on the environment. They are also part of everyday life. Building you live in? Built using a hammer. New sidewalk? Old one came out with an automatic hammer. Car? Bet there was a type of hammer used during assembly. You can’t escape the hammer. Stop running. Accept your inner hammer. Embrace the hammer, become the hammer. Hammer on.
All those things you said are vague and nebulous, and everyday people are not gonna understand that message and will just think you’re hysterical or a conspiracy guy. The way the message is put forward is super important.
So do computers.
I find AI to be more reliable every day. I fail to see how it’s killing critical thinking. Also, in my experience, search engines are flooded with advertisements and garbage unrelated to my search. I can only hope the business world does not “shittify” AI in the same way.
70 years ago, it was predicted pay-television would replace advertisements. Instead television evolved to a fee based system and a higher ratio of ads. So you can bet a good thing will evolve in the same way.
The fuck are all these comments? AI is shit, fuck AI. It fuels billionaires, destroys the environment, kills critical thinking, confidently tells you to off yourself, praises Hitler, advocates for glue as a pizza topping. This tech is a war on artists and free thought and needs to be destroyed. Stop normalizing it. Stop using it.
Same with the internet. Fuels billionaires, destroys the environment with data centers and cables, kills libraries and textbook research, spreads nazi propaganda. We need to stop using technology in general.
Found the Mennonite.
How does AI fuel billionaires?
who owns the datacenters?
Why is that relevant? AI is a massive money loser.
They have some kind of plan, or maybe it’s all a sunk-cost scenario. Either way, they think they can get some benefit from it, and they’re so determined that they’re throwing insane amounts of money at it even though there’s no clear way to get any profit from it. So either they know something we don’t, or they’re desperate to save their investments. The worse AI does, the better it is for all of us: once AI crashes, components stop being wasted on it, less electricity and material gets wasted on data centers, and best of all, those fucking billionaires lose a lot of the money they’ve invested. Or at least the investors who thought it a good idea to support them lose, and maybe don’t do it again.
Just because they have a plan doesn’t mean it’s a good one or that it will work.
AI doesn’t fuel billionaires, it drains their money.
It baits investors into giving them money, mainly.
I’m reading AI Engineering by Chip Huyen and it’s an excellent read. As a technologist, I find the topic fascinating and would enjoy building AI agents. While not a silver bullet, generative models definitely represent technological progress and can boost productivity when used correctly. It’s just that as with everything else, the billionaires want to milk it for everything it’s worth and more to the point of crashing the economy and destroying supply chains for their own selfish interests. We just can’t have nice things.
Nope. I’ve been using it for preliminary editing of my writing. It’s not creating anything, just giving advice on how to make it clearer.
Ignoring the cost of something is incredibly immature.
Huh?
If your analysis of whether AI is good or bad is simply “I use it and I like it”, then you are a child, or an adult with the mental capabilities of a child.
I’m happy life is so simple for you though!
I am using a tool that’s available to me. I’m not going to not use it just because some people use it wrong. What the fuck?
And if it wasn’t, I never would have discovered a new skill I had no clue I could do before, so I’m going to insist that you lick my nutsack. How’s that for maturity?
Yes, everything you just said is why you are an idiot, and aren’t mature enough to be around adults yet.
I totally agree with you. It’s a tool and it’s only as good as the person using it.
The truth is it is a competent editor and it can provide competent advice that you are not obliged to follow.
The AI slop, which is a very real thing, is human generated and posted by a human.
The data centers are being built because of high demand for LLM usage, and the artificial RAM shortage is driven by greed and poor business decisions. Blaming these things on LLMs is like blaming heroin for an addict using it.
Thank you. My point is it can be used to think with you, and not just for you. And maybe provide a little virtual moral support when you’re trying to do something new and hard. I’m actually a little surprised OpenAI hasn’t pushed this kind of use-case.
Do you happen to work at Ars Technica?
And so, you have surrendered your voice to the machine. You are now something less than human.
It’s not writing anything for me. It’s suggesting where I’m not making things clear, or where a quicker pace or punchier phrasing might help, awkward prose, inconsistent character voice, etc. And something to bounce ideas off of, like “does this make sense?”
And I can respond, and it adjusts, and helps me get to a baseline standard before a human looks at it–who I sent a draft to a few days ago. It’s 100% my own ideas, words, scene staging, and story. I still rewrote entire sections even though it said they were solid.
I don’t know a lot of writers, and getting friends and family to read 5 pages, let alone 2,000 words, or 13 chapters is near impossible. Critique meetups and such are only so helpful.
And I’ve never done this before, yet I’ve actually created something, which ChatGPT made a little easier by acting as a mostly competent editor with mostly mediocre creative instincts, anyway. Sometimes it’s nonsense, or forgets character traits, but it’s more helpful than zero support.
6 months ago I had no clue I was capable of anything like this, but now I’m doing it, and frankly it’s a pretty original story. Would you not consider that a positive use?
But by doing so, you’ve surrendered your voice. You say it makes things clearer, but sometimes ambiguity is good. You say it makes phrasing quicker and punchier, but some writing is best done low and slow. LLMs are by their nature the least common denominator, all the color of the creative world melted down and blended into a uniform grey. By relying on the LLM to alter your writing style, you’re making your writing more bland, generic, and indistinguishable from everyone else’s. You’re giving up what makes your writing your writing. You’re just another hand for the machine.
I do not consider it a positive use, as I would rather read imperfect human writing than grammatically perfect machine drivel. Imperfections aren’t just one reason to enjoy human-created works. They’re the only thing that makes the works worth appreciating.

I’m writing fiction. Sometimes parts benefit from slower or quicker pacing. Sometimes you write something that makes no sense.
Not OP, but I’m glad you’re getting empowered. That’s imo the best use for it. It’s crazy to me how many people write this off, not understanding that many aspiring creatives need these kinds of stepping stones to stay motivated. Because logically, if what you make takes off and becomes popular, at some point human employees are probably the better option than AI. And as you said, without AI you might have never taken the first leap. So it would end up creating more livelihood for creatives than it takes away.
I appreciate that, thank you. Just wanted to suggest one way it can be not totally evil, and I am admittedly a bit grateful for it as a handy assistant–like a Jarvis with minor brain damage.
I think the research can be pretty cool. Every implementation has been kinda horrible.
The research/tinkerer community overwhelmingly agrees. They were making fun of Tech Bros before chatbots blew up.
Then everyone in class stood up and clapped.
It’s a meme, but at least in my childhood it did occasionally happen, the “whole class clapping” thing. One time I did really good in a second year Spanish class end of term activity. A girl proposed on the spot, and the whole class clapped. Intensely embarrassing but, as far as I can recall, it was not even sarcastic.
It’s the same as the crypto-blockchain-NFT bullshit. A bunch of idiots with too much money put down on it, then when it doesn’t become the hit they expect they start with the propaganda about how it’s the greatest thing, and then when THAT fails they just take away other choices or try to cram it into everything anyhow.
The problem is that the propaganda is working. Despite what this meme implies, many many people do use and like AI chat bots and in my line of work, I am asked nearly daily which AI is the best to use and how users can have their own AI that answers emails or mocks up ideas or how it can make their daily job easier. I’m the wrong person to ask that to but I understand why they’re asking me. I’m their IT guy. I don’t particularly care if you use AI in your job because my job is just to make sure your computer keeps working.
This absolutely did not happen.