You can take “justifiable” to mean whatever you feel it means in this context, e.g. morally, artistically, environmentally, etc.
Medicine.
Evidence shows that some highly specialised models are better at things like detecting breast cancer in scans than human doctors.
Properly anonymised automatic second scans by an AI, catching the markers that human doctors miss and flagging them for another review by a specialist, would be an excellent use case for this kind of specialised model.
Transcription services can save doctors huge amounts of admin time and let them focus on the patient, if they know there’s a reliable system in place for typing up consultation notes. As long as the output is treated as “please review that these notes are accurate” rather than as a gospel recording, the data is destroyed once its job is complete, and the patient has been able to give informed consent.
The way these things are being used in actual medical contexts right now is frankly terrifying.
Yeah, the sciences in general, I’d say. There’s a project aiming to translate the tens of thousands of cuneiform clay tablets that sit in storage because only a handful of people in the world can read them. AI is an amazing way to mass-translate them, unlocking vast troves of hitherto completely unknown ancient knowledge.
The problem is not even the AI, but the scientists themselves, who guard the tablets jealously because they don’t want anyone else to translate “their” tablets that they dug up, even though they couldn’t possibly make a dent in the sheer volume in their collective lifetimes.
Imagine, so much information encoded, from thousands of years ago that could reveal so much about the origins of our culture and civilization!
I had a colonoscopy last year (such fun!) and there was an ‘AI’ monitoring the camera feed to detect anomalies. If it spotted something it just drew the doctor’s attention to it for his expert, human review. I was ok with that. Effectively an extra pair of eyes that can look everywhere on the screen all at once and which never blink.
That’s how AI systems should be used. A “heads up, something weird here” system.
I could also see it being used well like this for patient history analysis. Often a doctor is treating one symptom of something larger; they can’t see the wood for the trees. An LLM could pick out oddities and flag them. The doctor can then filter out the mistakes and hallucinations, but be alerted to rare or unusual conditions that match the patient’s symptoms and history.
It speeds up my dev time dramatically. I know what I want to do, and I have an idea of how I want to do it. The LLM generates boilerplate code, which I review. I tweak it. I fix the bug. If there is something I don’t understand, I check other sources to verify the output. I test it. Then I’ll submit it for peer review once I’m happy with the code and the output.
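To give a flavour of what that looks like (a purely hypothetical sketch; the script and every name in it are made up for illustration, not from a real session):

```python
import argparse
import os

def build_parser() -> argparse.ArgumentParser:
    # Typical model-drafted boilerplate: nothing clever, just tedious to type.
    parser = argparse.ArgumentParser(description="Batch-resize images")
    parser.add_argument("paths", nargs="+", help="input image files")
    parser.add_argument("--width", type=int, default=800, help="target width in px")
    parser.add_argument("--out-dir", default="resized", help="output directory")
    return parser

def main() -> None:
    args = build_parser().parse_args()
    # The "I tweak it, I fix the bug" step: the first draft forgot to create
    # the output directory, so writes would fail on a fresh checkout.
    os.makedirs(args.out_dir, exist_ok=True)
    for path in args.paths:
        # Placeholder for the real resize call.
        print(f"would resize {path} to width {args.width} into {args.out_dir}/")

if __name__ == "__main__":
    main()
```

The division of labour is the point: the model types the tedious scaffolding, and the human review catches the one thing it got wrong.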
GenAI is a plagiarism machine. If you use it, you’re complicit.
Ethics aside, LLMs in particular tend to “hallucinate”. If you blindly trust their output, you’re a dumbass. I honestly feel bad for young people who should be studying but are instead relying on ChatGPT and the like.
Would an upscaler be considered generative? That’s really all I can think of, and even calling those generative is a bit of a stretch; it uses the basic idea of “generation” extremely loosely.
Oh, and helping find new chemical compounds for medicine and other medical research. Of all the things that might benefit from “throwing everything at a wall and seeing what sticks,” that’s the one genuinely good use for it.
No. I want to talk to a living machine mind, not a complexified chatbot controlled entirely by ultrarich techbro overlords.
For sure. You could absolutely create and train a model ethically. It wouldn’t be nearly as useful in many respects, but it would be gen AI. From an environmental perspective, I guess you could ask yourself the same thing about CPU-intensive gaming. People play games for hours, using similar, often more, electricity than a small locally run LLM.
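Rough numbers, purely illustrative (the wattages and durations here are my own guesses, not measurements):

```python
# Back-of-envelope comparison; every figure is an assumption, not a measurement.
gaming_gpu_watts = 350    # high-end GPU under sustained gaming load (guess)
gaming_hours = 2.0
gaming_kwh = gaming_gpu_watts * gaming_hours / 1000    # ~0.70 kWh

llm_gpu_watts = 250       # same GPU running a small local model (guess)
llm_active_hours = 0.5    # time actually spent generating in a long session
llm_kwh = llm_gpu_watts * llm_active_hours / 1000      # ~0.13 kWh

print(f"gaming session: {gaming_kwh:.2f} kWh, local LLM session: {llm_kwh:.2f} kWh")
```

On those guesses, an evening of gaming burns several times what a casual local-LLM session does; the comparison obviously ignores the one-off training cost of whatever model you downloaded.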
Scientific use on your own massive data sets (think hundreds of TB) - sure
Consumer chatbot uses - may give the illusion of positive results, whereas the long-term outcome is an overall negative effect on the user.
Give me back my Google search from 10 years ago and, alright, no need for AI.
Nowadays Google is so unusable that I actually go to Claude first if I need to research something.
My current list of reasons why you shouldn’t use generative AI/LLMs:
A) because of the environmental impacts and massive amount of water used to cool data centers https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
B) because of the negative impacts on the health and lives of people living near data centers https://www.bbc.com/news/articles/cy8gy7lv448o
C) because they’re plagiarism machines that are incapable of creating anything new and are often wrong https://knowledge.wharton.upenn.edu/article/does-ai-limit-our-creativity/ https://www.plagiarismtoday.com/2024/06/20/why-ai-has-a-plagiarism-problem/
D) because using them negatively affects artists and creatives and their ability to maintain their livelihoods https://www.sciencedirect.com/science/article/pii/S2713374523000316 https://www.insideradio.com/free/media-industry-continues-reshaping-workforce-in-2025-amid-digital-shift/article_403564f7-08ce-45a1-9366-a47923cd2c09.html
E) because people who use AI show significant cognitive impairments compared to people who don’t https://www.media.mit.edu/publications/your-brain-on-chatgpt/ https://time.com/7295195/ai-chatgpt-google-learning-school/
F) because using them might break your brain and drive you to psychosis https://theweek.com/tech/spiralism-ai-religion-cult-chatbot https://mental.jmir.org/2025/1/e85799
G) because Zelda Williams asked you not to https://www.bbc.com/news/articles/c0r0erqk18jo https://www.abc.net.au/news/2025-10-07/zelda-williams-calls-out-ai-video-of-late-father-robin-williams/105863964
H) because OpenAI is helping Trump bomb schools in Iran https://www.usatoday.com/story/opinion/columnist/2026/03/06/openai-pentagon-tech-surveillance-us-citizens/88983682007/
I) because RAM costs have skyrocketed because OpenAI has used money it doesn’t have to purchase RAM from Nvidia that currently doesn’t exist to stock data centers that also don’t currently exist, inconveniencing everyone for what amounts to speculative construction https://www.theverge.com/news/839353/pc-ram-shortage-pricing-spike-news
J) because Sam Altman says that his endgame is to rent knowledge back to you at a cost https://gizmodo.com/sam-altman-says-intelligence-will-be-a-utility-and-hes-just-the-man-to-collect-the-bills-2000732953
K) because some AI bro is going to totally ignore all of this and ask an LLM to write a rebuttal rather than read any of it.
i use it like a search engine or example generator
i don’t trust anything it creates, just like i don’t trust anything on the internet without validating it
i take your point about it being wasteful tho, AI is like the oil of computing: incredibly wasteful for what it does
It’s good you’re being cautious about it but it would be better to not use it at all. A recent Scientific American article showed that AI autofill suggestions change how people think about a subject just through suggestion, even if they don’t use the autofill. And people who use it are often unaware of their own knowledge gaps, so self-reporting about effectiveness is useless. Using it even a little bit is probably putting metaphorical micro-plastics in your brain.
https://www.scientificamerican.com/article/ai-autocomplete-doesnt-just-change-how-you-write-it-changes-how-you-think/ https://www.404media.co/microsoft-study-finds-ai-makes-human-cognition-atrophied-and-unprepared-3/
Protect your brain
Good list, but we should keep it real.
C is simply wrong; AIs have created a lot. By the reasoning that it’s only based on the inputs, no human has ever created anything “new” either, because it’s all based on their experiences of the outside world.
F is simply fearmongering and not helpful.
And the plagiarism part? There’s a difference between derivative work based on the spirit of someone else’s work and flat-out using someone else’s work. That’s the whole reason those laws exist.
Gish gallop of shite.
A) overblown, and in any case that argues for cleaner power, better cooling, and more efficient models
B) regulation failure
C) incorrect; they have made discoveries that humans had been unable to make. All human knowledge is built off previous knowledge.
D) the enemy is both weak and strong. If it doesn’t produce anything good, then the people losing their jobs can’t be losing them to it, right?
E) a small study based on one task, which people are misrepresenting. The actual evidence shows it makes people smarter as they shift priorities.
F) only for vulnerable people. Better safeguards are needed for the weak minded.
G) an argument against using people’s likenesses, not against AI
H) use an open source Chinese model
I) market distortion problem, not a principled reason no one should use the technology any more than GPU shortages made all graphics work illegitimate.
J) see (H)
K) try one argument next time. Your best one; maybe people would be more open to wasting time on it.
Why deleted? This was a good rebuttal.
Mods can’t handle the truth
Thanks for posting this. I’m really frustrated with how vulnerable people on Lemmy are to propaganda. The number of upvotes on the post you responded to is just embarrassing. The post is exactly the same kind of bullshit cherry-picking I see anti-trans people do.
Yes, post-truth slop always has this bitter aftertaste. Big-ass bullet list with talking points and links, and you know the pusher has been groomed with counter-objections, etc… the exact same methodology as the alt-right pipeline.
A gish gallop is a rhetorical strategy; this is a list on a website. I’m sorry you failed high school debate or whatever.
A) Nope, it’s accurate. I’ll provide some more sources for you to not read.
https://www.unep.org/news-and-stories/story/ai-has-environmental-problem-heres-what-world-can-do-about https://www.snhu.edu/about-us/newsroom/stem/ai-environmental-impact https://news.cornell.edu/stories/2025/11/roadmap-shows-environmental-impact-ai-data-center-boom
B) Yes, it is a regulation failure. We should be regulating these data centers out of existence in order to protect people from the noise and pollution.
C) LLMs can’t “make discoveries.” I think you’re trying to conflate humans and LLMs here, but your point is so muddled that I’m not sure what you’re actually trying to say. Humans are able to iterate on information and build on pre-existing foundations, LLMs produce a reasonably coherent block of text based on statistical averages of previous information. If you’re going to try to conflate the two, you’ve got to be ready for people to laugh in your face.
D) Ah yes, a highly skilled artist or craftsman getting replaced with a slop machine, because it’s cheaper despite having a visible sheen of cheapness to it, directly reflects on the value of the artist or craftsman. You are very intelligent.
E) Nope, you’re wrong. Long term use of LLMs does not make people smarter and does impair their cognitive abilities. I’ll provide some more sources for you to not read.
https://www.polytechnique-insights.com/en/columns/neuroscience/generative-ai-the-risk-of-cognitive-atrophy/ https://arxiv.org/abs/2506.08872 https://pubmed.ncbi.nlm.nih.gov/38996021/ https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2024.1259845/full
F) Oh, you’re just an asshole, got it. I can stop arguing with you from here because you don’t care about people, you’re just reflexively defending LLMs for some reason. “The weak minded.” Fucking eugenicist Bond villain type shit.
Try realigning your moral compass toward compassion for people, and maybe you won’t reflexively make a jackass out of yourself on the internet next time.
Yes, and your comment utilised that rhetorical technique. Gish gallop describes how arguments are structured and delivered, not the medium they appear in.
A) Nobody said the environmental costs were fake; the point is that “costly and harmful” does not by itself prove “nobody should use it”.
B) “Regulate harmful data centers” is a policy position, not an argument that every use of an LLM is unethical; if the problem is siting, noise, emissions, and water stress, the target is those failures, not the existence of the tool in every context.
C) AI has already contributed to genuine new findings, whether you want to admit it or not.
D) People will pay more for better products; if the work were substandard, there would be plenty of opportunity elsewhere in companies that position themselves as ‘slop-free’.
E) Your evidence does not justify “long-term use impairs cognitive abilities”: one widely cited paper is still an arXiv preprint on essay-writing with 54 participants, another is an opinion/reflection article, and one of the stronger experimental papers you cited actually found AI assistance increased individual creativity while reducing diversity across outputs. Check my replies, where I quoted several more rigorous studies and meta-analyses.
F) Calling someone a eugenicist does not fix the evidentiary gap; the defensible claim is that chatbots may worsen delusions or dependency in some users who are psychologically vulnerable to it and therefore need guardrails, not that ordinary use “breaks your brain” full stop.
Less smelling your own farts and more reading the actual evidence, and you might gain a clue.
Some good and valid input to the discussion.
I’d be interested in E) “the actual evidence”. Got a link?
Yes; as it happens, I had this discussion with someone the other week.
A peer-reviewed meta-analysis of 51 studies found that ChatGPT has a large positive effect on students’ learning performance, and moderate positive effects on learning perception and higher-order thinking skills (like analysis and synthesis) across educational contexts.
The Impact of Artificial Intelligence (AI) on Students’ Academic Development
Research published in the journal Education Sciences reports that AI in educational contexts can lead to personalized learning, improved academic outcomes, and increased engagement, with many students reporting enhanced learning efficiency.
Artificial intelligence in education: A systematic literature review
AI tools support problem-solving skills, collaboration, and instructional quality in meaningful ways.
That’s very interesting, thanks!
This seems about right. Anecdotally, I’ve never learned as much as I have since I started using AI. It’s crazy good at explaining stuff from exactly the angle you require, according to your level and learning style.
I’ve done some hardware hacking, built my own Linux distro for a project, got way better at administering my home server.
The most fun I’ve had is to try and locate the rights to an obscure science fiction short story for a podcast I want to make. This led me to contact a few editors, library archivists, and a couple of noted literature professors. Genuine fun and connections, with the AI helping me navigate mountains of information, the legal aspects and also the cultural differences between the US and UK publishing scenes.
All of this is just in the last few months; it would have taken me years pre-AI, or more realistically I would have given up before getting anywhere.
I appreciate all these links you post. Keep it up and thank you
Do you think local LLMs or community-hosted ones are still as bad? Because most of those concerns seem to be more about the corporate ownership of AI, which is definitely a bad thing.
Just my personal take, but my opinion basically boils down to “they can be.”
It’s all about how ethically they’re handled, and that can be good or bad at any scale. Take your very own instance, for example. Not that it’s hosting a local LLM (maybe they are, IDK), but the instance openly supports GenAI and has communities for all the major GenAI companies/models. GenAI without ethical sourcing - which none of these companies do - is one of the most blatant examples of a corporation using technology to steal the skilled labor of workers to avoid having to pay them what they’re owed for that skill. So your own instance is pro-corporatism, so long as they’re benefiting from stealing from workers. Not very anarchist, if you ask me.
On the other hand, there’s a website design company (which I believe partnered with Affinity a few years back) that was hiring artists to create UI pieces for a training set for its LLM, which it was going to use to create website templates for customers as part of its service (and I think it was also guaranteeing royalties for those who contributed?).
The instance is explicitly anti corporate ai. There’s !haidra@lemmy.dbzer0.com which db0 worked on. https://aihorde.net/ is probably the most ethical image generation service.
most ethical image generation service.
oxymoron
And yet, again, the instance has communities for every single big tech genAI model. That’s definitely not anti-corporate. Using those models both contributes to their shareholder value/profits and the theft of wages from workers.
And where do they get the training data for AI Horde? From scraping the web and all the freelance artists on there, like all of the big corporate models? Because then they’re just justifying exploitation of workers as benefiting everybody when what they really mean is benefiting themselves.
It’s like the argument pro ChatGPT airheads use constantly about how genAI “democratized” art. You know what “democratized” art and made it freely accessible to everybody? The pencil. It’s just making up excuses for wanting the product of skill without putting in the effort to learn the skill or pay appropriate compensation to somebody with the skill to give you the product that you want. It’s upper management thinking.
And this is why I say that it depends. Horde AI could be great - so long as the people whose work is being used to allow others access to skilled labor that they don’t want to do themselves are being properly compensated for their work. Otherwise, it’s no different from the corporations. Just because it’s free doesn’t mean that nobody is going hungry as a result of it. Unless it’s trained exclusively on products from big corporations. Those artists got paid when they did the work, so nobody gets hurt there except in the theoretical sense of freelance artists potentially losing customers down the line to “good enough and cheap” genAI from people with the above upper management mindset.
And yet, again, the instance has communities for every single big tech genAI model.
Where do you see that? As far as I can see, we only have comms for stable_diffusion, which is an open-weights local diffusion model. I couldn’t find any corporate comms like OpenAI or Copilot or whatever. If we did, I don’t know if I’d delete them, tbh, since they’re not explicitly against our CoC, but it would be something I’d be concerned about and raise with the instance if they got too “bootlicky”. But nevertheless, we do not have any at the moment.
And where do they get the training data for AI Horde?
The AI Horde is using open-weight models only. We don’t train them. We just use them once they’ve been trained.
PS: We are also anti-copyrights, so complaints based on copyright violations don’t fly with us.
You know what “democratized” art and made it freely accessible to everybody? The pencil.
I often see this vacuous argument, and it has never convinced me, tbh. It assumes everyone has enough time to train at making art, which most wage-slaves undoubtedly do not. It’s an inherently classist argument to assume everyone has the free time to master an artistic skill.
And this is why I say that it depends. Horde AI could be great - so long[…]
This is an argument against capitalism, not against GenAI itself. You’re arguing that because capitalism is bad and exploits workers, a tool that can also be used to further exploitation needs to be opposed. But we say it’s not the fault of the tool being used for exploitation, it’s the fault of the system allowing exploitation. I.e. If you remove the capitalist system, this argument against GenAI is moot. And we’re very much anti-capitalists in our instance. It’s a similar argument against piracy as well (and we’re also pro-piracy btw). I.e. sharing media is not a problem in a non-capitalist society, in fact it’s a positive. It’s only a negative due to capitalism.
Sorry it took so long to get back to this, as they say, “Life, uh, gets in the way.”
I had to go and check the AI communities I have blocked, because I could’ve sworn that I had multiple different corporate GenAI communities from DB0 blocked, but I stand corrected - I have only a handful of Stable Diffusion ones. Of course, I was also under the impression that Stable Diffusion was made by OpenAI or one of their competitors, so I blocked them instantly on that alone when I was largely blocking AI communities to clean up my homepage and avoid the kinds of people those communities usually attract. There’s a certain kind of person with a “corporate fat cat/middle manager” attitude that can plague GenAI communities and drives me crazy, because they think that generating an image takes as much skill and effort as (or even more than) creating one by hand.
That definitely does change my opinion on Stable Diffusion, but it still comes down to “it depends.” And as you so rightly put it, my problem is a capitalism issue, not a GenAI issue. My perspective is that not all of us are so lucky as to live in Ireland, which I believe has recently implemented a UBI specifically for artists, and so until capitalism is dealt with, any impacts of it take precedence - including those created as a consequence. Just because something is useful doesn’t mean we should be dumping it as fuel onto the fire of capitalism, because capitalism is what’s actually burning us. Local models using images sourced with permission from the artists are a great thing. People getting paid to make things specifically to be used for training - awesome! A win in my book. In a world where artists have a guaranteed roof over their heads and food in their bellies, I do not care at all whether their work is used to train AI. I bet artists can do some really cool stuff with GenAI as well - it’s basically a bigger, more advanced version of the same concept that makes the Gaussian Blur tool in Photoshop work.
This is why I’m also pro-piracy when it comes to corporations - you aren’t stealing from the workers; they got paid to make the thing, not when it gets sold - and why my opinion is “it depends.” I’m completely willing to change my opinion once something stops hurting workers and becomes nothing but a benefit, now that it’s out of the hands of the billionaires. There’s an interesting conversation to be had over the… I can’t think of a good word; ownership of identity, maybe? Ownership of characters created to represent yourself, at any rate (somebody coming along and saying “this is me” about a character you made as an avatar of yourself feels bad). And there’s a country in Europe that made an interesting choice in response to deepfakes, CSAM, and revenge porn created by AI by giving every citizen the copyright to their own face, body, and voice, but that’s a whole different conversation.
And this concept right here:
It assumes everyone has enough time to train on making art, which most wage-slaves undoubtedly do not. It’s an inherently classist argument to assume everyone has the free time to master any artistic skill.
Has a sense of capitalistic entitlement in it. You feel that you deserve the product of art, but you don’t respect the people who did put in the time and effort to learn how to make it enough to properly compensate them for the time they spent learning the profession. One, because they could have spent that time learning a different trade - programming, becoming an electrician or maybe an airplane mechanic or whatever - and two, because those who do art professionally almost universally talk about how they almost never have time to make art for themselves - stuff that they want to make just for them. And art (alongside the humanities) is a universally disrespected skill, with many commission-based artists working for below minimum wage. It’s like arguing that because you don’t have the time or money to make a car, you deserve to be able to freely take cars from people’s driveways and use them as a form of public transit. In an ideal world where the US isn’t a car-centric hellscape and the trams always arrive on time, we wouldn’t even need for everybody to have their own personal car! But we don’t live in that world, and hot-wiring somebody’s car to take for a joyride that makes them miss work isn’t cool. Just because I don’t have the genetics or the time to train to compete in the Olympics doesn’t grant me the right to free steroid injections.
And I use the word product up there very, very deliberately. Art is two things: the Product to be Consumed (and promptly discarded in this day and age of consumerism), which is what GenAI makes, and the Process, which is often what artists talk about as their favorite part of making art. But the end result - the Product - is just a small part of what Art is. Adam Savage said something along the lines of “I have no interest in AI art. One day, some college film student will do something amazing with AI - and Hollywood will milk it to death - but right now, I don’t see anything in AI that I care about. Because you don’t see anything of the artist in it, and that’s what I care about. Their intent, what they wanted to say with the piece, what they went through in making it and what they learned along the way, none of that exists in AI art.” I’m not religious, but as the saying goes: “God gave us grain but not bread so that we, too, could indulge in the joy of the act of creation.” Making something allows us to better understand ourselves and the world around us. It’s why people desire GenAI. To create something that only exists in their imagination. It’s why Art Therapy exists. One time I heard a college student reflect that “art is how artists process the world around us” and I absolutely agree. Van Gogh died a pauper, having barely sold any of his works in his lifetime, only to become one of the most beloved painters long after his death for his loneliness and pain that he expressed in his brushwork. One thing that is guaranteed to make me cry is that scene from Dr Who where the museum curator talks about why Van Gogh is his favorite artist while Vincent breaks down crying behind him.

One thing that people caught up in the GenAI arguments often miss is that artists (any worth listening to at least) aren’t gatekeeping art at all. Go watch a video on color theory, perspective, or additive and subtractive palettes. Artists love sharing information, and art is a conversation itself. I’m sure you can see it in the GenAI communities on your instance as well, people love to make things and be a part of a community with a shared passion. Artists don’t care if you aren’t an expert or anything, so I encourage anybody reading this to pick up a pencil, make something, and just share it with the world. I’ve talked to artists who say that their favorite commissioners are those who send them drawings to help interpret their vision - even if it’s just doodles of stick figures on a napkin or something. There used to be a tiny subreddit called r/Mona_Leslie, and it was one of my favorite places on Reddit because the whole idea of it was to professionally critique random people’s stuff as if it were in a museum gallery. People praising the brushstrokes of little kids’ fingerpaint art, the line work of stick figure drawings, whatever, it was just such a great vibe. In fact, I challenge anybody who uses GenAI regularly to take an image they generated and like, bring it into an image editor, create a new layer, and just start drawing over it. You can probably make it fit your original vision even more than the AI could with enough effort. Even if you just do a half hour a couple of times a week or something, what you learn simply from doing it will expand the horizons of your creativity.
TL;DR: You’re absolutely right that it’s a problem with capitalism, not with GenAI itself. But until such a time as capitalism no longer creates a problem from GenAI, I am firmly in the camp of putting a leash on what can and can’t be done with AI (largely on corporate AI) to minimize the harm as much as we can. Just because overfishing is a larger issue caused by capitalism doesn’t mean that we shouldn’t work on limiting the amount of microplastics that end up in the ocean - especially now that supposedly something like 5-10% of the fish we eat is plastic.
Has a sense of capitalistic entitlement in it. You feel that you deserve the product of art but don’t respect the people who do put in the time and effort learning how to make it enough to properly compensate them for the time that they spent learning the profession.
This is really not true at all. Me and others not having the time to learn to draw (and compose and direct and act and and and…) doesn’t mean we disrespect those who do. We just want to make something to enjoy for ourselves. And yes, those who don’t have the time, also (typically) don’t have the money. Again, it’s a classist argument to claim that everyone has either the time to learn, or the money to commission.
Likewise, it’s infuriating to see privileged takes of “oh just spend a few hours here and there”. Motherfucker, there’s people who do not have a few hours here and there. There’s people who work 2 jobs, who raise children alone, who are primary caregivers for others. They’re not taking anything from artists by generating an image they like in the 1 minute they have available.
I am of the opinion: let people enjoy things that bring them joy. I have no issue with GenAI if it’s for strictly non-commercial personal use, especially when it’s using open-weight local models that have already been trained. I do think that GenAI work should not be able to be monetized at all, but I don’t make the rules. But moralizing at random enthusiasts with “just learn to draw bruv” is never going to convince anyone or achieve anything. Convincing people not to support massive corpos, however, will.
Modern vocaloid is generative AI and I think making a song with Hatsune Miku is justifiable.
Vocaloid is a synthesizer, not AI.
Those aren’t mutually exclusive, Synth V uses diffusion models internally and I assume Vocaloid 6 does as well with its boasting of AI features and AI voice banks. It’s different artistically from other diffusion stuff like Suno or Midjourney, but it’s still generative AI.
Never. Almost everyone who uses it becomes kinda lazy themselves, and they always keep referring you to ChatGPT as the answer to your question.
I have used AI to make a few games for my kids, and a couple of apps that I wanted for my own wants/needs. In both cases it was very frustrating, and I can’t believe people who say the current state is ‘great’. It was barely functional and needed constant oversight - if I didn’t know at least basic C++/PHP/JavaScript and HTML/CSS, I don’t know if it would have been that useful.
It helped me to rapidly prototype, but it needed lots of work to keep it on track. I can see how agents go rogue and delete whole directory trees, etc.
As for the environmental costs: even without LLMs, I think we are fucked with what we have done up to now and what the US seems hell-bent on bringing down on us (the rest of the world).
I think it’s gonna fall on its face
Ask the programmer bros who work in corporate hell… It’s almost mandatory today if you want to earn money programming.
If you’re in a dev company that doesn’t require AI, it’s just a matter of time.
I think programmers are responsible for like 90% of the environmental impact from AI use. I have a friend who works at a big company; they use AI literally everywhere you can imagine, even on Slack to answer colleagues’ messages. They need to feed huge codebases to the AI for context, so in the end it’s more resource-hungry than generating video or images a few times a day.
I do love LLMs; they have their limited use cases. But the problem is that humanity is right now playing with a loaded gun.
If we learned how to use it properly, it would be just another useful tool. But we are incapable of being respectful of anything that’s not within our sight. And sometimes not even of what is.
Our greed and laziness are what make it bad. All those psychotic breaks? All those easy-to-exploit safeguards? Losing our cognitive ability? Wasting money on unproven systems to make more money?
Humans are the problem.
Honestly, I am waiting for the AI-caused copyright hellscape apocalypse. If everything is free, everything is free - and they can’t make money. It will be an ‘interesting ride’ for the years to come.

Yes, I suck at the conversational piece of emails in certain scenarios, and having a sounding board to bounce off of helps. I still know when it spews things I’m not quite a fan of, but it does do the heavy lifting for me.
Even so, still not a fan overall. It’s like launching a nuke at a country to kill a rat. It’s so bad for the environment, our brains, and our independence (in terms of hardware ownership because… well, y’all know).
I guess my tl;dr is it’s not truly worth it.