I get some of the surface level reasons, and those annoy me too. Cramming AI into everything is dumb and unnecessary.

However, I do feel that at a deeper level it has a lot of useful applications that will absolutely change society and improve the efficiency and skills of those who use it. For example, if someone wants to learn to code, they could take a few different paths. There are the traditional paths: read books, or go to school and learn that way. You could pay for a bootcamp or an online coding education platform. Or you could just tell an AI chatbot you want to learn to code, have it become your teacher, and have it correct any errors you make in real time. Another application is generating ideas or quick mock-ups. Say I’m playing a game of D&D with friends. I need a character avatar, so I just provide a description to the AI and it makes one up quickly. It might take a few prompts, but it usually does a pretty good job. Or if I have a scenario I need a few enemies for, I can just describe those enemies and have a quick stat block made up for them.

I realize that there are underlying issues with regard to training AI on others’ work, but as a musician myself, and a supporter of open source as often as possible, I feel it’s a bit hypocritical for people to get upset about AI “stealing” work when it comes to code or other stuff that people willingly put out there for free for others to consume. Any artist or coder could “steal” the work of others as inspiration for their own, the same as an AI does; an AI is just much more efficient about it. I do think that most of the corporations pushing some new AI feature, or promising the world or the end of the labor force, are full of shit, and that we are definitely in some sort of AI bubble. But the technology itself is definitely useful in a lot of ways, and if it can be developed on a more localized and decentralized scale (community-owned AI hubs, anyone?), it could actually be a really powerful and beneficial technology for organizations and individuals looking to do more with less.

  • SuspciousCarrot78@lemmy.world
    3 days ago

    I don’t think people hate AI per se - they hate big tech, and what big tech is doing with it. That’s a legitimate gripe, but it’s not the same thing as the technology being bad.

    AI used well can be genuinely useful. I’ve dropped a couple of examples in other threads I won’t rehash here, but the short version is: there are real world uses for this tech (world modelling, medicine, robotics).

    Hell, I built a clinical notes pipeline that takes the tedium of charting from 15-20 minutes down to about 3, with a policy gate that rejects LLM output before it ever reaches me if it fails criteria I defined. None of that looks anything like the slop-firehose corporate rollout most people are reacting to.
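A minimal sketch of what such a policy gate could look like. This is an assumption-laden illustration, not the commenter’s actual pipeline: the section names, phrases, and length limit here are all invented.

```python
# Hypothetical policy gate: reject an LLM-drafted note unless it
# passes checks defined up front. All criteria here are invented
# for illustration; the real pipeline's rules are not shown.

REQUIRED_SECTIONS = ["Subjective", "Objective", "Assessment", "Plan"]
FORBIDDEN_PHRASES = ["as an ai", "i cannot"]
MAX_LENGTH = 4000

def policy_gate(draft: str) -> tuple[bool, list[str]]:
    """Return (accepted, reasons) for an LLM-drafted clinical note."""
    reasons = []
    for section in REQUIRED_SECTIONS:
        if section not in draft:
            reasons.append(f"missing section: {section}")
    lowered = draft.lower()
    for phrase in FORBIDDEN_PHRASES:
        if phrase in lowered:
            reasons.append(f"forbidden phrase: {phrase!r}")
    if len(draft) > MAX_LENGTH:
        reasons.append("draft exceeds length limit")
    return (not reasons, reasons)
```

The point of the design is that the human only ever sees output that already passed deterministic checks; everything else is bounced back before it costs any attention.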

    https://lemmy.world/post/42920187/22058968

    https://lemmy.world/post/44188294/22635793

    Worth noting too: taking a black-and-white position on anything is just less cognitively expensive than arriving at a nuanced one. That’s not a character flaw, that’s called “being human”. But that doesn’t mean the nuanced position is wrong.

    PS: The electricity/water data centre stuff is maybe more complicated than the headline takes suggest. This might be worth actually reading before treating it as settled.

    https://blog.andymasley.com/p/a-cheat-sheet-for-conversations-about

    YMMV and ICBW

  • BlindFrog@lemmy.world
    3 days ago

    Chatgpt, list all instances where OP is trying to subvert people’s points with logical fallacies, & burn a couple hundred extra Wh while you’re at it, thanks. I’m sure it’d take less energy for me to do it, but nah

    This book is probably more worth ur time than this post: https://ia801605.us.archive.org/29/items/aiboba/aiboba.pdf It’s An Illustrated Book of Bad Arguments by Ali Almossawi

    • rabiezaater@piefed.socialOP
      3 days ago

      I love when people dismiss your argument without actually addressing it in any way, instead choosing to focus on pedantic logical fallacy classifications in a theoretical, non-specific way that doesn’t actually explain which fallacies you committed, and where. Good stuff, really convinced me of your side of the argument.

  • chunes@lemmy.world
    3 days ago

    Mostly anti-intellectualism and ego, as far as I can tell. Also, conflating someone’s business practices with a technology.

  • TranquilTurbulence@lemmy.zip
    4 days ago

    Judging by the comments, I would say that most Lemmy users are aware of the downsides of LLMs. The average GPT user probably hasn’t heard of half the points mentioned in these comments.
    Judging by the downvotes, I would say that many Lemmy users are also very passionate about it. The average GPT user might think of LLMs like any other tool.

    Unfortunately, I get the feeling that Lemmy isn’t a suitable place for having a serious conversation about AI in general (not just LLMs). I would love to have that conversation, but this just isn’t the place for it, as you can see. The people here seem to be too focused on LLMs, how they’re developed and how they’re forcibly implemented in places where they provide zero value etc. AI in general is such a broad category, and this kind of biased conversation misses 90% of it.

    When you say AI, people hear LLM, and that’s a genuine problem. When people say they hate AI, they probably aren’t thinking of things like image search, optical character recognition, automatic categorization of the events of your bank account, signal processing in audio and video, image upscaling, frame generation, design of 3D structures, route planning etc. There’s so much you can do with AI, but Lemmy users rarely mention those.

    • rabiezaater@piefed.socialOP
      4 days ago

      Yeah, I am really getting disillusioned with the discussions on the fediverse around a lot of important topics, not just AI. I can picture a response from someone in this thread being “good, fuck off AI shill”. Not a very productive or healthy place for a discussion, as much as I support the goals and motivations behind the fediverse. Apparently there is an anti-AI zealotry that makes real dialogue impossible.

  • hesh@quokk.au
    4 days ago

    Kills the planet

    Steals from artists

    Widens inequality

    Puts people out of work

    Reinforces prejudices

    Makes us stupid

    Makes everything generic

    Blows up the economy

    Supports oligarchs

    Can’t be trusted, hallucinates and lies

    Overhyped & overpromised

    Can’t generate outside of its training data

    Is creating obscene surveillance state

    Used in weapons to kill

    Replaces human interaction

    Just annoying

    • peepeepoopoo@hilariouschaos.com
      3 days ago

      As a thought experiment I considered all of these points and here are my thoughts.

      Kills the planet

      Got me there. That stupid datacenter crap where they need 1000 TB of RAM, a zillion RTX 5090s, and an entire nuclear power plant just to generate a chocolate chip cookie recipe needs to fucking go. Self-hosted AI isn’t that bad, though. You can still argue that running self-hosted koboldcpp on a 10-watt Raspberry Pi ALSO destroys the planet, but so does all technology. Imagine living with no A/C, no deodorant, no running water, no toilet paper, just to make the earth livable for an additional 100 years or whatever. Fuck that. I chose not to have kids, so I’m still doing my part, which is more than what the majority of the population can be arsed to do.

      Steals from artists

      I don’t really understand this argument despite it being the most common anti-AI argument. What type of art is AI really capable of replacing humans at? Hentai and video game 3D model textures? It’s useless at making 3D models even to the most fanatic of AI worshippers. I can watch porn on Pornhub for free, and I would never and have never commissioned a human artist to make porn pictures for me. Am I stealing from hentai artists by not commissioning them for their work and choosing other means of looking at boobs?

      Buying textures for your hand-made 3D models mostly supports the corporation selling them; the original artists get very little, if anything at all. Using AI to circumvent spammy price gouging for 3D model textures seems like a better way to fight back to me. Another point is that copyright trolls are always harassing random YouTubers over bullshit claims, which DOES destroy livelihoods. Using AI to create a unique illustration that isn’t registered in a copyright-strike database, when you REALLY weren’t going to pay a $20 license for some spammy corporate licensed art either way, seems like a legitimate use of AI to me.

      Another thing is memes, even. I would 100%, absolutely, positively, never ever in a million years commission a human artist for the hundreds of dollars it usually costs to make an illustration for a meme in a shitpost I was trying to make. Yet people get out their torches and pitchforks anytime someone uses AI in a shitpost. I just don’t get it. It’s the “pirating software STEALS money from developers” argument all over again. Is it REALLY stealing if you WEREN’T going to pay for it otherwise? In 2018 the average person online was practically up in arms over how unfair copyright law is, and everyone dropped it to hate AI instead. Seems a little too convenient if you ask me. I think a lot of people have been played.

      Widens inequality

      Employers using AI to screen out the applicants who aren’t desperate enough, and are therefore less likely to submit to abnormally cruel or illegal terms, could be an example of this. Employers in America generally have too many freedoms in the first place. We aren’t going to get out of this downward spiral of wages not keeping up with the cost of living without doing some stuff that would be really unpopular with all the powerful people in charge of it all. I’m not sure they need AI to continue colluding to treat us all like trash. It will eventually devolve into all-out violence if no one forces them to stop, AI or not.

      Also facial recognition cameras, more about that further down.

      Puts people out of work

      I don’t have any good supporting or opposing arguments for this one because I don’t know of any strong examples. AI is 1000% shittier than a human at any given task, for 0% of the cost, which is enough to keep an American corporation satisfied for most purposes, at least in theory.

      Reinforces prejudices

      I’m not going to be like “provide examples or it doesn’t count”, because it’s lame and stupid when people do that, but my best guess is that this one is talking about how AI can be used to reinforce white nationalist ideology online in bot swarms and stuff. An AI can generate pro-christo-fascist propaganda just as much as it can generate pro-democracy propaganda. I wish we could harass christian nationalist types online with AI, but it seems to be only the bad guys doing it. Go on Reddit and say anything positive about marijuana in any context besides “my grandma is dying of cancer and marijuana allows her to not be in pain”. You will have people telling you to grow up and stop being a piece of shit. Meanwhile, you can speak out in support of bombing poor people in the Middle East and no one bats an eye. Why can’t we harass the piece-of-shit people with AI? I guess you got me on this one. It only gets used for spreading christian nationalist ideology for some reason. But this COULD change.

      Makes us stupid

      A few days ago I used a self-hosted AI to help write a Python script to run object recognition on the CCTV cameras on my home network, and it only took an afternoon. It would have taken longer if I truly had to figure out and research every little detail and function name myself, but I still could have done it. Sure, there was some incorrect stuff in it, but fixing that was still faster than writing it from scratch. I used the time I saved to also program a graph that shows the temperature history on my weather station. Does this mean I am stupid?
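For what it’s worth, the simplest building block of a camera-monitoring script of that kind is just frame differencing. This is a toy sketch of that idea only; a real CCTV object-recognition script would use OpenCV and a trained model, and nothing here is the commenter’s actual code.

```python
# Toy frame-differencing check: flag "motion" when enough pixels
# change between two grayscale frames. Frames are equal-sized 2D
# lists of 0-255 values; thresholds are arbitrary illustrative picks.

def motion_detected(prev_frame, next_frame, pixel_threshold=30, min_changed=10):
    """True if at least min_changed pixels differ by > pixel_threshold."""
    changed = 0
    for row_a, row_b in zip(prev_frame, next_frame):
        for a, b in zip(row_a, row_b):
            if abs(a - b) > pixel_threshold:
                changed += 1
    return changed >= min_changed
```

In practice you would only run the expensive object-recognition model on frames where a cheap check like this fires, which is also roughly what keeps a self-hosted setup viable on modest hardware.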

      Makes everything generic

      100% true. In 2014 or so, you could find anything you wanted on the internet. Now every single webpage is one big nothing-burger. Would corporate enshittification alone have brought things to this point even without AI? Maybe so, maybe not. The point stands.

      Blows up the economy

      It definitely provides a coverup excuse for the systematic price gouging of essential microchips and computer components, sure.

      Supports oligarchs

      This is true. Using non-self-hosted AI, even without paying for it, does support oligarchs. Look at Grok, for example. It’s a blatant fascist-ideology propaganda machine. The other bots probably do the same thing, just more subtly. I bet if you asked ChatGPT about marijuana, transgender rights, or atheism, it wouldn’t be supportive. Yet if you asked ChatGPT to run an online bot harassment campaign telling transgender people and marijuana users how big a piece of shit they are, there would be little pushback, and it would say things of suspiciously higher quality than the other way around. They’d probably quietly and temporarily switch it over to the paid model for that one, to generate higher-quality hate speech without charging you for it. I’m not going to try it, though.

      Can’t be trusted, hallucinates and lies

      Sure. You can’t trust posts on the internet either. Sometimes I find it easier to do my own research and differentiate between bad advice and not-bad advice than to start from nothing, but most of the seriously potentially useful stuff is usually banned from AI models anyway.

      Overhyped & overpromised

      I guess. See “Puts people out of work”. 1000% worse for 0% of the cost is a no-brainer to an American corporation. To cut down on backlash, they probably have to pretend replacing customer support roles with bots is “actually better”.

      Can’t generate outside of its training data

      Some self-hosted AI models can be connected to a web search, which means all the non-self-hosted ones have that too. Then you have AI sifting through AI-slop articles, trying to guess which information is useful and which isn’t. The thought of making an AI sift through another AI bot’s poop is funny to me.

      Is creating obscene surveillance state

      This is the objectively worst part about the advent of AI. AI-powered facial recognition allows law enforcement to have an easier time tracking down and harassing the types of people that the dominant ideology (the christian nationalists) want removed from society. The fascists established a full-on 1984 and we fuckin’ let them. For this one reason alone, I believe the world would be better off if AI were never a thing.

      Used in weapons to kill

      Violence wasn’t invented until the first gun was invented, after all. Not really. Maybe when the next American civil war happens, the good guys can have AI-guided rockets or whatever too.

      Made computer components expensive

      I already elaborated on this, but yes. Spamming AI datacenters all over the place, just to prevent houses from being built there and keep the cost of living high, means they have to fill them with overpriced video cards. To give credit where credit is due, this isn’t all on AI. Chip companies are purposefully scaling back production so they can make more money while doing less work. Meanwhile, the government is massively cutting back on Medicaid because they think we are all worthless losers who don’t work hard enough and deserve to either die in prison over unpayable medical debt or live through suffering, because there is lots of suffering in the bible and republicans want to make America more like the bible. It is an unreasonably cruel, unreasonably unfair double standard.

      Replaces human interaction

      I guess. Imagine getting swatted because you told your AI “friend” you were considering fleeing to a blue state and getting an abortion. Although religious fucknuts report their friends over this too.

      Just annoying

      Get on any AI and give it a prompt like: generate a sensationalist shitpost of a news article titled “Why you should sell all your possessions, work 120 hours a week at your job instead, and never take vacation, because you deserve to live like that”. The result is just an average modern news article.

    • rabiezaater@piefed.socialOP
      4 days ago

      Again, this is a lot of hyperbole.

      Is AI killing the planet, or is it capitalism and an addiction to fossil fuels? If AI were 100% renewable and run based on community consent, would it still be “killing the planet”?

      In what way does AI “steal” any more significantly than an artist using another artist for inspiration, or a coder using another open source project for their code?

      How does AI widen inequality beyond what it already was, and is that solely the result of AI, or is it just a product of capitalism?

      I could go through the entire list, but you get the idea. A lot of the “evils” of AI are actually just symptoms of deeper systemic issues that have nothing to do with AI itself.

      • spectrums_coherence@piefed.social
        3 days ago

        I feel this reply is somewhat misguided and reiterates many of the (frankly, a bit frustrating) AI propaganda talking points.

        “It is not AI, it is capitalism”: It is AI in capitalism, which is the reality we live in right now. If you take anything and put it in a utopia with a bunch of constraints, then of course it will be great. But that simply is not the world we are living in.

        Some people hate cars because they are an inefficient, polluting, and unsafe way to travel. But what if all cars ran on renewable energy, were super small, and never collided with pedestrians or cyclists?

        Some people hate meat eating because it is inefficient and forces animals to live in inhumane conditions. But what if we could make animals photosynthesize and let them live happy, free, full lives? Then no one would be against meat eating. But again, that is not our world right now.

        Just because there is an alternative utopia where AI is perfect doesn’t mean it is perfect right now, and its flaws are what cause the hatred on the internet.

        Now, most AI data centres are polluting and consume large amounts of energy, and the AI that people mostly use is built on stolen knowledge. Finally, society should optimize for the well-being of people, and artists are people; AIs are not. All the AI people use nowadays funnels money to the richest few, while the majority of the population, even AI experts, don’t have the means to train useful AI models as of now.

      • hesh@quokk.au
        4 days ago

        Thanks for your reply. Here are my rebuttals:

        Is AI killing the planet, or is capitalism and addiction to fossil fuels?

        Capitalism was already killing the planet, but the rush to invest in AI has demonstrably accelerated it.

        If AI was 100 renewable and run based on community consent, would it still be “killing the planet”?

        No. But that’s not the scenario we are in.

        In what way does AI “steal” in any way more significantly than an artist uses another artist for inspiration or a coder uses another open source project for their code?

        Because artists are people, with consciousness and feelings and the capability for novel thought. AI is not. Believing it’s doing the same thing as human thinking is being suckered by the hype.

        Even the people creating AI know it’s not “thinking”. They call AI that actually “thinks” AGI and believe they will someday create it by pushing AI further, as long as we give them all of our money (trust me bro).

        How does AI widen inequality worse than it has been already, and is that solely the result of AI or is it just a product of capitalism?

        This is a big one, but without guardrails it’s inarguably poised to hurt working people and enrich the powerful, which therefore drives further inequality (which yes, was already bad as a product of modern capitalism). And those guardrails are not in place, and will not be put into place if we just follow along as they want us to.

        • village604@adultswim.fan
          4 days ago

          Because artists are people with consciousness and feeling and the capability for novel thought. AI is not. Believing it’s doing the same thing as human thinking is being suckered by the hype.

          But AI is being used as a tool by humans to generate the images. It won’t do anything on its own.

          What it has done is allow people to get inspiration out of their head and into the physical world with a much lower barrier for entry than ever before.

          There are still people who don’t consider digital artists to be real artists because they use digital tools instead of physical ones. The hate for people using genAI is basically the same thing.

          There are a lot of valid criticisms of genAI, but this one in particular has always seemed silly to me.

          • hesh@quokk.au
            4 days ago

            If you tried to digest every piece of intellectual property ever created by humans for free, they would lock you up. But OpenAI and Meta get to do it, and sell you a subscription to the AI they created with it, making Zuck and Altman richer than God by destroying artists’ ability to make a living, and making every bit of art created from now on a shitty derivative pasted together by an AI from the memory of human art. It’s an episode of Black Mirror.

            • village604@adultswim.fan
              4 days ago

              Not all genAI is OpenAI and Meta. There are ethically trained image generation models.

              People are conflating all generative AI with tech giants, which is a critique on capitalism, not the technology.

              The technology is actually quite amazing with regards to image generation.

  • ShellMonkey@piefed.socdojo.com
    4 days ago

    I have split opinions on it. The massive data centers that use so much power, hardware, space, etc. are problematic. They get used as a replacement for human ingenuity, scarf down every bit of data they can, and aggregate it together in a way that even the owners don’t fully understand. They get manipulated to give answers that suit the owners’ wishes and fuel divisions in public discourse.

    I take significantly less issue with locally run small model systems that you can put on your own machine. They’re not continually running/training and are generally treated more as a hobby toy, not some replacement for human understanding.

  • CallMeAl (like Alan)@piefed.zip
    4 days ago

    Reading through this thread and your responses gives the strong impression that you just want to argue, while at the same time you aren’t very well informed on the matter. Where you do respond, it’s mostly whataboutism rather than actually addressing the comment you are responding to.

    Your post asks “Why do people hate AI?” and then goes on to validate many of the commonly heard reasons people have for hating AI. You end with a suggestion that if we could develop AI into something else in the future, it might be good.

    So it seems you already understand why people hate AI and are promoting an agenda rather than asking a genuine question.

    • rabiezaater@piefed.socialOP
      4 days ago

      I gave positives alongside the negatives, which most people vigorously against AI (which seems to be all of the fediverse) refuse to acknowledge as positives. I do have an agenda, which is to try to understand why there is such blind and vigorous hate for something I and a lot of people find quite useful, and which could be beneficial for productivity if people used it effectively.

      • CallMeAl (like Alan)@piefed.zip
        4 days ago

        I do have an agenda, which is to try to understand…

        If your goal is to understand why people feel the way they do then why are you arguing with people and attempting to refute their responses instead of thoughtfully reflecting their concerns back to them to confirm if you have understood?

        • rabiezaater@piefed.socialOP
          4 days ago

          Are you saying I am being disingenuous in my intentions by making counter points in a discussion? Is reflecting people’s ideas back to them the only way to understand them?

          • maniclucky@lemmy.world
            3 days ago

            You’re being disingenuous because that’s the only thing you’re doing. You show no signs of actually ingesting or contemplating other points of view, opinions, etc.

            • rabiezaater@piefed.socialOP
              3 days ago

              What signs would you need to see to believe I am ingesting or contemplating other points of view? I have asked questions, tried to discuss the points that were raised, and even told those I disagree with that I appreciate their opinion. For those who have been extra pedantic and focused more on the semantics of the arguments (i.e., you), I have had less patience and curiosity, because those arguments are not really relevant to the actual topic and are more of an ad hominem against me as a person. Overall, though, I have not called anyone derogatory names (unlike others in this thread), I have not dismissed anyone’s ideas out of hand without providing sources or examples, and I feel I have engaged in a respectful and calm manner. I’m not here to troll anyone; I just would like to discuss the topic I have laid out above. Sorry if my approach has not been what you would have preferred, but to be honest, given that you have not actually contributed to the discussion meaningfully, I frankly don’t give a shit. So I’m done debating my debate style, and if you choose to continue focusing on it, as opposed to the debate topic itself, I will be removing you from my interactions permanently.

              • maniclucky@lemmy.world
                3 days ago

                For those who have been extra pedantic and focused more on the semantics of the arguments (i.e, you)

                I’ve had 2 comments not including this one, neither of which discussed semantics. You never responded to my other comment.

                Overall though, I have not called anyone derogatory names (unlike others in this thread)

                While yes, that would indicate bad faith, never said you did that. Can’t speak to others, who shouldn’t do that.

                I have not dismissed someone’s ideas out of hand without providing sources or examples, and I feel I have engaged in a respectful and calm manner.

                It’s less that you are dismissing things or being disrespectful. It’s that your engagement has a very obvious pattern of not engaging at all with certain points. Your positive bias toward LLMs shows. Whether it is due to legitimate bias or to the stark contrast within a very polarizing thread is tricky to parse, but it definitely comes off as though you are invested in LLMs and unwilling to acknowledge the downsides in a meaningful way. E.g. your outright dismissal of the ethics concerns because they don’t offend you personally, and your finding those who complain to be hypocrites.

                Sorry if my approach has not been what you would have preferred, but to be honest, given that you have not actually contributed to the discussion meaningfully, I frankly don’t give a shit.

                Again, do you think you’re responding to someone else? I rattled off a pile of common complaints to which you never responded. At no point did I accuse you of anything or even remark upon your character directly other than observing the stated pattern of avoidance and deeming it disingenuous. One inference could possibly be made with my rebuttal of your ethics argument, but it’s kind of a stretch.

          • knightly the Sneptaur@pawb.social
            3 days ago

            Are you saying I am being disingenuous in my intentions by making counter points in a discussion?

            Yes, that’s very clearly what’s happening here.

            • rabiezaater@piefed.socialOP
              3 days ago

              In what way is making a counter point disingenuous? Why do I need to just blindly accept what someone says without any pushback?

              • knightly the Sneptaur@pawb.social
                3 days ago

                In what way is making a counter point disingenuous?

                It reveals that your intent is not to comprehend another perspective, but to insist upon your own.

                Why do I need to just blindly accept what someone says without any pushback?

                The thing that you’re being asked to accept is that this someone believes what they say they believe.

                Nobody’s asking you to blindly assume that this someone is being honest, but making a counterpoint is not the same thing as asking clarifying questions to probe their perspective for the inconsistencies that would indicate deception.

  • Dæmon S.@calckey.world
    4 days ago

    @rabiezaater@piefed.social @nostupidquestions@lemmy.world

    generating ideas

    LLMs don’t generate ideas, stricto sensu. They do output names and words unbeknownst to the user (this is how I, as an ESL person, learned some words I didn’t know before), and I find that useful for esoteric (gnosis through chaos magick) purposes.

    But if we consider hard determinism, do we as biological automatons, though?

    learn to code

    As someone who has coded since childhood, I wouldn’t suggest relying on LLMs for that. They can be used to output descriptive text about some function or library, but you must know that LLMs are statistical machines: the output text is a chain of “which token is the most probable next?”, an auto-complete only slightly “better” than, say, Gboard’s. They “hallucinate” precisely because they rely on statistics and randomness.
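    That “which token is the most probable next?” loop can be sketched in a few lines. This is a toy bigram model with made-up probabilities, purely for illustration; a real LLM learns billions of parameters, but the sampling loop is conceptually the same:

```python
import random

# Hypothetical next-token probabilities, invented for illustration.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "code": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"ran": 0.7, "sat": 0.3},
    "code": {"ran": 0.5, "sat": 0.5},
}

def generate(start, steps, rng=random.Random(0)):
    """Repeatedly sample 'which token is most probable next?'."""
    out = [start]
    for _ in range(steps):
        probs = NEXT_TOKEN_PROBS.get(out[-1])
        if probs is None:
            break  # no known continuation for this token
        tokens, weights = zip(*probs.items())
        out.append(rng.choices(tokens, weights=weights)[0])
    return " ".join(out)

print(generate("the", 2))
```

    The output is fluent-looking but purely statistical, which is exactly why “hallucination” is built in: the sampler has no notion of truth, only of likelihood.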

    Again: extremely useful as an “Ouija board”, not very useful to rely on blindly for learning something, and definitely not reliable for “vibe coding”.

    Wanna learn how to code? Take the Elliot Alderson (Mr. Robot TV series) approach: find an existing “Hello world” project/source code, tinker with it, change things here and there, try to compile/run it, Google the exception the compiler/interpreter threw at you, change more things, break things, then fix the things you broke… This is exactly how I did it. Let go of any hurry and you’ll likely master it eventually.
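    A starting point for that tinkering loop could be as small as this (Python, one hypothetical function to poke at):

```python
# A minimal "Hello world" to tinker with: change things, break
# things, read the error, fix it. Each tweak teaches you something.

def greet(name):
    # Try changing the punctuation, or call greet() with no
    # argument and read the error the interpreter throws at you.
    return f"Hello, {name}!"

print(greet("world"))  # -> Hello, world!
```

    The point isn’t the program; it’s the break–read–fix cycle around it.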

    d&d […] I need a character […] it makes it up quick

    Yes, this is one of the use cases where LLMs can thrive, as a die with hundreds of billions of sides.

    You may want to roll real dice, convert each number into the respective letter (A=1, B=2, …), then append that as a source of real entropy, because the randomness you get from LLMs is likely to be pseudorandom.

    Ideally, you’d tune (using an RTL-SDR) to a blank radio frequency and digitize the (true noise) spectrum into ASCII, and voilà: free randomness, straight from the Cosmic Womb to your computer!
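    Once you have raw noise samples as bytes (the RTL-SDR capture itself is outside this sketch), condensing them into printable ASCII might look like the following. Hashing the samples first is my own addition: raw radio samples are biased, and running them through a hash is a common conditioning/whitening trick:

```python
import hashlib
import string

def noise_to_ascii(raw_samples: bytes, length: int = 16) -> str:
    """Condense raw noise samples into printable uppercase ASCII."""
    # Condition the biased samples through a hash before mapping
    # each byte to a letter.
    digest = hashlib.sha256(raw_samples).digest()
    alphabet = string.ascii_uppercase
    return "".join(alphabet[b % len(alphabet)] for b in digest[:length])

# Stand-in for bytes captured off a blank frequency:
fake_samples = bytes([23, 154, 3, 241, 88, 10])
print(noise_to_ascii(fake_samples))
```

    Note the modulo mapping is slightly non-uniform (256 isn’t divisible by 26); good enough for D&D, not for cryptography.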

    get upset about AI “stealing” work with regard to code or other stuff that people willingly put out there for free for others to consume

    Totally agree with you in this regard. Throughout history, humans have relied on other humans’ “ideas”. Most novelty stemmed from mashing up existing things: “what if I were to take this flamey thing that consumed the tree I used to sit on, and put it under this food?” If we really wanted to appeal to nature, evolution is exactly that, merging two genetic sequences in an approximate manner while trying to replicate, and still I don’t see humans accusing newborns of “stealing genetic work from their ancestors”.

    definitely useful in a lot of ways, […] if […] developed on a more localized and decentralized scale

    I totally agree in this regard, too.

    To answer the main question: IMHO, people hate AI because it has been pushed and used by corps to further enshittify this world. I’m not Anti-AI, but I’m not pro-AI either. There can be nuance from both.

  • schnurrito@discuss.tchncs.de
    4 days ago

    I don’t, not in general.

    There are good and bad uses of AI. For example, I used AI to generate my profile picture here on Lemmy (would you have noticed?). In general, the creation of art is one of the best uses of AI I can think of; it doesn’t have serious consequences if it goes wrong, and a human can easily review whether it looks as it should.

    But using AI to make actually meaningful business decisions without any human review at all? Using AI for customer service? Any company that does that deserves VERY negative consequences.

    I don’t agree with talking points like “AI companies should be required to pay copyright holders of their training data” or “AI is bad because of the environmental impact” or “AI is bad because of RAM prices” or “AI companies should be legally responsible for any mistakes the AI makes (such as libel or encouraging users’ suicide)” or such things; I think all of these are nonsense.

    I believe in general that AI gets too much attention in the media. It’s really not that impactful.

    • cheese_greater@lemmy.world
      4 days ago

      There has to be a liability standard though, otherwise it completely does away with any possibility of even nominal accountability. If harm is caused by a human, there is liability (whether directly or for whoever is responsible for that person’s actions). The same should be true for whoever employs an LLM for some purpose that results in harm. The LLM can’t be jailed or “shut down”, really; it’s incumbent upon the handler to assume liability for the activities they are involved in.

      • schnurrito@discuss.tchncs.de
        4 days ago

        whoever employs LLM

        incumbent upon the handler to assume liability

        I agree. If you make any kind of real-world decision based on the output of AI, you should be liable for it as if you’d made that decision yourself.

        But I remember reading news stories about cases where people (often minors) chatted with chatbots and managed to get those chatbots into states where they encouraged the users to harm themselves (in some cases even to commit suicide?). As tragic as that is, I don’t see how it’s morally right to hold the AI companies responsible for that unless it can be shown they did this on purpose. All the AI did in such cases was what it was advertised and understood to do: generate plausible-sounding text based on user input. Those are the cases I’m talking about.

  • toebert@piefed.social
    4 days ago

    AI is great; LLMs are a waste. AI was already great for years before LLMs came along.

    LLMs, which the current hype calls AI, are the equivalent of a scammy car salesman. To your example of having AI teach you to code: AI is awful at coding. It produces code that is the average of a junior developer’s output. It will look awesome from the outside because it will often mostly work at first, but in reality it’s going to be an unmaintainable mess. An experienced engineer could use one and produce a good outcome, in some cases maybe faster than without and in others slower, but the experienced-engineer requirement is a must. What this means is that your AI teacher is, by itself, a junior engineer whose output wouldn’t be trusted on its own. That’s the level you’ll reach, and you may even pick up terrible habits that’ll set you back.

    It will do all that and consume a ridiculous amount of resources for it compared to following a YouTube course.

    I imagine a similar case is true for most industries: people who work in the industry see the absolute garbage coming out of it in large quantities, and have to listen to outsiders, who don’t know what good looks like in that context, keep saying “oh, you are now redundant cuz look how good AI is”.

    Meanwhile, it is trained on data stolen from the very people who are now losing their jobs, because the idiotic decision makers believe in how good the output looks. And there’s more: it’s doing all this while wasting a massive amount of resources, which drives up prices for everyone (think of all the electrical devices needing computers, and electricity prices). But what money are they using for it? Oh yes! The money generated out of thin air by the corporations inflating this massive AI bubble, which is most likely going to end with a crash that will decimate the market (and therefore people’s investments and pensions). And if the past is any indication, the government will prop the companies up with tax money, so people will pay for it twice.

  • NABDad@lemmy.world
    4 days ago

    What do you mean when you say AI?

    Are you talking about all the different areas of research or just LLMs?

    • rabiezaater@piefed.socialOP
      4 days ago

      Both. I think people don’t even realize that there are non-LLM AI applications, and it has done a disservice to the field in general.

      • NABDad@lemmy.world
        4 days ago

        LLMs are interesting, and there are some very promising applications, but I’m concerned that the hype is going to damage the reputation of the technology in a way that could interfere with those things.

        Regarding all other AI, there’s a lot of good that has come from AI research, and most people don’t recognize it. We have a tendency to shift our definition of “intelligence” to always exclude things that someone figures out how to get a computer to do.

        Every day we use software that would have been considered AI years ago.

        I’m not against AI, but I’m against the capitalist impulse to squeeze money out of anything to the detriment of all of humanity and the world.

        My hope is that the LLM bubble bursts, big companies suffer terribly, the “AI” tag becomes bad marketing, and they let AI quietly return to research, where people can do some good with it.

        • rabiezaater@piefed.socialOP
          4 days ago

          I appreciate your distinction between capitalism and AI. Many attribute the maladies of hyper-late-stage capitalism (enshittification, data hoovering, algorithmic engagement tuning, etc.) to AI, when one is just a symptom of the other.

          I agree on the overhype and hope for the industry. I do not want LLMs to go away, and there are plenty of open source non commercial LLM projects out there. I look forward to the day when I can just download a local LLM assistant that has all the capabilities of the best models today. Once someone figures that out, I think the corporations who have poured hundreds of billions into massive data centers will start collapsing.

  • maniclucky@lemmy.world
    4 days ago

    Well, you dismissed the lack of ethics of it all. Just because you do open source doesn’t mean everyone else does. And open source often acknowledges contributors, unlike LLMs. You can’t consent for other people.

    It’s hideously destructive. Wastes electricity, wastes water, plays merry hell with anywhere the damned data centers pop up.

    It’s unregulated and has already killed people. Multiple stories have come out where an LLM has encouraged suicide. Plus various dangerous outputs like the bleach as cake ingredient thing. Because…

    It isn’t intelligent, it’s just a parrot. I’ll start paying attention when it can successfully count the letters in words. So, would you trust a random parrot that told you about something you know nothing about?
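    For contrast, counting letters is a one-liner for ordinary deterministic code, which is exactly what makes the failure so telling:

```python
# Deterministic letter counting: no statistics, no hallucination.
word = "strawberry"
print(word.count("r"))  # -> 3
```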

    It doesn’t do a quarter of what it says. Translation should be its bread and butter, and it can’t really manage that. There’s a reason the tech bros who hyped crypto are hyping this: they don’t actually know what it can or can’t do.

    It’s approaching maximum efficacy for current techniques. More data is better in machine learning, but it’s hitting the limit, and that limit is way closer than the scammers want to admit.

    It’s destroying jobs before it can handle them. I’ve tried to use it before. I spent as much if not more time fixing its output than if I had done it myself. It gets to do my boilerplate sometimes now.

    It’s making worse workers. All that time agonizing over a problem was spent learning how to do it at all. Now it shits out worthless garbage, and the person doesn’t know what it does or how to fix it. Job security for me, I guess.

    It could be a useful technology, but the delusion that it’s capable of becoming AGI distracts from all the things it could be capable of if big companies actually tried to use them instead of the lazy implementations they’re chasing.

    Edit: I also forgot that it entrenches racism and other bad behavior. If your corpus is full of racist shit, you get a racist robot. And racist assholes make it harder to fix that, because they won’t acknowledge that such things are bad and that this badness can be taught to robots.

    Source: Data engineer

  • Ada@lemmy.blahaj.zone
    4 days ago

    It will change society. It won’t improve skills.

    Studies already show the opposite at play. https://arxiv.org/pdf/2506.08872v1

    If the LLM could teach you how to code, but couldn’t do the coding for you, it would be a tool for improvement. But it isn’t used that way. Instead of saying “teach me how to code this”, people are more inclined to say “code this for me”.

    On top of that, they’re controlled by corporations who are not in the slightest bit interested in your welfare, privacy or economic success. They will invade your privacy, fuck over the environment, fuck over people, and load their LLMs with propaganda and barriers that serve their political and social interests.

    And as a bonus, they’re a nightmare for the environment.

    Having said all of that. I agree, they are going to fundamentally reshape society. But it’s like the industrial revolution. Yeah, we ended up with a more efficient society, but it didn’t make people freer, it further entrenched wealth in the hands of the wealthy, whilst fucking up the environment. That’s what LLMs are going to do.

    We could do them differently. That implementation isn’t inherent in their nature. But we won’t do them differently, because the people pushing it want the shitty outcome, because it’s not shitty for them.

  • Devolution@lemmy.world
    4 days ago

    AI makes children stupid. AI is mostly used for making AI slop. AI is being used by governments to manipulate public perception. AI is being used to engage in scams. People are using it to cheat. AI is being used to offload critical thinking.

    AI is being used by corporations to engage in mass layoffs to save a buck. AI is being used by police stations and federal agencies to identify people, with minimal success (misidentification). AI is being used to deny health claims without review. AI customer service is dogshit.

    I was a futurist like you once. I wanted AI based on how the movies presented it. However, the reality is LLMs are being used not for human improvement, but instead for the purpose of creating a permanent underclass with few at the top.

    TL;DR: fuck AI.