AI can’t be all that bad. The problem I keep seeing with AI is a double-edged sword. On one side, you have corporations shoving AI into just about everything, treating it like it’s a cure for cancer, and that really rubs people the wrong way. On the other, at a more societal level, you’ve got everyone from people who make art with AI and still credit themselves as artists, to people who treat AI like a therapist, which is not advised.

However, I’ve found some benefits to AI. For example, I’m chatting with ChatGPT about credit cards, because that’s something I may get into. It’s helping me understand them better than most people who have tried explaining them to me, simply because it gives me a more streamlined answer instead of beating around the bush.

  • thatsTheCatch@lemmy.nz · 7 days ago

    Most of my qualms with AI aren’t in the usage of AI, but in its creation (water usage, mass layoffs, etc.—you’ve heard it all before).

    To me it’s like asking “What are some good uses for slaves?” (An extreme example to show the point, I’m not trying to say AI is the same as slavery).

    Like yeah I could find good uses for it, but should it exist in the first place?

  • Random Dent@lemmy.ml · 7 days ago

    I actually find it pretty helpful for tech support stuff. It doesn’t always get it right, but it’s usually at least in the right general area and TBH it beats going through endless forums where the answer is buried among 8 pages of people bickering about nothing, or those ones where someone has your exact problem and then replies “nm I fixed it” and doesn’t say what they did.

  • makingStuffForFun@lemmy.ml · 9 days ago

    I had a project of markdown files. About 400 of them, with about 1200 plus links in them.

    The original filenames were changed. The links no longer worked.

    The LLM went through each link, and found the new one, based on filename and file content, using its ability to recognise patterns, words, etc etc.

    Absolutely saved me maybe a couple of days of painful manual labour, and it was all done in about 10 minutes.

    This is the kind of thing I use it for. Horrible repetitive processes.

  • Lumidaub@feddit.org · 9 days ago

    If we’re strictly talking about LLMs: certain accessibility services - MAYBE. Writing closed captions / transcription for the most part requires little “human” touch. If we ASSUME that AI will be able to do it reliably one day - because it really can’t yet - that’s one thing that would benefit society.

    Image descriptions are another thing I might see done by AI one day, but that still requires an understanding of what’s actually important about the image.

    • lepinkainen@lemmy.world · 8 days ago

      I built a system that translates subtitles from English to my native language, and it beats cheap-ass “official” translations 9 times out of 10.

      It even gets colloquial terms and phrases right - adapting to the register of a song, for example - something a human translator working for minimum pay usually won’t bother with.
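      The plumbing for a system like this is mostly subtitle parsing. A minimal sketch that splits an .srt file into cues and re-emits it with translated text, where the `translate` callback is a stand-in for whatever model you actually call (the comment above doesn’t say which):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Cue:
    index: int
    timing: str  # e.g. "00:00:01,000 --> 00:00:04,000"
    text: str

def parse_srt(raw: str) -> list[Cue]:
    """Split raw .srt content into (index, timing, text) cues."""
    cues = []
    for block in raw.strip().split("\n\n"):
        lines = block.splitlines()
        if len(lines) >= 3:
            cues.append(Cue(int(lines[0]), lines[1], "\n".join(lines[2:])))
    return cues

def translate_srt(raw: str, translate: Callable[[str], str]) -> str:
    """Re-emit the file with each cue's text run through `translate`,
    leaving indices and timings untouched."""
    out = []
    for cue in parse_srt(raw):
        out.append(f"{cue.index}\n{cue.timing}\n{translate(cue.text)}")
    return "\n\n".join(out) + "\n"
```

      In practice you would batch several consecutive cues into one model call so the model sees the surrounding dialogue - that context is presumably what makes the colloquialisms come out right.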

    • Paragone@lemmy.world · 8 days ago

      Please go watch the YouTube video of Bernie Sanders discussing politics/society/civilization with Claude.ai

      That may blow your mind…

      It’s … not quite as limited as you, or I, had been believing…

      _ /\ _

  • seahag@lemmy.world · 9 days ago

    AI has uses in the medical, scientific, and disabled communities. I’ve seen it helping blind people with shopping, with Google glasses or whatever reporting what they’ve picked up and describing it to them. It can also identify/predict cancer tissue early.

    Generative AI is peak laziness and the death of human creativity. Using AI for companionship has a nasty effect on mental health.

    AI should have only ever been an assistant in medical/scientific research in my opinion, simply because it’s so damaging to the environment, economy, and society.

    • iByteABit@lemmy.ml · 8 days ago

      > It can also identify/predict cancer tissue early.

      Do you mean an LLM or a machine learning model specifically trained for this?

      • Paragone@lemmy.world · 8 days ago

        Different case, obviously, but I remember reading about an AI which can identify a pending heart attack from x-rays … and nobody could figure out what the hell it was judging from…

        THAT is brilliant.

        Specialized to the degree that it is trustworthy.

        I’d be surprised if humans could possibly compete against a properly-done set-of-AI’s, which worked through, in the correct order, all possible diagnostic-reasoning.

        Democratizing accurate-diagnosis would be THE medical-revolution the world needs, now.

        _ /\ _

  • tyler@programming.dev · 9 days ago

    You need to differentiate between generative AI, NLP, machine learning, etc. Your question is pretty much entirely pointless otherwise.

  • DarkCloud@lemmy.world · 9 days ago

    Writing and fact-checking ONLY the most basic concepts and common information that is found multiple times and in multiple places online (e.g. it’s strongly reinforced and verified in the training data and has been/will be the same for a long time).

    Mass formatting, changing formats, changing language, and decoding via common methods.

    Pitching “what you mean, but can’t remember the name for”.

    …and that’s about it.

  • thedeadwalking4242@lemmy.world · 9 days ago

    It’s ok for very surface-level exploration. Like 100-level stuff. If it’s something you’d google and easily find in an article, it’s likely to do an ok job.

    I’ve also found it’s good for tedious, straightforward tasks. Anything that would be uncomfortable or time-consuming to do or automate manually. Best for one-offs.

    I’ve also found it’s extremely good for translation, which was its original use.

  • crunchpaste@lemmy.dbzer0.com · 9 days ago

    Well, I know it’s quite specific, but nothing beats AI at stereo matching and depth-map generation, and that’s important in many fields.

  • shellington@piefed.zip · 9 days ago

    I agree there is a lot of annoying hype. However, I also agree there are some specific use cases where it can be helpful.

    I, for one, find it handy sometimes when I am writing bash scripts to do things on my system. I obviously check them before running, but it does save time.

    Although I do recommend running models locally if possible, as it is obviously preferable from a privacy and cost standpoint.

  • sicktriple@lemmy.ml · edited · 9 days ago

    The technology itself is novel and cool. It’s the complete and utter meltdown of all tech companies into brainless hype machines that is harmful, which, of course, is a function of capitalist incentive and the need for the tech industry to come out with some new paradigm-shifting innovation every decade. A normal, healthy society would have been able to leverage machine learning and LLM technology where it’s most useful, like parsing large amounts of data, or running a local instance on your computer to ask a few questions, etc. We wouldn’t see LLMs in every text editor, pencil case, and pair of sneakers, but these snake oil salesmen who run the US economy are absolutely desperate for a new paradigm shift so they can keep making exponentially more money.

    The thing is, we don’t need to build these datacenters siphoning comically evil amounts of energy from the grid and making personal compute a thing of the past. The average everyday person doesn’t need cloud compute; they can run a local 4B-parameter (very, very small) model on their laptop or phone if they need to ask ChatGPT for a workout routine or for who won the 1918 World Series. But these fucking cretins don’t care, that’s not the point; they are in this because it’s a golden ticket to growth city, and once they cash their check they don’t give one hot fuck about the human-spirit-stealing machine they built.

    TLDR: our society is broken, and that’s why we keep getting the shittiest, lowest-common-denominator version of everything. Everything has to suck by definition, because that’s the only version that the system we built will allow.