• potatopotato@sh.itjust.works · 3 months ago

    I unfortunately work with AI and actually understand how it works. It’s going to replace workers the same way that cocaine replaces workers.

    It’ll make some knowledge workers moderately more productive but that excess will be absorbed like with any other tool and we’ll just do more shit as a society at the expense of continuing to destroy the environment.

    Once the bubble bursts and things calm down there will probably be some job growth as the economy figures out how to better utilize these new tools. It’s like if you invented a machine that could frame 60% of a house and brilliantly declared you’d fire all the framers but then realized you’re now building a lot of houses and need more framers than before to finish the remaining 40%.

    • BarneyPiccolo@lemmy.today · 3 months ago

      You are thinking of office work, but there are a LOT of jobs that will be permanently replaced by AI-driven robotics, like fast food workers, retail shelf stockers, drivers, warehouse work, etc. Those are workers that can’t be easily trained UP, and many will likely become permanently unemployed.

      • sobchak@programming.dev · 3 months ago

        I don’t buy that. There’s little reason to automate those jobs because the labor is so cheap. And as someone who has worked most of those jobs in the past, most of those workers could be easily trained for different jobs; most are actively taking it upon themselves to train to get out of them.

        • BarneyPiccolo@lemmy.today · 3 months ago

          Labor is cheap? Most cities are approaching $15 an hour, and even in those immoral states that keep it at the federal minimum of $7.25, a robot is still going to be cheaper in the long run. Then there are benefits, payroll taxes, personal issues, schedules, etc. People are a pain in the ass, and expensive in a lot more ways than money.

          Besides, it almost certainly won’t be up to the franchisee. When corporate decides that they can be more efficient and more PROFITABLE with automation, the stores will go along with it, whether they like it or not.

          It’s not an if, it’s a when. It’s definitely going to happen.

    • WanderingThoughts@europe.pub · 3 months ago

      “make some knowledge workers moderately more productive but that excess will be absorbed”

      That seems to result in a higher burnout rate. The workers now have to do more soul-crushing check-and-verify work instead of actual knowledge work.

      • lightnsfw@reddthat.com · 3 months ago

        Can confirm. It’s not AI, but probably 80% of my job is just emailing other people to do shit, emailing other people status updates about their work, and verifying their completed work, which is frequently wrong. It sucks.

    • vikinghoarder@infosec.pub · 3 months ago

      I’m trying to figure out why everyone is so mad about AI.

      I’m still in the “wow” phase, marveling at the reasoning and information it can give me, and I’ve just started testing some programming assistance, which, with a few simple examples, seems to be fine (using free models for testing). So I still can’t figure out why there’s so much pushback. Is everyone using it extensively and hitting a dead end in what it can do?

      Give me some red pills!

      • sobchak@programming.dev · 3 months ago

        I’m working with people who seem to offload a lot of their work to AI, and it’s shit, and it’s making the project take longer and come out shittier. Then they do things like write documents with AI and expect people to read that nonsense, and even use AI to send long, useless Slack messages. In short, it’s been detrimental to the project.

      • Brainsploosh@lemmy.world · 3 months ago

        It doesn’t reason, and it doesn’t actually know any information.

        What it excels at is producing plausible-sounding averages of texts, and if you think about how little the average person knows, you should be appalled.

        Also, where people can typically reason well enough to make an answer internally consistent, or at least relevant within a domain, LLMs offer a polished version of a disjointed amalgamation of all the platitudes and otherwise commonly repeated phrases in the training data.

        Basically, you can’t trust the information to be right, insightful, or even unpoisoned, while it sabotages your strategies and systems for sifting information from noise.

        ETA: All for the low, low cost of personal computing, power scarcity, and drought.

  • BJW@lemmus.org · 3 months ago

    The future is in agentic AI with a single developer for code review. Management tells the developer what they want, developer engineers the prompt, gives it to the AI agent that has complete access to the relevant projects and DB schema. It generates a change log, and the developer reviews it, asking for changes as needed.

    Huge teams are about to be consolidated, with a huge inflow of software engineers into the unemployment bin, and entire downstream economies are going to collapse from the resulting unemployment of previously high paying careers. We need Universal Basic Income yesterday.
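    The workflow described above (prompt in, change log out, one human gate) can be sketched roughly like this. This is a hypothetical illustration, not any real agent API: `agent_generate_changes` and `developer_review` are made-up stand-ins for the AI agent and the human reviewer.

```python
# Hypothetical sketch of the "one developer reviewing an agent" loop.
# Both functions below are illustrative stand-ins, not a real agent API.

def agent_generate_changes(prompt: str) -> list[str]:
    """Stand-in for an AI agent with repo/DB access; returns a change log."""
    return [
        f"refactor: applied '{prompt}' to module A",
        f"migration: updated schema for '{prompt}'",
    ]

def developer_review(change: str) -> bool:
    """The single human gate: approve or send back for rework."""
    return "DROP TABLE" not in change  # trivially simple example policy

def run_cycle(prompt: str) -> list[str]:
    """One management-request cycle: generate changes, review each one."""
    approved = []
    for change in agent_generate_changes(prompt):
        if developer_review(change):
            approved.append(change)
        # in a real loop, rejected changes would be re-prompted to the agent
    return approved

print(run_cycle("add audit logging"))
```

    The point of the sketch is the shape of the pipeline, not the stubs: the only human in the loop sits at the review step.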

    • vane@lemmy.world · 3 months ago

      As always, all those claims have very big gaps: the relevant projects, the DB schema, management knowing what they want. Dude, there are multi-billion-dollar companies that spend hundreds of millions of dollars trying to figure out what management wants, and you think they’ll replace that with a single prompt? It’s like talking to a 5-year-old.

      • BJW@lemmus.org · 3 months ago

        That’s why you still need one skilled developer per project as the middleman, for requirements elicitation. You don’t have to believe me, no skin off my nose, but I’m with an organization that’s making it work exactly as described. Months of work done in weeks instead.

        • vane@lemmy.world · 3 months ago

          8 weeks is still months. If your code looks like your math then RIP, LLM junkie.

          • BJW@lemmus.org · 3 months ago

            I don’t know why someone downvoted you, but I also don’t know what you mean by my math. Which math would that be? I was using AI before LLMs ever even hit the scene.

    • sacredfire@programming.dev · 3 months ago

      The near future? How is this a sustainable business model for any business? If all you need is one developer and “agentic” AI to build anything, how do you differentiate yourself?

      But before we even get to that problem, I don’t see the current tools anywhere near able to deliver on the hype. They are incredible, and they have plenty of use cases, but for anything nontrivial it feels like it’s more work fixing the errors they create than just doing it myself. I think I’d kill myself if I had to review and fix multiple agents’ worth of indecipherable code.

      All that being said, everyone still might get laid off! It doesn’t have to be good to crash the market.

      • BJW@lemmus.org · 3 months ago

        What tools do you have experience with? Just the free ones? The crap from Microslop or OpenAI? I sincerely believe you’d change your opinion if you were using the professional products from companies that have created working models, but I imagine most people only have limited experience with those models, if any. Most use ChatGPT or Copilot and surmise it’s all inaccurate crap. They’d be right from that limited sample, but wrong about the market at large.

    • BJW@lemmus.org · 3 months ago

      Sorry, missed this was in ShitPost. Please, allow me to revise: Dang clankers, Deytükurjerbs!

    • mrgoosmoos@lemmy.ca · 3 months ago

      Constantly reviewing low-quality work? Kill me now.

      What a godawful, boring job that would be. I’d have absolutely no motivation to do it well.

      • BJW@lemmus.org · 3 months ago

        It’s only low quality until it isn’t. Have you used Gemini 3.1 Pro lately? Anthropic’s Opus 4.6?

        Everything looks like low-quality crap when you only use the free models from Microsoft and OpenAI. But I suspect you haven’t utilized the paid, professional models if you hold that opinion.