I’m a software developer in Germany and work for a small company.

I’ve always liked the job, but lately I’ve been getting annoyed by the ideas certain people are pushing…

My boss (who has some level of dev experience) uses “vibe coding” (as far as I know, this means letting an LLM produce huge code changes in very little time, with little human review) as a positive term, as in “We could probably vibe-code this feature easily”.

Someone from management (also with some software development experience) runs internal workshops about how to use some self-built open-code thing with “memory”, advanced thinking strategies, planning, and whatever else, connected to many MCP servers and a vector DB, with “skills”, a higher token limit, etc. Surprisingly, the people attending the workshops (many of them developers, but not only) usually end up convinced by it, reporting that it improved their efficiency a lot, that they will keep using it, and that it changed their perspective.

Our internal Slack channels contain more and more AI-written posts, which makes me think: thank you for throwing this wall of text at me and n other people. Now n people need to extract the relevant information so that you could “save time” by not writing the text yourself. Nice!!!

I see Microsoft announcing that 30% of their code is written by AI, which in my opinion is advertisement and an attempt to pressure companies into subscribing to OpenAI. And now my company seems to be targeting not even that, but 100%???

To be clear: I see some potential for AI in software development. Auto-completions, locating a bug in a code base, writing prototypes, etc. “Copilot” is actually a good word, because it describes the person next to the pilot. I don’t think the technology is ready for what they are attempting (being the pilot). I’ve seen the studies questioning how large the benefit of AI actually is.

For sure, one could say “You are just a developer fearing to lose their job / lose what they like to do”, and maybe that’s partially true… AI has brought a lot of change. But I also don’t want to deal with a code base that was mainly written by non-humans, in case the non-humans fail to fix a problem…

My current strategy is “I use AI how and when ->I<- think that it’s useful”, but I’m not sure how much longer that will work…

Similar experiences here? What do you suggest? (And no, I’m currently not planning to leave. Not bad enough yet…).

  • Everyday0764@lemmy.zip · 7 days ago

    I work in consulting for big corporate; it’s AI all the way down.

    I work in a GenAI team building mostly LLM-powered applications. I use LLMs for work; they can be useful if you pick the largest model available. With smaller models it’s just a waste of time.

    My boss is very technical (theoretically), but he sold his brain to LLMs: he hard vibe-codes whole apps by himself and then pushes the bugfixing onto the team…

    Fun fact: he always downsizes estimates, since he “can do it in 1/5 of the time”.

    yep, welcome!

  • Deestan@lemmy.world · 10 days ago

    A small company with leadership this cooked and immature may not have the resources to recover from the long-term damage, even if the bubble pops on them in a year.

    I’d start looking for alternatives now, before it becomes urgent.

  • gigachad@piefed.social · 10 days ago

    I am having similar experiences, but it is not as bad as you are describing yet. We have a new member on the team who is not a developer himself, but he has been given the task of making our way of working more professional (we are mainly scientists, not primarily software engineers, so that’s a good thing).

    His first task was to create programming guidelines and standards. He produced 8 pages of LLM-generated text and nonsense example code. He honestly put a lot of effort into it, but of course a lot of things in it are wrong. The worst part, though, is the wall of text. You are nailing it: it is now my task to go through this whole thing and extract the relevant information. It sucks. And I am afraid that soon I will need to review more and more low-quality MRs generated by people who have little experience in programming.

    • ch00f@lemmy.world · 10 days ago

      We had a dev drop a combined total of 8,300 lines of readme files into the code base over a weekend. I want to nuke all of them; my boss suggests reviewing and updating them.

      • 87Six@lemmy.zip · 9 days ago

        8,300 lines

        rookie numbers

        I think my team is in the tens of thousands of lines of AI-generated “documentation”.

        They claim the AI can use it to code better in the project.

        Bullshit. The AI can’t load in a single one of these files without filling half the context.

        • 87Six@lemmy.zip · 9 days ago

          I was recently instructed to have a gander at it.

          I warned that it seemed inconsistent with the actual code.

          I was told I was right: “We should update this to reflect reality.”

          But then they brushed it off and we moved on. The misleading doc is still there, waiting for its next victim.

      • namingthingsiseasy@programming.dev · 10 days ago

        “I don’t have time to read through that much bullshit.”

        Maybe phrase it a little more kindly, but that’s what I’d try at the very least. “I have other priorities at the moment” could work too.

  • LeapSecond@lemmy.zip · 10 days ago

    I had a manager who pushed AI a lot. When he left, all the pressure to use it seemed to die down. So maybe it’s just a couple of people creating this environment and if you can get away with avoiding them it’s better.

    The problem with AI code we saw is that often no human has actually looked at it. During reviews you won’t check every line and you’ll have to trust much of the code that seems to do obvious things. But that assumes it was written by a human you also trust. When that human hasn’t reviewed the code either, you end up with code no one in the company has seen (and may not even know how it works).

    • Leon@pawb.social · 10 days ago

      Your entire comment echoes my thoughts. Things aren’t exactly improved by the idea of adding LLMs to the review process either. Gods.

  • zbyte64@awful.systems · 8 days ago

    My advice is to stick to making specific observations, so as not to sound hyperbolic.

    Yet at the same time, what is needed is a counter-narrative to “AI can do the job”. My observation is that the industry cycles between agile and waterfall development, and the loudest people are using AI to do waterfall development. This is a bad idea for the same reason waterfall development is bad: trying to write a spec that covers all situations is counterproductive.

    The alternative I see is that we borrow from Agile and treat AI as a pair developer that you outsource small (or largely repetitive) tasks to. This is not vibe coding nor is it TDD as you are still actively developing all levels of the code, but your feedback loop (OODA) is kept short.

  • Cherry@piefed.social · 10 days ago

    Managers using influence to inflate their importance is common. It sounds like they are hoping to head off innovation to protect themselves.

    The second aspect is their harnessing of useful idiots. I think critical, objective thinking is being pushed aside at the very time it is needed most. You probably have to tread carefully, and maybe influence people to use their own minds rather than those of desperate managers. Alternatively, set the useful idiots off in competition with each other and watch the fireworks. Not helpful, but it is fun.

    There are always some people looking for an easy fix, a way to rise quickly, to be a yes-man. But most of them just want to do a good job. Maybe appeal to the rationality of those people. Trust your brain and your skill rather than cult practices.

  • KokusnussRitter@discuss.tchncs.de · 10 days ago

    I haven’t had a similar experience yet, but maybe some of your colleagues feel the same way? You could write a letter stating your concerns, let anyone who agrees sign it, and then send it to your manager. Also, I’d like to add that under German law AI output cannot be copyrighted; you can only claim co-ownership or something. Maybe that could be interesting to your managers?

      • moonshadow@slrpnk.net · 9 days ago

        Principles or ideals prioritized above comfort and stability. Fucked up that you have to ask. Spoken like a hollow bootlicker.

        • Paddzr@lemmy.world · 9 days ago

          Sounds like I walked into the reddit antiwork crowd! Always black and white with you lot… If you’re in an industry and market that allows that, I’m happy for you.

          • moonshadow@slrpnk.net · 9 days ago

            You’re the one who crashed in with the judgment, name-calling, and confrontational attitude. You couldn’t be more thoroughly shaped by your “industry and market”, and I’m not sure if it’s more gross or sad. Corporations might be people now, but they’re sure as shit not gonna cry at your funeral. Get a life outside your job.

            • Paddzr@lemmy.world · 9 days ago

              Some very bold assumptions.

              Reality is, an attitude like that doesn’t get you hired. I don’t hire based on people’s views on AI and LLMs, but I do hire on attitude and one’s ability to be molded to fit the job I’m hiring for. That hasn’t failed me in 20 years, and this is far from the first fear-mongering in tech.

              • moonshadow@slrpnk.net · 9 days ago

                Ability to be molded as a primary virtue is completely alien to me, man. Maybe it’s a cultural thing. “Reality is,” I see forming yourself to fit the needs of this system as a tragic waste of life. There are ways to sustain yourself beyond blind, acquiescent compliance, and there are basic human needs a career can never meet. I would feel like a failure having spent the last twenty years of my life guided by what was best for a business.

                • Paddzr@lemmy.world · 8 days ago

                  I’m not sure where your belief that work somehow defines me comes from… Maybe it’s something from “your culture”.

                  You make so many assumptions it’s wild and tiring to even bother to refute them. I’ll leave it here, as I’m not convinced you’re actively engaging in any sort of conversation anyway.

    • 87Six@lemmy.zip · 9 days ago

      This.

      I’ve already got my manager to tell me to not use AI on a task. I see this as an absolute win and I’m gunning for more.

      He ALWAYS uses AI first when he needs to figure something out. ALWAYS tells us to use AI for the quick start. But when we do it, and it ends up wasting time, somehow it’s our fault, and we didn’t prompt it properly.

      Also, am I mad, or does Cursor (specifically Sonnet) sometimes act dumb on purpose? Sometimes it codes a feature nearly entirely without many issues; other times it seems unable to comprehend that it’s using the wrong property in a class. I feel like it’s made to make us question each other’s ability to use AI tools and cause internal team unrest.

      • Sunsofold@lemmings.world · 8 days ago

        Never forget that it isn’t thinking, at all. It comprehends nothing. It’s just a very big, expensive autocomplete. It didn’t understand when it was using the right property; it just rolled its d10000 and got something that fit the requirements, and on the times it failed, it rolled outside the desired range. No thought, just numbers.

  • Caveman@lemmy.world · 10 days ago

    We’re using LLMs at the company I work at, and they seem very useful in many cases, but sometimes they still don’t work. I’m a bit worried about the code rotting as LLMs generate stuff based on existing code.

    My mindset has shifted a bit: now I’m more focused on making stuff easy to find and on establishing easy-to-follow patterns so that the codebase becomes easier to work with. There’s some horrible code in the project and the LLM absolutely sucks balls at it, but if it’s a clean routine job, such as making a table with update dialogs and actions to manipulate the data, the success rate is >95%.

    So yeah, don’t trust it; treat it like a junior dev that got straight As in school and has never considered security. Code reviews are now where it’s at.

  • namingthingsiseasy@programming.dev · 10 days ago

    It’s hard to be a contrarian in these kinds of positions (I’ve been there, and it didn’t end well), so I wouldn’t be too outspoken. But at the same time, try to innocently point out the issues with approaches like this, the same way we would for any other kind of programming fad - without making it seem like it’s an agenda, of course.

    For example, any time teams are looking for feedback - code review, retrospectives, etc. - just casually point out why vibe coding is a bad idea when the moment comes. It doesn’t hurt to be honest as long as you don’t come off as being an ass about it.

  • raicon@lemmy.world · 10 days ago

    Endure the next year or so, until the bubble pops and there is a massive need for senior devs to fix what the slop machine produced.

  • Tar_Alcaran@sh.itjust.works · 10 days ago

    Let me give you a little parable.

    There once was a juggler who could juggle three balls all day. Then someone from the audience threw in a fourth ball, and he kept going. Someone threw a glass, then a flaming torch, and he kept going, occasionally burning his hands. Seeing he could do it, someone threw a machete, and the juggler almost never cut his fingers keeping all those things in the air. A chainsaw got added, and an open bottle of bleach, and occasionally the juggler got his hair caught or spilled some bleach, but he kept going. People kept adding more and more things. Eventually it was too much, and it all came crashing down, killing the juggler and several members of the audience and destroying all the objects in the air.

    On the next street corner, a juggler stands with three balls. Someone from the audience throws in a fourth. He steps aside and lets it fall to the floor, happily juggling three balls.

      • Tar_Alcaran@sh.itjust.works · 10 days ago

        The point was that the more you keep compensating for other people’s dumb moves, the greater the damage when it all inevitably comes crashing down.

        In other words, just do what they ask, get them to sign off on it, and watch it crash and burn in an unmaintainable, unsecured mess.

      • MagicShel@lemmy.zip · 10 days ago

        Reading a wall of text to extract a simple concept which turns out to be wrong seems very appropriate for this thread, just perhaps not in the way they intended.

  • PetteriPano@lemmy.world · edited · 10 days ago

    Agentic use of AI didn’t really work well enough until December of last year. The models and tools just improve that fast. Codex/claude (or opencode with the same top models) is what you’d need for it.

    You still need to plan and define clear specifications for the model. Spend 80% of your time planning and breaking down the job into steps and it’ll be pretty self-going from there.

    Of course, this works best for common frameworks and solved problems or logical problems. React/node developers can easily 10x their output, and get it done better than they would by hand.

    I’m working more with empirical development, so most of my time goes into studying environments and adapting to them. I get the most benefit out of having agents read through logs and figure out what happened. It gets it right maybe half the time, but it’s a good rubber ducky even when it goes wrong. I’d say it 2-3xes my output. But I can probably improve my usage, too.

    But yeah, code review is where it hurts. If it’s slop, it just takes so many rounds to get it right. Even when it’s good, it’s just so much code to review.

    • Leon@pawb.social · 10 days ago

      I try to push on the maintenance aspect. Developing something new is easy, and my company does do that, but the group I’m in is primarily doing maintenance on existing software. Bug fixes, feature additions, etc. If we generate applications entirely using LLMs, none of us will be experts on the applications we push to the customers.

      They push corpo buzzwords like “responsibility”, but who takes responsibility when no one has done the work to begin with? It feels like a liability nightmare, and the idea of sitting there cleaning slopcode just isn’t very appealing to me.

    • dan1101@lemmy.world · 10 days ago

      That’s going to be a problem, almost like a money laundering scheme. AI can spit out content that’s 99% derived from copyrighted content but is itself free of copyright.