• 2 Posts
  • 5 Comments
Joined 24 days ago
Cake day: February 18th, 2026

  • The author is supposedly fairly experienced, which makes this take really baffling.

    When someone needs an IDE with new language support, they won’t wait for JetBrains to release it next year, maybe. They will go to a software shop around the corner, pay a few hundred bucks, and get it next Monday. When someone needs a new feature in Photoshop, they won’t wait for Adobe. They will buy a new Photoshop from a friend, with the feature and maybe a few more. When a company needs their accounting system to support a new logistics optimization scheme, they won’t go to Oracle. They will re-write the entire Oracle Fusion, for a few thousand dollars.

    No company in their right mind is going to want to constantly throw away and switch software like shoes. There’s a reason companies employ tons of customer success people to help with migrating and onboarding. It is painful to migrate a tool over.

    Also, what happens with the shoe model when the customer forgot a few requirements? Do you pay for the new disposable software again? Do you keep regenerating new software every time you forget a thing you need to support? And that’s assuming a small company with an easy install process. What about a large company where you have to roll this out to multiple machines at once?

    Okay cool, you pay for a web app instead so it’s easier to distribute. Where are you hosting your disposable app? You wanna manage an AWS account yourself and handle the scaling and infra?

    People don’t pay SaaS companies for functionality itself.

    But all of that assumes that AI can perfectly replicate every aspect of different software, and there’s no context window in the world that will support that. It reeks of the same “DocuSign is gonna be vibecoded away” vibe. You don’t pay DocuSign for an interface to type your name in. You pay them to stay on top of regulatory compliance for document signatures and to support integrations with the other tools you already have.


  • I feel ya. I’ve been told the same thing at my job as well.

    I’d say “find a new gig” but honestly every place has been bitten by this hype train it seems like.

    I’ve been doing a hybrid approach where I use the chatbot to do rudimentary things like label renames and then just do a lot of my work “the normal way”. That way I log some token usage to say I use the tools, and then bet that my output isn’t going to be drastically different from my coworkers’.

    Then when the hype train dies we can all hopefully go back to doing what we do best. It’s just a shitty period that I hope we can ride out.


  • I generally agree with what the post is saying, but this part:

    I think that we’ll still be coding, but with some other layer, as LLMs are good with structured input, like programming languages. So we might need other programming languages than we have atm. Might we need different tools to evaluate LLMs’ output to make it deterministic? Might we need a different approach for engineering to make it scalable? Might we need more?

    I just don’t see this happening, to be honest. It’s the same thing people keep claiming about “prompts replacing code”.

    Let’s say you do make it deterministic. Then why do you need the LLM for it? You can just build a plain old compiler for it. Why add Anthropic or OpenAI as an expensive middleman to your operations? There are already a lot of admin plugins that will set up entire routes and pages based off of a db model. The reason people don’t purely work off of those is that the world isn’t modeled off of simple CRUD. There are so many edge cases and requirements that aren’t easy to capture in a sweeping generalization that you need some way of fine-tuning the output.
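    The scaffolding point can be sketched in a few lines. This is a toy stand-in, not any real admin plugin; `make_crud_handlers` and the `invoice` model are made up here purely to show why generated CRUD covers the generic case but needs an escape hatch for real rules:

    ```python
    # Toy sketch of an "admin plugin": generate CRUD handlers from a
    # model description, the way scaffolding tools do from a db schema.
    def make_crud_handlers(model_name, fields):
        store = {}

        def create(obj_id, **values):
            # The generator can enforce generic rules it knows about...
            unknown = set(values) - set(fields)
            if unknown:
                raise ValueError(f"unknown fields for {model_name}: {unknown}")
            store[obj_id] = values
            return store[obj_id]

        def read(obj_id):
            return store.get(obj_id)

        return create, read

    create_invoice, read_invoice = make_crud_handlers("invoice", ["amount", "currency"])
    create_invoice(1, amount=100, currency="EUR")
    print(read_invoice(1))

    # ...but a domain rule like "invoices over 10k need a second approver"
    # isn't expressible in the model description at all. That rule is why
    # teams drop down from pure scaffolding to hand-written code.
    ```

    The generated handlers are fine until the first requirement that doesn’t fit the schema, which is exactly the fine-tuning problem above.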

    So if you scrap that you’re back to “prompts as code”. Which also sucks.

    If you have a PR that’s breaking production and the only change in it is to a prompt:

    Make the popup background ~~red~~ blue

    How the hell do you triage what went wrong? Do you revert and roll the dice that the LLM is gonna get it right? No one in their right mind would think this is okay in a production setting.
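    The triage problem can be illustrated with a toy sketch. `llm_generate` below is a hypothetical stand-in, not any provider’s API; the point is only that with sampling, one unchanged prompt maps to many possible artifacts, so reverting the prompt text doesn’t revert what ships:

    ```python
    import random

    # Hypothetical stand-in for an LLM call with temperature > 0: the
    # "seed" models run-to-run sampling variation, not a real parameter.
    def llm_generate(prompt, seed):
        rng = random.Random(seed)
        color = prompt.split()[-1]
        # the model may or may not honor the instruction faithfully
        return rng.choice([
            f"background: {color};",
            f"background-color: {color};",
            f"background: dark{color};",
        ])

    prompt = "Make the popup background blue"
    builds = {llm_generate(prompt, seed) for seed in range(10)}

    # Multiple distinct artifacts from one unchanged prompt line: the
    # "source" in the PR no longer determines the behavior you deployed.
    print(len(builds), "distinct builds from the same prompt")
    ```

    With code as source, reverting the diff reverts the behavior; with a prompt as source, reverting the diff just rolls the dice again.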

    I don’t want to say we’ll never have a higher-level abstraction, but I don’t think it’ll come from LLMs.



  • Yeah, that’s a real good point. I focused a lot on the short-term issues of agentic slop, but you’re right, the long-term impact of this is going to be staggering.

    That mental model is ultimately the more important part for the long-term health of the project. Coding is more an activity of communication between people; having an artifact that tells the computer what to do is almost an incidental side-effect of successful communication.

    100%. Something I wanted to touch on in my post, but cut because I couldn’t weave it in well, was the relationship between “The Problem” and “The Code”. Pre-AI, coding acted as a forcing function for problem context. It’s really hard to effectively build software without understanding what you’re ultimately driving towards. With agentic coding we’re stripping that forcing function out.

    Institutional knowledge is already something that’s been hard for C-suites to quantify and value, and now you’re ripping out the crucial mechanism for building it.

    I see a lot of memes with people being like “in 5 years we’re just gonna press yes and not understand what the agent is doing” and I keep thinking “why do people think this is funny?”