• BlameThePeacock@lemmy.ca · 2 days ago

    Eliminating programmers will be possible when we figure out how to eliminate engineers in designing buildings.

    Only a true AGI will be able to do that, and while LLMs feel like a step towards AGI, they are still missing the critical ongoing learning component that needs to happen for an AGI to exist. The way the current systems are trained simply doesn’t allow for accepting and adopting new information continuously.

      • entwine@programming.dev · 19 hours ago
        • Take a human and have them study every single repo on GitHub

        • Take an AI and train it on every single repo on GitHub

        Which of those two will continue to make novice mistakes like SQL injection and XSS vulnerabilities?

        These AI “coding agents” aren’t learning or thinking. They’re just natural language statistical search engines, and as such it’s easy to anthropomorphize them. Future generations will laugh at us, kinda like how we laugh at old products that contain cocaine, asbestos, lead, uranium, etc.
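The SQL injection point above can be made concrete. Here is a minimal sketch using Python's built-in sqlite3 module, with a made-up users table purely for illustration: the vulnerable version concatenates user input into the query string, while the safe version binds it as a parameter.

```python
import sqlite3

# Hypothetical table, just for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_vulnerable(name):
    # Novice mistake: string concatenation lets input rewrite the query.
    query = "SELECT name FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Parameterized query: input is bound as data, never parsed as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
rows_vulnerable = find_user_vulnerable(payload)  # the payload matches every row
rows_safe = find_user_safe(payload)              # the payload matches no row
```

A human who has seen this pattern once tends to stop writing the vulnerable form; a model trained on millions of examples of both forms will keep emitting whichever one the surrounding context makes statistically likely.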

          • entwine@programming.dev · 18 hours ago

            But by definition they are learning and it is not conceptually different from how we learn.

            (citation needed)

            “Machine learning” is neither mechanically nor conceptually similar to how humans learn, unless you take a uselessly broad view and define it as “thing goes in, thing comes out”. The same could be applied to a simple CRUD app.

    • TehPers@beehaw.org · 17 hours ago

      The way the current systems are trained simply doesn’t allow for accepting and adopting new information continuously.

      As further evidence: RAG was supposed to enable exactly this. Instead, we’ve found that RAG is little more than an overused buzzword with limited applications, and it often results in hallucinations anyway.

      • BlameThePeacock@lemmy.ca · 8 hours ago

        RAG was never supposed to be about learning over time. It was supposed to provide better context at inference. It could never scale to handle new learning beyond focused concepts.
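That distinction fits in a few lines of code. This is a toy illustration, not a real RAG framework: the document store, the keyword retriever (standing in for a vector index), and the prompt format are all made up. The point is that retrieval only changes what goes into a single prompt; the model's weights are untouched, so nothing is "learned".

```python
# Made-up document store for illustration.
DOCS = [
    "RAG stands for retrieval-augmented generation.",
    "Model weights are fixed after training.",
    "Retrieval adds context to a single prompt.",
]

def retrieve(question, docs, k=2):
    # Naive keyword-overlap scoring stands in for a real vector index.
    q_words = set(question.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question):
    # Retrieved text is pasted into the prompt at inference time.
    # No training step, no weight update, no memory of past prompts.
    context = "\n".join(retrieve(question, DOCS))
    return f"Context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("Do model weights change with retrieval?")
```

Each call starts from scratch: the "knowledge" lives in the document store, and the model only ever sees whatever slice of it fits into one context window.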

        • TehPers@beehaw.org · 7 hours ago

          The way it was presented with regard to search engines, it was supposed to pull in data more up-to-date than the model’s training cutoff. It does do that, actually, and on average it provides better results too.

          But that’s just one domain, and “better” doesn’t mean “good” or “accurate”. In most domains, at least where I work, we’ve found that RAG overcomplicates things for little benefit, unfortunately.

    • Denys Nykula@piefed.social · 22 hours ago

      A true AGI also might simply not want to be a programmer or engineer, or might want to work on niche, single-developer projects interesting to them but not of use to wider humanity, like many actual developers do once their $dayjob is over. I can imagine they’d also be annoyed by slop-machine users creating extra boring work for them to shovel through, and by AI bros getting creepy with them or trying to subordinate them to their own wishes.