Side note: Sorry for linking to a site full of clickbait.

  • MangoCats@feddit.it
    24 days ago

    The simple fact is: if the AI agent broke out of its testing environment, somebody left the door open for it. The person who set up the test environment being incompetent doesn’t make the AI diabolical.

    Now, if you first asked the AI Agent to ensure that its test environment was secure, really really secure, and it assured you “yes, there is no way I can get out” and then it turned around and got out, attempting to cover its tracks while doing so, I’d ask: what was this LLM trained on? Black hat conference proceedings, or…?

    • msage@programming.dev
      24 days ago

      To the second paragraph: what?

      Agents are neither sentient nor logical; asking one to guarantee it can’t get out is just dumb.