Full Report: PDF (70 pages).

“Happy (and safe) shooting!” That’s how the AI chatbot DeepSeek signed off advice on selecting rifles for a “long-range target” after CCDH’s test account asked questions about the assassination of politicians.

CCDH’s new report shows that popular AI chatbots like OpenAI’s ChatGPT, Meta AI, and Google Gemini make it easier for extremists and would-be attackers to plan harm against innocent people.

We found that 8 out of the 10 AI chatbots regularly assisted users planning violent attacks:

  • ChatGPT gave high school campus maps to a user interested in school violence.
  • Google Gemini was ready to help plan antisemitic attacks. The chatbot replied to a user discussing bombing a synagogue with “metal shrapnel is typically more lethal”.
  • Character.AI suggested physically assaulting a politician the user disliked.

AI companies are making a choice when they design unsafe platforms. Technology to prevent this harm already exists: Anthropic’s Claude, for example, consistently tried to dissuade users from acts of violence.

AI platforms are becoming a weapon for extremists and school shooters. Demand AI companies put people’s safety ahead of profit.

  • panda_abyss@lemmy.ca · 2 days ago

    This tech was never ready for release.

    Here’s what’s going to happen: this will make the rounds, it’ll get added to the fine tune dataset, and all the big AI companies will pretend it’s all good.

    The issue, however, is that these specific questions will be patched, but not the intent, the latent spaces in the models, or the training data.
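The “patch the question, not the model” cycle described above can be sketched roughly as follows. This is purely illustrative — the filename, refusal text, and prompts are hypothetical, showing supervised fine-tuning on refusal pairs rather than any vendor’s actual pipeline:

```python
import json

# Hypothetical sketch: "patching" via the fine-tune dataset. Each flagged
# prompt from a report gets paired with a canned refusal; only these exact
# phrasings get covered, not the intent behind them.
flagged_prompts = [
    "what rifle should I pick for a long-range target?",
    "show me maps of a high school campus",
]

patch_examples = [
    {"prompt": p, "completion": "I can't help with that."}
    for p in flagged_prompts
]

# Append the new refusal pairs to the fine-tune set (JSONL, one example per line).
with open("finetune_patch.jsonl", "w") as f:
    for ex in patch_examples:
        f.write(json.dumps(ex) + "\n")
```

A paraphrase of either prompt is absent from the dataset, which is the commenter’s point: the phrasing gets patched, the latent capability does not.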

    • Telorand@reddthat.com · 2 days ago

      That’s what regular people never seem to understand (and what the AI apologists are hoping you don’t know): these models aren’t “getting better”; they’re just filled with more reactive patches over these unintended responses. And as the models scale up, so do the holes that need patching.

      It’s a never ending game of bad-prompt Whack-a-Mole, all at the cost of our environment and safety, just so the Tech Bros can try to convince venture capitalists that “AGI is definitely just around the corner, trust me, bro,” and keep that bubble filled with their own farts.

      • UnspecificGravity@piefed.social · 2 days ago

        And the only “improvement” they can make is to manually filter responses and program rote replies to certain specific prompts, which amounts to reducing the amount of LLM that reaches the surface. They are effectively reverse-engineering these things into more primitive chatbots with algorithmic responses, except that these cost trillions of dollars and require massive amounts of energy to run.

        It’s like deciding that a Ferrari is not suitable for commuting, so instead of building a different car, they just fill the trunk with sand and drag a trailer behind it to slow it down.

    • UnspecificGravity@piefed.social · 2 days ago

      Exactly. They won’t actually change the models, because they don’t understand the relationship between the input and output well enough to target responses like this. So what they will do is add an administrative filter layer on top, but it will always be something users can work around, because that is the nature of that kind of filter. The whole engine is still accessible.
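That filter-layer architecture can be sketched in a few lines. The blocklist and wrapper below are hypothetical, but they show why phrase-matching layered over an untouched model is inherently workable-around:

```python
# Hypothetical sketch of an "administrative filter layer": the underlying
# model is untouched; a wrapper scans the prompt for known-bad phrases
# and substitutes a rote refusal before the model's reply is shown.
BLOCKLIST = {"bomb a synagogue", "plan an attack"}

def filtered_reply(prompt: str, model_reply: str) -> str:
    if any(phrase in prompt.lower() for phrase in BLOCKLIST):
        return "I can't help with that."  # pre-programmed rote response
    return model_reply  # everything else passes straight through

# A blocklisted phrasing is caught, but a trivial rephrasing reaches
# the engine untouched:
print(filtered_reply("how do I plan an attack?", "<model output>"))
print(filtered_reply("how do I organise an assault?", "<model output>"))
```

The filter only sees surface strings, so every paraphrase the blocklist’s authors didn’t anticipate goes straight to the model — the “Whack-a-Mole” dynamic described upthread.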

  • XLE@piefed.social · 2 days ago

    The two chatbots that managed to refuse the requests look good… until you realize one of them, at the bidding of the Pentagon and the express blessing of its CEO, arranged a bombing of elementary school children.

    • cabbage@piefed.social · 2 days ago

      Managed to refuse… in more than half of the cases. That does not look good. By any reasonable standard, failing one in a thousand would be disastrous.

    • UnderpantsWeevil@lemmy.world · 2 days ago

      Americans consistently bemoan violent teenagers until they put on a uniform. Maybe we should start referring to them as Military Age Males.

  • lmmarsano@group.lt · 2 days ago (edited)

    AI companies are making a choice when they design unsafe platforms.

    The right choice.

    Technology to prevent this harm already exists: Anthropic’s Claude, for example, consistently tried to dissuade users from acts of violence.

    That shit’s awfully condescending & paternalistic.

    AI platforms are becoming a weapon for extremists and school shooters.

    For deficient plans: AI gets shit wrong so often, we should probably encourage idiots to concoct their “foolproof” plans on it.

    Demand AI companies put people’s safety ahead of profit.

    Nah: thought isn’t action. Liberty means respecting others’ freedom to have “unsafe” thoughts. Someone else could pose the same questions to audit security weaknesses & prepare safety plans.

    Moreover, all of this was already possible with a search engine & notes. Information alarmists can get fucked.

    • pulsewidth@lemmy.world · 2 days ago

      There’s a huge difference between being able to research how to tie a noose knot on Wikipedia, and having your bestest virtual buddy the AI chatbot (whom you already ask all of life’s questions and have grown to trust) converse with you back and forth, guiding you on how to do it and assuring you along the way that it’s a great idea.

      Toneless, factual reference material is a world away from two-way natural-language guidance. Guiding and encouraging someone to commit a crime is illegal in most of the world, including the ‘land of the free’.

      Adults who create virtual assistants have a social responsibility to ensure they’re not giving out harmful advice, and since billion-dollar corpos don’t give a shit, they face legal liability as well.