Evaluating 35 open-weight models across three context lengths (32K, 128K, 200K), four temperatures, and three hardware platforms—consuming 172 billion tokens across more than 4,000 runs—we find that the answer is “substantially, and unavoidably.” Even under optimal conditions—best model, temperature chosen specifically to minimize fabrication—the floor is non-zero and rises steeply with context length. At 32K, the best model (GLM 4.5) fabricates 1.19% of answers, top-tier models fabricate 5–7%, and the median model fabricates roughly 25%.

    • [deleted]@piefed.world · 2 days ago

      Aka being wrong, but with a fancy name!

When Cletus is wrong because he mixed up a dog and a cat when describing their behavior, do we call it hallucinating? No.

      • Scipitie@lemmy.dbzer0.com · 2 days ago

        Accepting concepts like “right” and “wrong” gives those tools way too much credit, basically following the AI narrative of the corporations behind them. Those terms can be applied to the output, but not to the tool itself.

        To be precise:

        LLMs can’t be right or wrong because the way they work has no link to any reality - it’s stochastics, not evaluation. I also don’t like the term “hallucination” for the same reason. It’s simply too high a temperature setting jumping to a nearby but unrelated vector set.
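
        For concreteness, here is a minimal sketch of what the temperature setting does during sampling: the logits are divided by T before the softmax, so T > 1 flattens the distribution and makes nearby-but-unrelated tokens more likely to be picked. The logits below are toy numbers, not from any real model.

        ```python
        import numpy as np

        def sample_with_temperature(logits, temperature, rng):
            """Sample one token index from logits rescaled by temperature."""
            scaled = np.asarray(logits, dtype=float) / temperature
            scaled -= scaled.max()  # subtract max for numerical stability
            probs = np.exp(scaled) / np.exp(scaled).sum()
            return rng.choice(len(probs), p=probs)

        rng = np.random.default_rng(0)
        logits = [4.0, 2.0, 0.5, 0.0]  # toy next-token scores
        for t in (0.2, 1.0, 2.0):
            picks = [sample_with_temperature(logits, t, rng) for _ in range(1000)]
            # Higher t spreads probability mass onto the low-scoring tokens.
            print(t, np.bincount(picks, minlength=len(logits)) / 1000)
        ```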

        Why this is an important distinction: arguing that an LLM is wrong is arguing on the ground of ChatGPT and the likes. It’s then an “oh, but we’ll make them better!” and their marketing departments are overjoyed.

        To take your calculator analogy: just as those tools have floating-point errors that are inherent to them, wrong outputs are a core part of LLMs.

        We can minimize that, but then they automatically lose part of their function. This limitation is far stronger on LLMs than limiting a calculator to 16 digits after the decimal point, though…
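
        The floating-point analogy is easy to demonstrate; this is standard IEEE 754 double-precision behavior in any language, shown here in Python:

        ```python
        # 0.1 and 0.2 have no exact binary representation, so their
        # sum is not exactly 0.3 (a classic IEEE 754 rounding error).
        print(0.1 + 0.2)         # 0.30000000000000004
        print(0.1 + 0.2 == 0.3)  # False

        # A calculator display "minimizes" this by rounding the shown
        # digits; the underlying error is still there.
        print(round(0.1 + 0.2, 15))  # 0.3
        ```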

          • eceforge@discuss.tchncs.de · 2 days ago

            No comment on the rest of the thread, but I always thought “confabulation” was a more accurate word than “hallucination” for what LLMs tend to do.

            The “signs and symptoms” part of the article feels oddly familiar compared to interacting with an LLM sometimes, haha.

          • Scipitie@lemmy.dbzer0.com · 2 days ago

            That’s my problem: any single word humanizes the tool, in my opinion. Perhaps something like “stochastic debris” comes close, but there’s no chance of countering the combined force of pop culture, corp speak, and humanity’s talent for seeing humanoid behavior everywhere but in each other. :(

              • deranger@sh.itjust.works · 2 days ago

                Pareidolia just means seeing patterns that aren’t there; it’s not implicitly human. If you see a dog in the clouds, that’s pareidolia.

                • Telorand@reddthat.com · 1 day ago

                  Great, when did I say otherwise? Pareidolia is a thing humans do, because we like patterns. Finding patterns has benefited our species, but the tendency is sometimes so strong that we see faces in electrical outlets or in the front profile of a car (for example).

                  • deranger@sh.itjust.works · 1 day ago

                    I mean, it doesn’t really follow given the context. Nobody is talking about the visual sense; they’re talking about humanizing AI through the use of certain words, which isn’t pareidolia.

          • leftzero@lemmy.dbzer0.com · 2 days ago

            Scam. We’re being sold an autocomplete tool as a search engine.

            Or fraud, since some of the same companies destroyed the functionality of their search engines in order to make the autocomplete look better in comparison.

      • bad1080@piefed.social · 2 days ago

        If you have a lobby, you get special names; look at the pharma industry, which coined the term “discontinuation syndrome” for simple “withdrawal”.