return2ozma@lemmy.world to Technology@lemmy.world · English · 1 month ago
Testing suggests Google's AI Overviews tell millions of lies per hour (arstechnica.com)
25 comments · cross-posted to: technology@beehaw.org, news@lemmy.world
8oow3291d@feddit.dk · 1 month ago
> LLMs don’t have any intentions.
Eh. The output from LLMs is usually pretty goal-oriented, so it arguably has intentions. The LLM isn’t designed to deceive, though, so in that sense it’s correct that these aren’t lies.
deliriousdreams@fedia.io · 1 month ago
The people who program, run, and maintain the LLM have intentions. The LLM itself is not a sapient or sentient entity.
supamanc@lemmy.world · 1 month ago
An LLM is a statistical modeling tool. It doesn’t have goals, and it can’t have intentions; it just produces output according to an algorithm.
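To make the "statistical modeling tool" point concrete, here is a minimal toy sketch of the sampling step an LLM performs: it repeatedly draws the next token from a probability distribution, with no goal beyond that. The tiny lookup-table "model" and its probabilities here are invented for illustration; a real LLM computes the distribution with a neural network, but the generation loop is the same idea.

```python
import random

# Toy "model": hand-made next-token probabilities (illustrative only).
# A real LLM would compute these with a neural network over its vocabulary.
model = {
    "the": [("cat", 0.5), ("dog", 0.3), ("moon", 0.2)],
    "cat": [("sat", 0.6), ("ran", 0.4)],
    "dog": [("barked", 0.7), ("slept", 0.3)],
    "sat": [("<end>", 1.0)],
    "ran": [("<end>", 1.0)],
    "barked": [("<end>", 1.0)],
    "slept": [("<end>", 1.0)],
    "moon": [("<end>", 1.0)],
}

def generate(start: str, rng: random.Random) -> str:
    """Sample one token at a time until the end marker is drawn."""
    tokens = [start]
    while tokens[-1] in model:
        candidates, weights = zip(*model[tokens[-1]])
        nxt = rng.choices(candidates, weights=weights)[0]
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the", random.Random(0)))
```

Nothing in the loop represents truth, belief, or intent: the output is whatever sequence the weighted dice produce, which is why "hallucination" (or "lying") is just the same sampling process landing on a false-sounding continuation.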