Beep@lemmus.org to Technology@lemmy.world · English · edited 1 month ago
Hardening Firefox with Anthropic’s Red Team — blog.mozilla.org
cross-posted to: firefox@lemmy.ml
PabloSexcrowbar@piefed.social · 1 month ago
That even though the team is using AI to check for vulnerabilities, they’re trained and know when their AI is hallucinating and when it’s not.
lIlIlIlIlIlIl@lemmy.world · 1 month ago
I guess I’m not sure how hallucinating and reading from source code overlap. Do you think these models are just barfing back garbage nonsense?
PabloSexcrowbar@piefed.social · 1 month ago
Do you somehow not? Open-source projects have been running out of resources because they’re overwhelmed with bogus bug reports filed by AI.