I spoke to Anthropic’s AI agent Claude about AI collecting massive amounts of personal data and how that information is being used to violate our privacy rights.
LLMs are highly impressive text generators: amazing facsimiles of human writing, and wholly unsuited to anything involving semantic understanding or critical thought. They cannot generate facts, and they don’t understand how the patterns they analyse and reproduce relate to actual concepts or things, but they are extremely “knowledgeable” about those patterns.
They’re a technological marvel, relentlessly abused by grifters posing as prophets to scam the gullible.
Unfortunately, the gullible are executives and representatives.