Arguably it is a problem with the LLMs, because they are being trained on an unknowable amount of garbage data. It's a garbage-in, garbage-out problem: if the people training their LLMs are not vetting the data being input, then you have to assume that any data output by the LLM contains some level of garbage.
The solution is to only use them for non-critical use cases and vet everything they output.

Hot damn, how did she manage to scam a whole company's executives and its shareholders into thinking she's worth 29.7m dollarydoos?