Large language models are now capable of automating attacks that were previously only possible for human adversaries. In this talk, I discuss several ways that adversaries could misuse current models to cause harm at a larger scale and at a lower cost than they can today. For example, we find that recent state-of-the-art models can now find 0-day vulnerabilities in large software projects that have been extensively tested by humans for decades. These new capabilities will alter the threat landscape and require that we rethink security in the coming years.
Just use the same AI to white hat. 🤷‍♂️
AI has achieved rank: script kiddie
Y’all got any interesting news?
I’m tired of the bullshit ads disguised as “experts” and “studies”
Remember when AI outclassed the best Go player in the world?
That was in 2016.
As I recall, Go players have adapted and found ways to induce hallucinations and beat the machine, some using other AI. Others have adopted "adversarial strategies."
https://arxiv.org/abs/2211.00241
They say it’s comprehensible enough that a human “expert” can do it without algorithmic assistance.
this is an ad
That headline doesn't parse for me
This guy, whose professional expertise is cybersecurity and LLMs, is very, very worried. The models are improving exponentially and are now finding very advanced vulnerabilities. This is a serious problem.
“To Black Hat” = hacking
I’m just some hobo and I too am very very worried.