I agree with this. We've made existing problems 100x worse overnight. I just read that the curl project is discontinuing its bug bounty program. We're losing so much with the rise of AI.
That seems a bit fatalistic: "we have lost so much because curl discontinued bug bounties". It's unfortunate, but very minor in the grand scheme of things.
Also, the fault there lies squarely with charlatans who have been asked/told not to submit "AI slop" bug bounties and yet continue to do so anyway, not with the AI tools used to generate them.
Indeed, intelligent researchers have used AI to find legitimate security issues (I recall a story last month on HN about a valid bug being found and disclosed intelligently with AI in curl!).
Many tools can be used irresponsibly. Knives can be used to kill someone, or to cook dinner. Cars can take you to work, or take someone's life. AI can be used to generate garbage, or for legitimate security research. Don't blame the tool, blame the user of it.
Blaming only people is also incorrect; it's easy to see that once the cost of submission dropped low enough relative to the possible reward, bounties would become unviable.
AI made the cost of entry very low by pushing that cost onto the people offering the bounty.
There will always be a percentage of people desperate or unscrupulous enough to do that basic math. You can blame them, but it's like blaming water for being wet.
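The asymmetry described above can be sketched as a simple expected-value calculation. All the numbers here are hypothetical, just to illustrate the shift:

```python
# Hypothetical numbers: a bounty submission is "worth it" to a submitter
# whenever the expected payout exceeds the cost of producing a report.
def submission_is_rational(bounty, acceptance_prob, cost_per_report):
    """Return True if the expected reward covers the submitter's cost."""
    return bounty * acceptance_prob > cost_per_report

# Pre-LLM: writing even a superficially plausible report takes real effort,
# so spraying low-quality submissions doesn't pay.
print(submission_is_rational(bounty=500, acceptance_prob=0.01,
                             cost_per_report=200))   # False

# Post-LLM: generating a plausible-looking report is nearly free, so even
# a tiny acceptance probability makes mass submission rational -- while the
# triage cost lands entirely on the project running the bounty.
print(submission_is_rational(bounty=500, acceptance_prob=0.01,
                             cost_per_report=0.10))  # True
```

The point isn't the specific numbers; it's that driving `cost_per_report` toward zero flips the inequality for spammers no matter how low their acceptance rate is.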
> In places where guns are difficult to come by, you'll find knife crime in its place.
By how much and how consequential exactly, and how would we know?
There were reportedly 14,650 gun deaths in the US in 2025, and 205 homicides by knife in the UK in 2024-2025 [0][1]. Adjusting for population (roughly 340 million vs 68 million), US gun deaths per capita exceed UK knife homicides by roughly 14x.
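That per-capita comparison can be checked with quick arithmetic, using the figures cited above and approximate populations:

```python
# Figures cited above; populations are approximate (US ~340M, UK ~68M).
us_gun_deaths = 14_650       # US, 2025
uk_knife_homicides = 205     # UK, 2024-2025

us_rate = us_gun_deaths / 340e6      # deaths per person
uk_rate = uk_knife_homicides / 68e6

print(f"US gun deaths per million: {us_rate * 1e6:.1f}")
print(f"UK knife homicides per million: {uk_rate * 1e6:.1f}")
print(f"ratio: {us_rate / uk_rate:.1f}x")
```

With these inputs the ratio comes out around 14x (about 43 vs 3 per million).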
Good question. Canada has twice as many registered firearms as the US (though the number of unregistered firearms is likely greater in the US). It's certainly not difficult to purchase guns in either country. And Canada experiences an order of magnitude fewer gun deaths per capita than the US. The US is fairly unusual among western nations in how it handles mental illness and crime, and I would suggest those are more fruitful avenues of inquiry.
So I'll stand by the stance that individuals are responsible for their own actions, that tools cannot bear responsibility for how they are used on account of being inanimate objects, and that all tools serve constructive and destructive purposes, sometimes simultaneously.
> Also, the fault there lies squarely with charlatans who have been asked/told not to submit "AI slop" bug bounties and yet continue to do so anyway, not with the AI tools used to generate them.
I think there's a general feeling that AI is most readily useful for bad purposes. Some of the most obvious applications of an LLM are spam, scams, or advertising. There are plenty of legitimate uses, but they lag compared to these because most non-bad actors actually care about what the LLM output says and so there are still humans in the loop slowing things down. Spammers have no such requirements and can unleash mountains of slop on us thanks to AI.
The other problem with AI and LLMs is that the leading-edge stuff everyone uses is radically centralized. Something like a knife is owned by the person using it. LLMs are generally owned by one of a few massive corporations, and the best you can do is rent access to them. I would argue this structural aspect of AI is inherently bad regardless of what you use it for, because it centralizes control of a very powerful tool. Imagine a knife whose manufacturer could make it go dull or sharp on command depending on what you were trying to cut.