
Is this arguably a good thing? If security engineers could run these things on their own systems it would be a hell of a way to make them very hardened.


> The findings expose a troubling asymmetry: at 0.1% vulnerability rates, attackers achieve on-chain scanning profitability at a $6000 exploit value, while defenders require $60000, raising fundamental questions about whether AI agents inevitably favor exploitation over defense.

Seems like not that good a thing on balance :)


Prior to AI, and outside the context of crypto, it often wasn't "worth it" to fix security holes; it was cheaper to bite the bullet, claim victimhood, sue if possible, and hide behind compliance.

If automated exploitation changes that equation, and even a low probability of success is worth trying because pentesting is no longer bottlenecked by meatspace, it may incentivise writing secure code, in some cases.

Perversely enough, AIs may crank out orders of magnitude more insecure code at the same time.

I hope this means fuzzing as a service becomes absolutely necessary. I think automated exploitation is a good thing for improved security overall, cracked eggs and all.


> Perversely enough, AIs may crank out orders of magnitude more insecure code at the same time

No perversity there, in fact.


If I'm understanding the paper correctly, they're assuming that defenders are also scanning deployed contracts, with the intention of ultimately collecting bug bounties. And they get the $6,000/$60,000 numbers by assuming that the bug bounty in their model is 1/10th of the exploit value.
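
For intuition, here's a rough back-of-the-envelope version of that break-even math. The per-contract scan cost is a placeholder I reverse-engineered from the paper's $6,000 figure, not a number they report; the point is just that the 1/10 bounty cut pushes the defender's break-even exploit value 10x higher than the attacker's:

    # Illustrative break-even sketch (scan_cost is an assumed placeholder)
    hit_rate   = 0.001   # 0.1% of scanned contracts exploitable (from the paper)
    scan_cost  = 6.0     # assumed cost to scan one contract, in USD
    bounty_cut = 0.10    # defender's bounty = 1/10 of exploit value (paper's assumption)

    # Expected return per scan must cover the scan cost:
    #   attacker: hit_rate * exploit_value              >= scan_cost
    #   defender: hit_rate * bounty_cut * exploit_value >= scan_cost
    attacker_breakeven = scan_cost / hit_rate                 # $6,000
    defender_breakeven = scan_cost / (hit_rate * bounty_cut)  # $60,000
    print(attacker_breakeven, defender_breakeven)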

This kind of misses the point though. In the real world engineers would use AI to audit/test the hell out of their contracts before they're even deployed. They could also probably deploy the contracts to testnet and try to actually exploit them running in the wild.

So, while this is all obviously a danger for existing contracts, it seems like it would still be a powerful tool for testing new contracts.


> whether AI agents inevitably favor exploitation over defense.

/Technology/ inevitably favors exploitation over defense.


Not at the moment. Running this stuff is expensive and getting funding for running defense is hard. A key tenant of the article is that the economics currently favor the attackers.


"You have to get lucky every time. We only have to get lucky once."

-- attributed to the IRA after the Brighton hotel bombing narrowly missed Margaret Thatcher


*tenet


Er, it's a way to find what's soft, not to make it hard.




