Google claims Big Sleep ‘first’ AI to spot freshly committed security bug that fuzzing missed

Google claims one of its AI models is the first of its kind to spot a memory safety vulnerability in the wild – specifically an exploitable stack buffer underflow in SQLite – which was then fixed before the buggy code’s official release. The Chocolate Factory’s LLM-based bug-hunting tool, dubbed Big Sleep, is a collaboration between Google’s Project Zero and DeepMind.

Source: The Register

 

