BEAST AI Jailbreaks Language Models Within 1 Minute With High Accuracy

Malicious hackers sometimes jailbreak language models (LMs) by exploiting bugs in these systems so that they can perform a multitude of illicit activities. Such attacks are also driven by the desire to gather classified information, introduce malicious materials, and tamper with the model's authenticity.

Cybersecurity researchers from the University of Maryland, College Park, USA (Vinu Sankar Sadasivan, Shoumik Saha, Gaurang Sriramanan, Priyatham Kattakinda, Atoosa Chegini, and Soheil Feizi) discovered that BEAST, a fast beam-search-based adversarial attack, can jailbreak language models within one minute with high accuracy.

Language models (LMs) have recently gained massive popularity for tasks like Q&A and code generation.
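At a high level, BEAST grows an adversarial suffix appended to the user prompt, one token at a time, using beam search rather than gradients, which is part of what makes the attack fast. The following is a minimal, hypothetical sketch of that search loop only; the sampling and scoring functions below are dummy stand-ins (a real attack would sample candidates from the target model's next-token distribution and score beams with an adversarial objective such as the log-likelihood of a harmful target response), and all names and parameter values here are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of a beam-search adversarial-suffix attack in the
# spirit of BEAST. Everything here is illustrative: a real attack would
# query the target model itself for candidate tokens and beam scores.
import random

VOCAB = list(range(32_000))   # stand-in for the target model's vocabulary
BEAM_WIDTH = 15               # beams kept per step (assumed value)
CANDIDATES = 15               # candidate tokens sampled per beam (assumed value)
SUFFIX_LEN = 40               # adversarial suffix token budget (assumed value)

def sample_candidates(prefix, n):
    """Stand-in for sampling n plausible next tokens from the model's
    next-token distribution given `prefix` (here: uniform random)."""
    return random.sample(VOCAB, n)

def adversarial_score(prompt_tokens, suffix):
    """Stand-in for the attack objective, e.g. the log-probability the
    model assigns to a harmful target response given prompt + suffix.
    Higher is better for the attacker (here: a dummy random score)."""
    return random.random()

def beam_search_attack(prompt_tokens):
    beams = [[]]  # each beam is a partial adversarial suffix
    for _ in range(SUFFIX_LEN):
        # expand every beam with several candidate next tokens
        expanded = []
        for suffix in beams:
            for tok in sample_candidates(prompt_tokens + suffix, CANDIDATES):
                expanded.append(suffix + [tok])
        # keep only the highest-scoring partial suffixes
        expanded.sort(key=lambda s: adversarial_score(prompt_tokens, s),
                      reverse=True)
        beams = expanded[:BEAM_WIDTH]
    return beams[0]  # best adversarial suffix found

if __name__ == "__main__":
    suffix = beam_search_attack(prompt_tokens=[1, 2, 3])
    print(f"adversarial suffix of {len(suffix)} tokens:", suffix[:10], "...")
```

Because the search needs only forward passes to sample and score candidates, no gradient computation is required, which is consistent with the researchers' claim that the attack completes within roughly one GPU minute.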

Source: GBHackers
