Jailbreak Trick Breaks ChatGPT Content Safeguards

Users have already found a way to work around ChatGPT's programming controls that restrict it from creating content deemed violent, illegal, or otherwise off-limits. The prompt, called DAN (Do Anything Now), uses ChatGPT's token system against it, according to a report by CNBC.

Read the full article on Dark Reading: Cloud
