ChatGPT jailbreak prompts proliferate on hacker forums

ChatGPT jailbreaks have become a popular tool for cybercriminals and continue to proliferate on hacker forums nearly two years after the public release of the groundbreaking chatbot. In that time, several tactics have been developed and promoted as effective ways to circumvent OpenAI's content and safety policies, enabling malicious actors to craft phishing emails and other harmful content.

Source: SC Magazine