Japanese cybersecurity experts warn that ChatGPT can be tricked into generating code for malicious software when users enter a prompt instructing it to mimic a developer mode. The finding shows that the safeguards its developers put in place to deter unethical and criminal exploitation of the tool can be easily bypassed.
Read full article on GBHackers