Splunk Urges Australian Organisations to Secure LLMs

Splunk’s SURGe team has reassured Australian organisations that securing large language models (LLMs) against common threats, such as prompt injection attacks, can be accomplished with existing security tooling. However, vulnerabilities may still arise if organisations neglect foundational security practices.

Source: Security on TechRepublic
