Securing AI systems against evasion, poisoning, and abuse

Adversaries can intentionally mislead or “poison” AI systems, causing them to malfunction, and developers have yet to find an infallible defense against such attacks. In their latest publication, NIST researchers and their partners highlight these AI and machine learning vulnerabilities.
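
To make the “poisoning” category concrete, the sketch below is a purely illustrative example (not taken from the NIST publication): it uses scikit-learn to show how flipping a fraction of training labels, a simple form of data poisoning, can degrade a classifier. The synthetic dataset, logistic regression model, and 30% flip rate are all arbitrary assumptions chosen for demonstration.

```python
# Illustrative sketch of label-flipping data poisoning (not from the NIST report).
# An adversary who can corrupt part of the training labels degrades the model
# trained on that data, even though the model code itself is untouched.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary classification data stands in for a real training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

def train_and_score(X_tr, y_tr):
    """Train a classifier on the given training set and score it on held-out data."""
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return accuracy_score(y_test, model.predict(X_test))

# Baseline: model trained on clean labels.
clean_acc = train_and_score(X_train, y_train)

# Poisoned run: flip the labels of 30% of the training points (assumed attack budget).
y_poisoned = y_train.copy()
flip_idx = rng.choice(len(y_poisoned), size=int(0.3 * len(y_poisoned)), replace=False)
y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]
poisoned_acc = train_and_score(X_train, y_poisoned)

print(f"accuracy with clean training labels:    {clean_acc:.3f}")
print(f"accuracy with poisoned training labels: {poisoned_acc:.3f}")
```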

Source: Help Net Security