Protect AI Guardian scans ML models to determine if they contain unsafe code

Guardian is based on ModelScan, an open-source tool from Protect AI that scans machine learning models to determine if they contain unsafe code. Guardian builds on that open-source foundation, adds enterprise-level enforcement and management of model security, and extends coverage with proprietary scanning capabilities.
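The "unsafe code" concern comes from how many model formats are serialized: pickle-based files can execute arbitrary code the moment they are loaded. The sketch below is not Guardian's or ModelScan's implementation, just a minimal illustration of the attack class such scanners look for, using Python's `__reduce__` hook:

```python
import pickle

# Pickle lets an object control its own deserialization via __reduce__:
# the callable it returns is invoked with the given arguments at load
# time. A pickle-based "model" file can therefore run arbitrary code
# when loaded -- the class of issue model scanners try to detect.
class MaliciousPayload:
    def __reduce__(self):
        # Harmless stand-in for a real payload (an attacker would use
        # something like os.system instead of eval of simple arithmetic).
        return (eval, ("1 + 1",))

blob = pickle.dumps(MaliciousPayload())  # looks like ordinary model bytes
result = pickle.loads(blob)              # code executes during loading
print("unpickling executed code, returned:", result)
```

Loading the blob never reconstructs a `MaliciousPayload` object at all; the embedded call runs instead, which is why scanning model files before loading them matters.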

Source: Help Net Security
