LLM Guard: Open-source toolkit for securing Large Language Models

LLM Guard is a toolkit designed to fortify the security of Large Language Models (LLMs). It is designed for easy integration and deployment in production environments.

Source: Help Net Security

 


Date:

Categorie(s):