Researchers Uncover Vulnerabilities in Open-Source AI and ML Models

A little over three dozen security vulnerabilities have been disclosed in various open-source artificial intelligence (AI) and machine learning (ML) models, some of which could lead to remote code execution and information theft. The flaws, identified in tools such as ChuanhuChatGPT, Lunary, and LocalAI, were reported through Protect AI's Huntr bug bounty platform.

Source: The Hacker News
