Large language models (LLMs) such as ChatGPT have shaken up the data security market as companies search for ways to prevent employees from leaking sensitive and proprietary data to external systems. Companies have already begun taking dramatic steps to head off potential leaks, including banning employees from using such tools, adopting the rudimentary controls offered by generative AI providers, and turning to a variety of data security services, such as content scanning and LLM firewalls.
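The content scanning mentioned here generally amounts to inspecting outbound prompts for sensitive material before they reach an external LLM provider. The sketch below illustrates that idea; the patterns and function names are hypothetical and not drawn from any specific product.

```python
import re

# Hypothetical example patterns; a real deployment would rely on a vetted DLP ruleset.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_\-]{16,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def guard_llm_call(prompt: str) -> str:
    """Block the outbound request when the scan flags possible sensitive content."""
    findings = scan_prompt(prompt)
    if findings:
        raise ValueError(f"Prompt blocked: possible sensitive data ({', '.join(findings)})")
    # Otherwise the prompt would be forwarded to the external LLM provider (omitted here).
    return prompt
```

An LLM firewall applies the same kind of check as a network proxy sitting between employees and the provider's API rather than as an in-application call, but the underlying pattern-matching logic is similar.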
Source: Dark Reading: Cloud