NVIDIA Launches Open-Source Tool for Developing Safer and More Secure AI Models

NVIDIA has recently introduced NeMo Guardrails, a tool that helps developers keep their generative AI applications accurate, appropriate, and secure. It lets companies enforce three types of limits on applications built around large language models (LLMs): “topical guardrails” that keep the AI from veering into subject areas it was not intended to cover, safety guardrails that steer it toward accurate and appropriate responses, and security guardrails that restrict the application to connecting only with external services known to be safe.
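
To illustrate how such rails are put in place, the sketch below uses the toolkit's Python API together with Colang, the dialog-modeling language that ships with NeMo Guardrails, to define a simple topical guardrail. The model choice, example utterances, and flow names here are illustrative assumptions rather than details from NVIDIA's announcement:

```python
# A minimal sketch of a topical guardrail with NeMo Guardrails.
# Assumes `pip install nemoguardrails` and an OPENAI_API_KEY set in
# the environment; the model and topic below are hypothetical examples.
from nemoguardrails import LLMRails, RailsConfig

# YAML configuration selecting the underlying LLM (illustrative choice).
YAML_CONTENT = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

# Colang content defining the rail: recognize questions about an
# off-limits topic (politics, in this example) and steer the bot away.
COLANG_CONTENT = """
define user ask about politics
  "What do you think about the government?"
  "Which political party should I support?"

define bot refuse to discuss politics
  "I'm a support assistant, so I can't weigh in on political topics."

define flow politics
  user ask about politics
  bot refuse to discuss politics
"""

# Build the rails configuration from the strings above and wrap the LLM.
config = RailsConfig.from_content(
    colang_content=COLANG_CONTENT,
    yaml_content=YAML_CONTENT,
)
rails = LLMRails(config)

# A message matching the off-limits topic triggers the canned refusal
# instead of being passed through to the underlying model.
response = rails.generate(messages=[
    {"role": "user", "content": "Which political party should I support?"}
])
print(response["content"])
```

Safety and security rails follow the same pattern: flows can invoke checks such as fact-checking or moderation before a response is returned, or limit which external services the application may call.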

NeMo Guardrails is designed to work with a broad range of LLMs, including OpenAI's popular ChatGPT, and software developers can use it without machine learning expertise. Moreover, because the software is open source, it integrates with the tools enterprise developers already use, such as the LangChain framework.