NVIDIA releases NeMo Guardrails for safe and trustworthy LLM conversational systems
DukeRem, 27 April 2023
#NVIDIA has announced the #release of #NeMo #Guardrails, an #opensource #toolkit for building safe and trustworthy Large Language Model (#LLM) conversational systems. The new toolkit is designed to work with all LLMs, including #OpenAI's #ChatGPT, and builds on community toolkits such as #LangChain, which has gathered around 30,000 stars on #GitHub within just a few months.

NeMo Guardrails enables developers to add programmable rules that define the desired user interactions within an application, making it easy to guide chatbots. These programmable constraints, called guardrails, sit between the user and the LLM, much like guardrails on a highway define the width of the road and keep vehicles from veering into unwanted territory. They monitor and shape a user's interactions, keeping conversations focused on a particular topic, ensuring that responses do not contain misinformation, toxic language, or inappropriate content, and preventing the LLM from executing malicious code or calling external applications in ways that pose security risks.

NeMo Guardrails is built on Colang, a modeling language and runtime developed by NVIDIA for conversational AI. Colang aims to give users a readable and extensible interface for defining and controlling the behavior of conversational bots in natural language. The toolkit is fully programmable, so developers can customize and improve their guardrails over time.

NVIDIA is incorporating NeMo Guardrails into the NeMo framework, which includes everything needed for training and tuning language models using a company's domain expertise and datasets. NeMo is also available as a service and is part of NVIDIA AI Foundations, a family of cloud services for businesses that want to create and run custom generative AI models based on their own datasets and domain knowledge.

NeMo Guardrails is a significant step towards trustworthy, safe, and secure LLM conversational systems. The toolkit ships with several examples addressing common patterns for implementing guardrails, making it easy for developers to build their own safe and secure LLM-powered conversational systems, and NVIDIA says it looks forward to working with the AI community to make the power of trustworthy, safe, and secure LLMs accessible to everyone.
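
To illustrate how the guardrails and Colang fit together, here is a minimal sketch, not taken from NVIDIA's announcement, of a topical rail defined in Colang and loaded through the toolkit's Python API. The politics flow, the model name, and the inline configuration are illustrative assumptions; the official repository contains the canonical examples.

```python
# A minimal sketch (illustrative, not from the announcement): a Colang flow
# that keeps the bot away from one topic, loaded via the nemoguardrails package.
from nemoguardrails import LLMRails, RailsConfig

# Colang rails: example user intents and the flow the bot should follow.
colang_content = """
define user ask about politics
  "what do you think about the election?"
  "which party should I vote for?"

define bot refuse politics
  "Sorry, I can't discuss politics. Is there anything else I can help with?"

define flow politics rail
  user ask about politics
  bot refuse politics
"""

# General configuration: which LLM backs the bot (model name is an assumption).
yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo
"""

config = RailsConfig.from_content(
    colang_content=colang_content, yaml_content=yaml_content
)
rails = LLMRails(config)

# The guardrails sit between the user and the LLM. Assumes OPENAI_API_KEY is set.
response = rails.generate(
    messages=[{"role": "user", "content": "Which party should I vote for?"}]
)
print(response["content"])
```

In this sketch, a user message that matches the `ask about politics` intent triggers the flow and returns the canned refusal instead of being forwarded to the model, while any other message is passed through to the LLM as usual.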