LLM Guard
Open-source toolkit for sanitizing LLM inputs and outputs, detecting vulnerabilities, and securing LLM interactions
LLM Guard is an open-source security toolkit for large language models. It provides a suite of input and output scanners that detect prompt injection, sensitive data exposure, hallucinations, toxic content, and other security risks. Developers and platform engineers integrate it as middleware in their LLM application stacks, adding baseline security controls to chatbots, agents, and AI-powered features without building detection from scratch: input scanners run before a prompt reaches the model, and output scanners run before the response reaches the user, as in the sketch below.
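A minimal sketch of that middleware pattern, assuming the `scan_prompt`/`scan_output` entry points and scanner classes documented by the LLM Guard project; exact scanner names and signatures may vary between releases, and `call_llm` is a hypothetical stand-in for your actual model call:

```python
from llm_guard import scan_prompt, scan_output
from llm_guard.input_scanners import PromptInjection, Toxicity
from llm_guard.output_scanners import Sensitive


def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for your actual model call.
    return "Model response goes here."


prompt = "Summarize this customer email for me."

# Input scanners run before the prompt reaches the model.
input_scanners = [PromptInjection(), Toxicity()]
sanitized_prompt, results_valid, results_score = scan_prompt(input_scanners, prompt)
if not all(results_valid.values()):
    raise ValueError(f"Prompt blocked by input scanners: {results_score}")

response = call_llm(sanitized_prompt)

# Output scanners run before the response is returned to the user.
output_scanners = [Sensitive()]
sanitized_response, results_valid, results_score = scan_output(
    output_scanners, sanitized_prompt, response
)
if not all(results_valid.values()):
    raise ValueError(f"Response blocked by output scanners: {results_score}")

print(sanitized_response)
```

Each scan call returns per-scanner validity flags and risk scores alongside the sanitized text, so the middleware can decide whether to block, redact, or simply log a request.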
Key Features
- ✓ Prompt injection detection
- ✓ PII scanning
- ✓ Toxic content filtering
- ✓ Hallucination detection
- ✓ Open-source
Quick Info
- Category: Cybersecurity
- Pricing: Free
More Cybersecurity Tools
- Darktrace: AI-powered cybersecurity platform that uses self-learning AI to detect and autonomously respond to cyber threats in real time.
- CrowdStrike Charlotte AI: CrowdStrike's generative AI security analyst that answers threat questions, investigates incidents, and accelerates response.
- Vectra AI: AI-driven threat detection and response platform that identifies attacker behavior across hybrid and multi-cloud environments.
- Recorded Future AI: AI-powered threat intelligence platform.