👁️
Vigil
Open-source LLM prompt security scanner that detects injection attacks, jailbreaks, and sensitive data expo…
AI Safety & Alignment
Vigil
Open-source LLM prompt security scanner that detects injection attacks, jailbreaks, and sensitive data expo…
Open-source LLM prompt security scanner that detects injection attacks, jailbreaks, and sensitive data exposure in AI applications.
Key Features
- ✓Open-source
- ✓Injection scanning
- ✓Jailbreak detection
- ✓Sensitive data
- ✓Python library
#open-source#prompt-security#llm-scanning#python
Quick Info
- Category
- AI Safety & Alignment
- Pricing
- Free
More AI Safety & Alignment Tools
Prompt Armor
AI Safety & AlignmentAI security platform that detects and blocks prompt injection attacks, jailbreak attempts, and malicious in…
Rebuff
AI Safety & AlignmentOpen-source prompt injection detection platform that uses a self-hardening approach to identify and block a…
Anthropic Constitutional AI
AI Safety & AlignmentAI safety and alignment research
Redwood Research AI Safety
AI Safety & AlignmentAI safety technical research