Skip to main content
👁️

Vigil

Open-source LLM prompt security scanner that detects injection attacks, jailbreaks, and sensitive data expo…

AI Safety & Alignment
Vigil logo

Vigil

Open-source LLM prompt security scanner that detects injection attacks, jailbreaks, and sensitive data expo…

Open-source LLM prompt security scanner that detects injection attacks, jailbreaks, and sensitive data exposure in AI applications.

Key Features

  • Open-source
  • Injection scanning
  • Jailbreak detection
  • Sensitive data
  • Python library
#open-source#prompt-security#llm-scanning#python

Get Started

Visit Vigil
🟢
Free
Completely free to use

Quick Info

Category
AI Safety & Alignment
Pricing
Free

More AI Safety & Alignment Tools