🔐
Prompt Armor
AI security platform that detects and blocks prompt injection attacks, jailbreak attempts, and malicious in…
AI Safety & Alignment
Prompt Armor
AI security platform that detects and blocks prompt injection attacks, jailbreak attempts, and malicious in…
AI security platform that detects and blocks prompt injection attacks, jailbreak attempts, and malicious inputs in LLM applications.
Key Features
- ✓Prompt injection detection
- ✓Jailbreak prevention
- ✓Input filtering
- ✓Real-time protection
- ✓API integration
#ai-security#prompt-injection#safety#llm-security
Quick Info
- Category
- AI Safety & Alignment
- Pricing
- Paid
More AI Safety & Alignment Tools
Rebuff
AI Safety & AlignmentOpen-source prompt injection detection platform that uses a self-hardening approach to identify and block a…
Vigil
AI Safety & AlignmentOpen-source LLM prompt security scanner that detects injection attacks, jailbreaks, and sensitive data expo…
Anthropic Constitutional AI
AI Safety & AlignmentAI safety and alignment research
Redwood Research AI Safety
AI Safety & AlignmentAI safety technical research