Skip to main content
🛡️

Rebuff AI

Open-source self-hardening prompt injection detector

Code & Development
Rebuff AI logo

Rebuff AI

Open-source self-hardening prompt injection detector

Rebuff is an open-source prompt injection detection library that uses a multi-layer defense including heuristics, LLM-based analysis, and a vector database of known attacks. It learns from failed injection attempts to harden itself over time. Developers building LLM applications use Rebuff as an open-source alternative to commercial prompt injection protection.

Key Features

  • Multi-layer injection detection
  • Vector-based attack memory
  • Self-hardening from failed attacks
  • Canary token support
  • Python SDK
#prompt injection#llm security#open source#ai safety#security library

Get Started

Visit Rebuff AI
🟢
Free
Completely free to use

Quick Info

Category
Code & Development
Pricing
Free

More Code & Development Tools