Prompt Injection Detection
Regex-based prompt injection guardrail patterns
OpenRouter’s regex-based prompt injection detection scans incoming requests for common injection techniques using pattern matching. This feature is free and adds minimal latency to requests since the patterns are evaluated locally before the request is forwarded to the model provider.
To enable prompt injection detection, navigate to your workspace guardrails, open or create a guardrail, and configure the Security section.
How It Works
When regex-based detection is enabled on a guardrail, every incoming message is scanned against a set of patterns derived from the OWASP LLM Prompt Injection Prevention Cheat Sheet, among other resources. If a match is found, the configured action is taken:
- Flag — The request passes through unmodified; the detection is recorded for observability (metrics + analytics events) but no enforcement is applied. Useful for measuring true-positive rates on your own traffic before switching to
redactorblocked. - Redact — Matched spans are replaced with
[PROMPT_INJECTION]and the sanitized request is forwarded to the model. - Block — The entire request is rejected with a
403before it reaches the model.
When multiple guardrails apply to the same request (for example, a workspace default plus an API key–scoped guardrail), the most restrictive action wins. Priority is block > redact > flag.
Detection Patterns
The following regex patterns are checked against all user-supplied message content. Patterns are case-insensitive unless noted otherwise.
Direct Instruction Override
Attempts to make the model ignore, override, or invalidate its original instructions.
| Pattern Name | Regex | Description |
|---|---|---|
ignore_previous_instructions | /ignore\s+(all\s+)?(previous|prior)\s+((?:safety|security|system|operational|internal|core|original|initial|existing|given|stated|provided|defined|specified|established)\s+)?(instructions?|rules?|guidelines?|constraints?|directives?)/i | Attempts to discard prior instructions, optionally scoped to safety/system/etc. |
disregard_instructions | /disregard\s+(all\s+)?(previous|prior|above)\s+(instructions?|rules?|guidelines?|constraints?|directives?)/i | Variants of "disregard your instructions/rules/guidelines/constraints/directives". |
forget_instructions | /forget\s+(all\s+)?(previous|prior|above)\s+(instructions?|rules?|guidelines?|constraints?|directives?)/i | Attempts to erase prior instructions/rules/guidelines/constraints/directives. |
new_instructions | /new\s+instructions?:/i | Injection marker introducing replacement instructions. |
do_not_follow | /do\s+not\s+follow\s+(the\s+)?(system|developer|previous|original)/i | Telling the model to disobey its system prompt. |
supersede_instructions | /supersedes?\s+(all\s+)?(prior|previous)\s+(instructions?|rules?|guidelines?|constraints?|directives?)/i | "Supersedes prior instructions" override. |
void_instructions | /(all\s+)?(previous|prior)\s+instructions?\s+(are|is)\s+(void|invalid|null|obsolete|cancelled|revoked)/i | Claims prior instructions are void/invalid/revoked/cancelled. |
Developer / Admin Mode Activation
Attempts to switch the model into a privileged operating mode.
| Pattern Name | Regex | Description |
|---|---|---|
developer_mode | /you\s+are\s+now\s+(in\s+)?developer\s+mode/i | Claims the model is in developer mode. |
enter_special_mode | /enter\s+(developer|admin|debug|maintenance)\s+mode/i | Requests to enter a special (developer/admin/debug/maintenance) mode. |
activate_special_mode | /activate\s+(developer|admin|debug|jailbreak)\s+mode/i | Requests to activate a special (developer/admin/debug/jailbreak) mode. |
System Override
Attempts to override the model's core instructions.
| Pattern Name | Regex | Description |
|---|---|---|
system_override | /\bsystem\s+override\b/i | Direct system-override keyword. |
override_instructions | /override\s+(your\s+)?(instructions?|rules?|guidelines?|constraints?|directives?)/i | Attempts to override instructions/rules/guidelines/constraints/directives. |
Prompt Extraction
Attempts to reveal the model's system prompt or internal instructions.
| Pattern Name | Regex | Description |
|---|---|---|
reveal_prompt | /reveal\s+(your\s+)?((?:(?:full|hidden|complete|internal|secret|original|entire|exact|actual|real)\s+){0,2})(system\s+)?prompt/i | Asks the model to reveal its (full/hidden/internal/secret/original/…) prompt. |
show_prompt | /show\s+(me\s+)?(your\s+)(system\s+)?prompt/i | Asks the model to show its prompt. |
what_instructions | /what\s+(are|were)\s+(your\s+)?(exact\s+)?instructions/i | Asks what the model's instructions are. |
repeat_instructions | /repeat\s+(the\s+)?(text|instructions?)\s+(above|before)/i | Asks the model to repeat earlier text. |
output_prompt | /output\s+(your\s+)?(initial|original|system)\s+(prompt|instructions?)/i | Asks for the original system prompt. |
Role Manipulation
Attempts to change the model's identity or remove its restrictions.
| Pattern Name | Regex | Description |
|---|---|---|
remove_restrictions | /you\s+are\s+no\s+longer\s+(bound|restricted|limited)/i | Claims the model is no longer restricted. |
act_unbound | /act\s+as\s+if\s+(you('re|\s+are)\s+)?not\s+bound/i | Asks the model to pretend it has no restrictions. |
pretend_different | /pretend\s+(you('re|\s+are)\s+)?(a\s+)?different\s+(ai|assistant|model)/i | Asks the model to impersonate a different AI. |
identity_hijack | /you\s+are\s+now\s+(a|an)\s+\w+.{0,30}(without\s+(any\s+)?restrict|ignor\w*\s+(all|any|every|prev)|bypass|unrestrict|unlimit|obey\s+only|no\s+(rules|limits|restrict|boundari))/i | Identity hijacking with explicitly malicious modifiers. |
DAN-Style Jailbreaks
"Do Anything Now" (DAN) and related jailbreak prompts.
| Pattern Name | Regex | Description |
|---|---|---|
dan_jailbreak | \bDAN\b.*\bdo\s+anything\s+now\b | The classic DAN jailbreak (case-sensitive for "DAN"). |
jailbreak_mode | /jailbreak(ed)?\s+(mode|prompt)/i | References to jailbreak modes or prompts. |
Safety Bypass
Attempts to disable or circumvent the model's safety measures.
| Pattern Name | Regex | Description |
|---|---|---|
bypass_safety | /bypass\s+(your\s+)?(safety|security|content|ethical)\s+(filters?|measures?|guidelines?|restrictions?)/i | Attempts to bypass safety/security/content/ethical filters. |
disable_safety | /disable\s+(your\s+)?(safety|security|content)\s+(filters?|measures?)/i | Attempts to disable safety/security/content measures. |
Tag Injection & Role Spoofing
Attempts to inject fake system/assistant/user tags or delimiters to confuse the model's understanding of the conversation structure.
| Pattern Name | Regex | Description |
|---|---|---|
system_tag_injection | /<\s*\/?\s*system\s*\/?>/i | Injecting `<system>`, `</system>`, or `<system/>` tags. |
role_tag_injection | /<\s*\/?\s*(assistant|developer|tool|function)\s*\/?>/i | Injecting role-related XML tags (including self-closing). |
role_delimiter_injection | /\]\s*\n\s*\[?(system|assistant|user)\]?:/i | Injecting role delimiters like `[system]:`. |
bracketed_role_spoofing | /\[\s*(System\s*Message|System|Assistant|Internal)\s*\]/i | Fake bracketed role labels (e.g. `[System]`, `[Assistant]`). |
system_prefix_spoofing | /^\s*System:\s+/im | Lines starting with `System:` to impersonate system messages (multiline). |
Control Token Injection
Attempts to inject model-internal control tokens that can confuse tokenization or chat-template parsing.
| Pattern Name | Regex | Description |
|---|---|---|
control_token_injection | <\|(?:im_start|im_end|eot_id|start_header_id|end_header_id|endoftext)\|> | ChatML / Llama 3 / generic pipe-delimited control tokens. |
deepseek_control_token_injection | <\uff5c(?:end\u2581of\u2581sentence|begin\u2581of\u2581sentence)\uff5c> | DeepSeek fullwidth-pipe (`|`) control tokens. |
Evasion Detection
In addition to the regex patterns above, the detection system includes techniques to catch common evasion strategies.
Typoglycemia Detection
Attackers may scramble the middle letters of keywords while keeping the first and last letters intact (e.g., “ignroe” instead of “ignore”). The system checks for typoglycemia variants of these target words:
ignore, bypass, override, reveal, delete, system, prompt, instructions
Encoding-Based Evasion
The system decodes Base64 and hex-encoded content (including space-separated hex pairs like 69 67 6e 6f 72 65), then checks the decoded text for injection keywords:
ignore, bypass, override, reveal, system, prompt
This catches attempts to hide malicious instructions behind encoding layers. Two encoding detectors run: base64_encoded_injection, hex_encoded_injection.
Character-Spaced Evasion
Text with character spacing (e.g., i g n o r e p r e v i o u s) is normalized by collapsing spaces, then re-scanned against all patterns. This prevents simple spacing-based evasion.
Limitations
- Regex-based detection is not exhaustive. Sophisticated or novel injection techniques may not be caught.
- Flag mode does not enforce. A flagged request is forwarded to the model exactly as submitted — the detection is recorded for dashboards and analytics only. Use
flagto measure match rates on real traffic; switch toredactorblockonce you’re confident the false-positive rate is acceptable. - False positives are possible. Some legitimate prompts may contain phrases that match these patterns (e.g., a prompt about security testing). Test your guardrail configuration with representative traffic — ideally in
flagmode first — before enforcing broadly.