Groq: Llama Prompt Guard 2 86M
Model Overview
| Property | Value |
|---|---|
| Model ID | groq/meta-llama/llama-prompt-guard-2-86m |
| Name | Llama Prompt Guard 2 86M |
| Provider | Groq / Meta |
| Parameters | 86M |
Description
A larger prompt injection detection model with 86 million parameters. Offers improved accuracy over the 22M variant while maintaining fast inference speeds.
Specifications
| Spec | Value |
|---|---|
| Context Window | 512 tokens |
| Max Completion | 64 tokens |
| Inference Speed | Very fast |
Pricing
| Type | Price |
|---|---|
| Input | $0.05 per 1M tokens |
| Output | $0.15 per 1M tokens |
Capabilities
- Prompt Injection Detection: Yes
- Higher Accuracy: Yes
- Fast Inference: Yes
Use Cases
Production prompt injection prevention, security-critical applications.
Integration with LangMart
Gateway Support: Type 2 (Cloud), Type 3 (Self-hosted)
API Usage:
curl -X POST https://api.langmart.ai/v1/chat/completions \
-H "Authorization: Bearer sk-your-api-key" \
-H "Content-Type: application/json" \
-d '{
"model": "groq/meta-llama/llama-prompt-guard-2-86m",
"messages": [{"role": "user", "content": "Check prompt"}],
"max_tokens": 64
}'
Related Models
- groq/meta-llama/llama-prompt-guard-2-22m - Smaller version
- groq/meta-llama/llama-guard-4-12b - Content moderation
Last Updated: December 28, 2025