G

Groq: Llama Prompt Guard 2 22M

Groq
512
Context
$0.0500
Input /1M
$0.1500
Output /1M
64
Max Output

Groq: Llama Prompt Guard 2 22M

Model Overview

Property Value
Model ID groq/meta-llama/llama-prompt-guard-2-22m
Name Llama Prompt Guard 2 22M
Provider Groq / Meta
Parameters 22M

Description

A lightweight prompt injection detection model with 22 million parameters. Designed to identify and prevent prompt injection attacks in real-time with minimal latency.

Specifications

Spec Value
Context Window 512 tokens
Max Completion 64 tokens
Inference Speed Ultra-fast

Pricing

Type Price
Input $0.05 per 1M tokens
Output $0.15 per 1M tokens

Capabilities

  • Prompt Injection Detection: Yes
  • Real-time Classification: Yes
  • Ultra-low Latency: Yes

Use Cases

Prompt injection prevention, security filtering, input validation for LLM applications.

Integration with LangMart

Gateway Support: Type 2 (Cloud), Type 3 (Self-hosted)

API Usage:

curl -X POST https://api.langmart.ai/v1/chat/completions \
  -H "Authorization: Bearer sk-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "groq/meta-llama/llama-prompt-guard-2-22m",
    "messages": [{"role": "user", "content": "Check prompt"}],
    "max_tokens": 64
  }'
  • groq/meta-llama/llama-prompt-guard-2-86m - Larger version
  • groq/meta-llama/llama-guard-4-12b - Content moderation

Last Updated: December 28, 2025