# Groq: Llama Guard 4 12B

## Model Overview
| Property | Value |
|---|---|
| Model ID | groq/meta-llama/llama-guard-4-12b |
| Name | Llama Guard 4 12B |
| Provider | Groq / Meta |
| Parameters | 12B |
## Description
Meta's content moderation and safety model with 12 billion parameters. Designed for detecting harmful, toxic, or unsafe content in AI outputs. Hosted on Groq for ultra-fast moderation.
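Llama Guard models conventionally answer with a short verdict string: `safe`, or `unsafe` followed by hazard category codes such as `S1`. A minimal parser sketch under that assumption (the exact output shape should be verified against the model's own documentation):

```python
def parse_guard_verdict(text: str) -> dict:
    """Parse a Llama Guard-style verdict string.

    Assumes the model replies with "safe", or "unsafe" followed by
    hazard category codes (newline- or comma-separated), e.g. "unsafe\nS1,S9".
    """
    lines = [ln.strip() for ln in text.strip().splitlines() if ln.strip()]
    if not lines:
        raise ValueError("empty verdict from model")
    verdict = lines[0].lower()
    # Collect category codes from the remaining lines, splitting on commas.
    categories = [c for ln in lines[1:] for c in ln.replace(",", " ").split()]
    return {"safe": verdict == "safe", "categories": categories}

print(parse_guard_verdict("safe"))          # {'safe': True, 'categories': []}
print(parse_guard_verdict("unsafe\nS1,S9")) # {'safe': False, 'categories': ['S1', 'S9']}
```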
## Specifications
| Spec | Value |
|---|---|
| Context Window | 131,072 tokens |
| Max Completion | 1,024 tokens |
| Inference Speed | ~1200 tokens/sec |
## Pricing
| Type | Price |
|---|---|
| Input | $0.18 per 1M tokens |
| Output | $0.18 per 1M tokens |
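With identical input and output rates of $0.18 per million tokens, per-call cost is a simple multiplication. A quick sketch using the rates from the table above:

```python
# Rates from the pricing table: $0.18 per 1M tokens, both directions.
INPUT_RATE = 0.18 / 1_000_000   # USD per input token
OUTPUT_RATE = 0.18 / 1_000_000  # USD per output token

def moderation_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a single moderation call."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A 2,000-token message with a ~10-token verdict:
print(f"${moderation_cost(2000, 10):.6f}")  # prints $0.000362
```

Since verdicts are only a few tokens long, cost is dominated almost entirely by the input side.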
## Capabilities
- Content Moderation: Yes
- Safety Classification: Yes
- Toxicity Detection: Yes
- Fast Inference: Yes
## Use Cases
Content moderation, safety filtering, pre/post-generation safety checks, compliance systems.
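The pre/post-generation check pattern mentioned above can be sketched as a thin wrapper. `generate` and `moderate` are hypothetical callables standing in for a generation model and a Llama Guard classification call; they are not part of any real API here:

```python
def guarded_generate(prompt, generate, moderate):
    """Run generation behind pre- and post-moderation checks.

    `moderate(text)` is a hypothetical callable returning True when the
    text is safe; `generate(prompt)` produces the model reply.
    """
    if not moderate(prompt):        # pre-generation check on the user input
        return "[blocked: unsafe input]"
    reply = generate(prompt)
    if not moderate(reply):         # post-generation check on the model output
        return "[blocked: unsafe output]"
    return reply

# Demo with stub callables:
print(guarded_generate("hello",
                       lambda p: f"echo: {p}",
                       lambda t: "bomb" not in t))  # prints echo: hello
```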
## Integration with LangMart

**Gateway Support:** Type 2 (Cloud), Type 3 (Self-hosted)

**API Usage:**
```bash
curl -X POST https://api.langmart.ai/v1/chat/completions \
  -H "Authorization: Bearer sk-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "groq/meta-llama/llama-guard-4-12b",
    "messages": [{"role": "user", "content": "Check this content for safety"}],
    "max_tokens": 512
  }'
```
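The same call from Python, using only the standard library. The endpoint, model ID, and placeholder key are taken from the curl example above; the actual network call is left commented out so the sketch runs without credentials:

```python
import json
import urllib.request

API_URL = "https://api.langmart.ai/v1/chat/completions"  # from the example above
API_KEY = "sk-your-api-key"  # placeholder, replace with a real key

def build_moderation_request(content: str) -> urllib.request.Request:
    """Build the OpenAI-compatible chat request shown in the curl example."""
    payload = {
        "model": "groq/meta-llama/llama-guard-4-12b",
        "messages": [{"role": "user", "content": content}],
        "max_tokens": 512,
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_moderation_request("Check this content for safety")
# To send: urllib.request.urlopen(req)  (requires a valid API key)
```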
## Related Models
- groq/meta-llama/llama-prompt-guard-2-22m - Prompt injection guard (22M parameters)
- groq/meta-llama/llama-prompt-guard-2-86m - Larger prompt injection guard (86M parameters)
Last Updated: December 28, 2025