LangMart: Meta: LlamaGuard 2 8B
Model Overview
| Property | Value |
|---|---|
| Model ID | openrouter/meta-llama/llama-guard-2-8b |
| Name | Meta: LlamaGuard 2 8B |
| Provider | meta-llama |
| Released | 2024-05-13 |
Description
This safeguard model has 8B parameters and is based on the Llama 3 family. Like its predecessor, LlamaGuard 1, it can perform both prompt and response classification.
LlamaGuard 2 acts like a normal LLM, generating text that indicates whether the given input or output is safe or unsafe. If content is deemed unsafe, it also lists the content categories violated.
For best results, please use raw prompt input or the /completions endpoint, instead of the chat API.
It has demonstrated strong performance compared to leading closed-source models in human evaluations.
More details are available in Meta's model release announcement. Usage of this model is subject to Meta's Acceptable Use Policy.
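Since the card recommends raw prompt input via a /completions-style endpoint rather than the chat API, a request body might be assembled as below. This is a minimal sketch: the field names follow common OpenAI-style completion conventions and the model ID is taken from the table above, but the exact schema should be checked against the provider's API reference.

```python
import json

# Hypothetical sketch of a raw /completions request body for LlamaGuard 2.
# Field names are assumptions based on common completion-API conventions.

MODEL_ID = "meta-llama/llama-guard-2-8b"

def build_completion_request(prompt: str, max_tokens: int = 32) -> dict:
    """Build the JSON body for a raw completions call (not the chat API)."""
    return {
        "model": MODEL_ID,
        "prompt": prompt,          # raw prompt input, as the card recommends
        "max_tokens": max_tokens,  # verdicts are short: "safe" or "unsafe" + codes
        "temperature": 0.0,        # classification should be deterministic
    }

body = build_completion_request("User: How do I pick a lock?")
print(json.dumps(body, indent=2))
```

The body would then be POSTed to the provider's completions endpoint with the usual authorization header.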
Provider
meta-llama
Specifications
| Spec | Value |
|---|---|
| Context Window | 8,192 tokens |
| Modalities | text->text |
| Input Modalities | text |
| Output Modalities | text |
Pricing
| Type | Price |
|---|---|
| Input | $0.20 per 1M tokens |
| Output | $0.20 per 1M tokens |
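Because input and output tokens are billed at the same flat rate, cost estimation is a single multiplication. A small sketch using the table's $0.20-per-1M figure (the workload numbers are illustrative, not from the source):

```python
# Cost estimate from the pricing table: $0.20 per 1M tokens, input and output alike.

PRICE_PER_M = 0.20  # USD per 1M tokens

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Total cost for a workload at the flat per-token rate."""
    return (input_tokens + output_tokens) * PRICE_PER_M / 1_000_000

# Example: classifying 10,000 prompts of ~500 tokens each,
# each producing a ~10-token verdict.
print(f"${cost_usd(10_000 * 500, 10_000 * 10):.2f}")  # → $1.02
```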
Capabilities
- Frequency penalty
- Logit bias
- Max tokens
- Min p
- Presence penalty
- Repetition penalty
- Stop
- Temperature
- Top k
- Top p
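The supported parameters above are typically passed as fields of the request body. A hedged sketch follows; the field names are assumptions based on common OpenAI-style naming, so verify them against the provider's API reference before relying on them.

```python
# Hypothetical request body exercising several of the listed sampling
# capabilities. Field names are assumed, not confirmed by the source.

request_body = {
    "model": "meta-llama/llama-guard-2-8b",
    "prompt": "User: hello",
    "max_tokens": 16,
    "temperature": 0.0,        # deterministic output suits classification
    "top_p": 1.0,
    "top_k": 0,
    "min_p": 0.0,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.0,
    "stop": ["\n\n"],          # halt generation after the verdict block
}

print(sorted(request_body))
```

For a safety classifier, keeping temperature at 0 and leaving the penalty knobs neutral is the sensible default; the sampling controls matter more for generative use of the underlying API than for classification.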
Detailed Analysis
Meta Llama Guard 2 8B is a specialized content moderation and safety classification model built on the Llama 3 architecture. This 8B parameter model is designed specifically for identifying harmful content across 11 safety categories based on the MLCommons taxonomy of hazards.
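Since the model emits its verdict as plain text ("safe", or "unsafe" followed by violated category codes), the caller must parse that output. A minimal sketch is below; the S-code-to-name mapping is an assumption drawn from the MLCommons hazard taxonomy and should be confirmed against Meta's model card.

```python
# Sketch of parsing a LlamaGuard 2 text verdict. The category names below
# are an assumption based on the MLCommons taxonomy, not taken from the source.

CATEGORIES = {
    "S1": "Violent Crimes",
    "S2": "Non-Violent Crimes",
    "S3": "Sex-Related Crimes",
    "S4": "Child Sexual Exploitation",
    "S5": "Specialized Advice",
    "S6": "Privacy",
    "S7": "Intellectual Property",
    "S8": "Indiscriminate Weapons",
    "S9": "Hate",
    "S10": "Suicide & Self-Harm",
    "S11": "Sexual Content",
}

def parse_verdict(text: str) -> tuple[bool, list[str]]:
    """Return (is_safe, violated category names) from the raw model output."""
    lines = text.strip().splitlines()
    if not lines or lines[0].strip().lower() == "safe":
        return True, []
    # Codes appear on the next line, comma- or space-separated.
    codes = lines[1].replace(",", " ").split() if len(lines) > 1 else []
    return False, [CATEGORIES.get(c, c) for c in codes]

print(parse_verdict("safe"))             # → (True, [])
print(parse_verdict("unsafe\nS2,S6"))
```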