LangMart: ArliAI: QwQ 32B RpR v1
Model Overview
| Property | Value |
|---|---|
| Model ID | openrouter/arliai/qwq-32b-arliai-rpr-v1 |
| Name | ArliAI: QwQ 32B RpR v1 |
| Provider | arliai |
| Released | 2025-04-13 |
Description
QwQ-32B-ArliAI-RpR-v1 is a 32B parameter model fine-tuned from Qwen/QwQ-32B using a curated creative writing and roleplay dataset originally developed for the RPMax series. It is designed to maintain coherence and reasoning across long multi-turn conversations by introducing explicit reasoning steps per dialogue turn, generated and refined using the base model itself.
The model was trained with RS-QLORA+ at an 8K sequence length and supports context windows of up to 128K tokens (with practical performance around 32K). It is optimized for creative roleplay and dialogue generation, with an emphasis on minimizing cross-context repetition while preserving stylistic diversity.
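The per-turn reasoning can be exercised through any OpenAI-compatible chat completions endpoint that exposes this model. The sketch below is a minimal example of a multi-turn roleplay request; the base URL, the `API_KEY` environment variable, and the exact shape of the returned reasoning field are assumptions for illustration, not confirmed details of the LangMart API.

```python
import os
import requests

# Assumed chat-completions endpoint; substitute the real base URL for your deployment.
BASE_URL = "https://api.langmart.example/v1/chat/completions"

payload = {
    "model": "openrouter/arliai/qwq-32b-arliai-rpr-v1",
    "messages": [
        {"role": "system", "content": "You are a fantasy innkeeper. Stay in character."},
        {"role": "user", "content": "I push open the tavern door and shake the rain off my cloak."},
    ],
    # "Include reasoning" capability: ask the API to return the per-turn reasoning block.
    "include_reasoning": True,
    "max_tokens": 512,
}

response = requests.post(
    BASE_URL,
    headers={"Authorization": f"Bearer {os.environ['API_KEY']}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
choice = response.json()["choices"][0]["message"]
print(choice.get("reasoning", ""))  # the model's reasoning for this turn, if returned
print(choice["content"])            # the in-character reply
```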
Provider
arliai
Specifications
| Spec | Value |
|---|---|
| Context Window | 32,768 tokens |
| Modalities | text->text |
| Input Modalities | text |
| Output Modalities | text |
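With a 32,768-token context window, long roleplay sessions eventually need trimming. The snippet below is a rough pre-flight check using a ~4 characters-per-token heuristic; both the heuristic and the trimming strategy are illustrative assumptions, not the model's actual tokenizer.

```python
CONTEXT_WINDOW = 32_768   # tokens (see Specifications)
RESPONSE_BUDGET = 1_024   # tokens reserved for the model's reply

def rough_token_count(text: str) -> int:
    # Crude estimate (~4 characters per token); use the model's real tokenizer for accuracy.
    return max(1, len(text) // 4)

def trim_history(messages: list[dict]) -> list[dict]:
    """Drop the oldest non-system turns until the estimated prompt fits the window."""
    budget = CONTEXT_WINDOW - RESPONSE_BUDGET
    trimmed = list(messages)
    while sum(rough_token_count(m["content"]) for m in trimmed) > budget and len(trimmed) > 2:
        # Keep the system prompt (index 0); remove the oldest user/assistant turn after it.
        trimmed.pop(1)
    return trimmed
```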
Pricing
| Type | Price |
|---|---|
| Input | $0.03 per 1M tokens |
| Output | $0.11 per 1M tokens |
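At $0.03 per million input tokens and $0.11 per million output tokens, the cost of a call follows directly from its usage counts. A minimal sketch (the token counts are made-up example values):

```python
INPUT_PRICE_PER_M = 0.03    # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.11   # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 6,000-token prompt with an 800-token reply
print(f"${estimate_cost(6_000, 800):.6f}")  # 0.000180 + 0.000088 = $0.000268
```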
Capabilities
- Frequency penalty
- Include reasoning
- Max tokens
- Presence penalty
- Reasoning
- Repetition penalty
- Response format
- Seed
- Stop
- Structured outputs
- Temperature
- Top k
- Top p
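Most of the capabilities above map directly onto request parameters in an OpenAI-style payload. The sketch below shows one plausible combination for roleplay generation; the parameter names follow common OpenAI/OpenRouter conventions and should be verified against the gateway you use, and the specific values are illustrative only.

```python
# Illustrative request body exercising several of the listed capabilities.
payload = {
    "model": "openrouter/arliai/qwq-32b-arliai-rpr-v1",
    "messages": [{"role": "user", "content": "Continue the scene in the abandoned lighthouse."}],
    "temperature": 0.8,          # Temperature
    "top_p": 0.95,               # Top p
    "top_k": 40,                 # Top k
    "frequency_penalty": 0.1,    # Frequency penalty
    "presence_penalty": 0.1,     # Presence penalty
    "repetition_penalty": 1.05,  # Repetition penalty
    "max_tokens": 512,           # Max tokens
    "seed": 42,                  # Seed (reproducible sampling where supported)
    "stop": ["\nUser:"],         # Stop sequences
    "include_reasoning": True,   # Include reasoning / Reasoning
    # "response_format": {"type": "json_object"},  # Response format / Structured outputs (optional)
}
```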