Model Documentation

Comprehensive reference for AI and large language models. Browse pricing, capabilities, and usage examples.
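
Most entries below can be exercised with the same basic request shape. The sketch that follows assumes the catalog is fronted by an OpenAI-compatible chat completions endpoint; the base URL and API key variable are placeholders, and the model ID shown is the one documented for DeepSeek V3.2 later in this reference.

```python
# Minimal request sketch, assuming an OpenAI-compatible chat completions endpoint.
# The base URL and key variable are placeholders; substitute your own values.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-gateway.ai/v1",  # placeholder gateway URL
    api_key=os.environ["GATEWAY_API_KEY"],         # placeholder key variable
)

response = client.chat.completions.create(
    model="deepseek/deepseek-v3.2",  # any model ID listed in this catalog
    messages=[{"role": "user", "content": "Summarize what a mixture-of-experts model is."}],
)
print(response.choices[0].message.content)
```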

The catalog lists 779 models from 39 providers, including 64 free models.
0

Yi 34B Chat Model Documentation

01.AI

The Yi series models are large language models trained from scratch by developers at 01.AI. This 34B parameter model has been instruct-tuned specifically for chat applications, providing optimized per...

Vision
0

Yi Vision 34B Model Documentation

01.AI

Yi-VL-34B is the world's first open-source 34 billion parameter vision-language model, combining advanced image understanding with multilingual text generation capabilities. It represents a significan...

Vision
A

AllenAI: Olmo 3.1 32B Think (free)

Allen AI

This is a 32-billion parameter model emphasizing reasoning capabilities. The system excels at deep reasoning, complex multi-step logic, and advanced instruction following. Version 3.1 represents impro...

Vision
A

Goliath 120B Model Documentation

Alpindale

Goliath 120B is a merged model that combines "two fine-tuned Llama 70B models into one 120B model" by merging Xwin and Euryale variants. The model was created using the mergekit framework by @chargodd...

Vision
A

Amazon Nova 2 Lite v1

Amazon

Vision
A

Amazon Nova Pro 1.0

Amazon

Amazon's multimodal model designed to balance "accuracy, speed, and cost for a wide range of tasks." As of December 2024, it demonstrates state-of-the-art performance on visual question answering (Tex...

Vision
A

Magnum v4 72B

Anthracite

Magnum v4 72B is a fine-tuned version of Qwen2.5 72B that aims to replicate the prose quality of Claude 3 models, specifically Sonnet and Opus. This model is designed for creative writing and roleplay...

Vision
A

Anthropic Claude Models on LangMart

Anthropic

Vision
A

Anthropic: Claude 3 Haiku (20240307)

Anthropic

Anthropic's fastest and most compact Claude 3 model. Designed for near-instant responses while maintaining high-quality output. Ideal for high-volume, cost-sensitive applications.

Vision Tools Streaming
A

Anthropic: Claude 3 Opus

Anthropic

Anthropic's most intelligent model with best-in-market performance on highly complex tasks. Navigates open-ended prompts and sight-unseen scenarios with remarkable fluency and human-like understanding...

Vision Tools Reasoning
A

Anthropic: Claude 3 Sonnet

Anthropic

Claude 3 Sonnet is an ideal balance of intelligence and speed for enterprise workloads. Maximum utility at a lower price, dependable, balanced for scaled deployments. It offers excellent performance w...

Vision
A

Anthropic: Claude 3 Sonnet (20240229)

Anthropic

Anthropic's balanced Claude 3 model offering a good combination of capability and speed. Suitable for most general-purpose applications requiring quality responses.

Vision Tools Streaming
A

Anthropic: Claude 3.5 Sonnet (20241022)

Anthropic

Anthropic's Claude 3.5 Sonnet is a balanced AI model that combines strong intelligence with fast response times. This specific version from October 22, 2024 provides a fixed checkpoint for reproducibl...

Vision Tools Streaming
A

Anthropic: Claude Haiku 4.5

Anthropic

Claude Haiku 4.5 is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Anthropic, this cost-effective model is optimized for efficient inference while maint...

Vision Tools Streaming
A

Anthropic: Claude Opus 4

Anthropic

Claude Opus 4 is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Anthropic, this model is optimized for its specific use case category.

Streaming Vision
A

Anthropic: Claude Opus 4.1

Anthropic

Hybrid reasoning model pushing the frontier for coding and AI agents, with extended thinking capabilities. Achieves 74.5% on SWE-bench Verified with a maximum output of 32K tokens.

Vision Tools
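
A minimal sketch of requesting a long generation from Claude Opus 4.5's sibling above, Claude Opus 4.1, while staying under the 32K output-token ceiling noted in the entry. It assumes the same OpenAI-compatible endpoint as the introductory example; the model ID string is an assumption and should be checked against the model's page.

```python
# Long-output request sketch; endpoint shape and model ID are assumptions.
import os

from openai import OpenAI

client = OpenAI(base_url="https://api.example-gateway.ai/v1",  # placeholder
                api_key=os.environ["GATEWAY_API_KEY"])         # placeholder

response = client.chat.completions.create(
    model="anthropic/claude-opus-4.1",  # assumed ID; confirm on the model page
    messages=[{"role": "user", "content": "Review this repo layout and propose a refactor plan."}],
    max_tokens=32000,  # stays within the documented 32K output ceiling
)
print(response.choices[0].message.content)
```
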
A

Anthropic: Claude Opus 4.5

Anthropic

Claude Opus 4.5 is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Anthropic, this flagship model represents the latest capabilities and state-of-the-art...

Vision Tools Streaming
A

Anthropic: Claude Sonnet 4

Anthropic

Claude Sonnet 4 is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Anthropic, this model is optimized for its specific use case category.

Streaming Vision
A

Anthropic: Claude Sonnet 4.5

Anthropic

Claude Sonnet 4.5 is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Anthropic, this premium model offers excellent quality and balanced performance acro...

Vision Tools Streaming
A

Claude 3 Haiku

Anthropic

Claude 3 Haiku is Anthropic's fastest and most compact model, designed for near-instant responsiveness with quick and accurate targeted performance. It excels at tasks requiring rapid responses while ...

Vision Tools Streaming
A

Claude 3.5 Haiku

Anthropic

Claude 3.5 Haiku offers enhanced capabilities in speed, coding accuracy, and tool use. It is engineered to excel in real-time applications, delivering quick response times.

Vision
A

Claude Opus 4

Anthropic

Claude Opus 4 is benchmarked as the world's best coding model, at time of release, bringing sustained performance on complex, long-running tasks and agent workflows. It sets new benchmarks in software...

Tools Reasoning Vision
A

Claude Sonnet 4

Anthropic

Claude Sonnet 4 represents a significant upgrade from its predecessor, Claude Sonnet 3.7, with particular strength in coding and reasoning tasks. The model achieves state-of-the-art performance on SWE...

Vision Tools Streaming Reasoning Files
A

Arcee AI: Spotlight

Arcee AI

Vision
A

Trinity Mini - API, Providers, Stats

Arcee AI

B

Baidu: ERNIE 4.5 21B A3B Thinking

Baidu

**Model ID:** `baidu/ernie-4.5-21b-a3b-thinking`

Vision
B

ERNIE 4.5 VL 424B A47B Model Details

Baidu

Vision
B

Black Forest Labs: FLUX.2 Max

Black Forest Labs

FLUX.2 [max] is the new top-tier image model from Black Forest Labs, pushing image quality, prompt understanding, and editing consistency to the highest level yet.

Vision
B

FLUX.1 [schnell] - Black Forest Labs

Black Forest Labs

FLUX.1 [schnell] is Black Forest Labs' fastest image generation model, a 12 billion parameter rectified flow transformer capable of generating high-quality images from text descriptions. Trained using...

Vision
C

Cohere Command R (08-2024)

Cohere

C

Cohere Command R+ (08-2024)

Cohere

Vision
C

Cohere: Command A

Cohere

Command A is an open-weights 111B parameter model with a 256k context window focused on delivering great performance across agentic, multilingual, and coding use cases. The model emphasizes delivering...

C

Cohere: Command R

Cohere

Command R is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Cohere, this model provides solid performance and is suitable for most use cases.

Tools Streaming Vision
C

Cohere: Command R+

Cohere

Command R+ is a 104B-parameter large language model from Cohere, purpose-built for enterprise applications. It excels at roleplay, general consumer use cases, and Retrieval Augmented Generation (RAG)....

C

Cohere: Command R7B

Cohere

Command R7B is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Cohere, this model is optimized for its specific use case category.

Streaming Vision
C

Cohere: Embed 4

Cohere

Embed 4 is a text embedding model for semantic search and vector-based tasks. Developed by Cohere, this model is optimized for its specific use case category.

Streaming Vision
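
To make "semantic search and vector-based tasks" concrete, here is a minimal sketch that embeds a few documents and a query with Embed 4 and picks the closest document by cosine similarity. It assumes an OpenAI-compatible /embeddings endpoint; the gateway URL and the `cohere/embed-4` model ID are placeholders.

```python
# Semantic search sketch with Embed 4: embed documents and a query, then rank
# documents by cosine similarity. Endpoint shape and model ID are assumptions.
import os

from openai import OpenAI

client = OpenAI(base_url="https://api.example-gateway.ai/v1",  # placeholder
                api_key=os.environ["GATEWAY_API_KEY"])         # placeholder

docs = [
    "Reset your password from the account settings page.",
    "Invoices are emailed on the first day of each month.",
]
query = "How do I change my password?"

resp = client.embeddings.create(model="cohere/embed-4", input=docs + [query])  # assumed ID
vectors = [item.embedding for item in resp.data]
doc_vecs, query_vec = vectors[:-1], vectors[-1]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / ((sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5))

best = max(range(len(docs)), key=lambda i: cosine(doc_vecs[i], query_vec))
print("Best match:", docs[best])
```
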
C

Cohere: Rerank 4 Fast

Cohere

Rerank 4 Fast is a reranking model that reorders candidate documents by relevance to a query. Developed by Cohere, this model is optimized for low-latency search and retrieval pipelines.

Streaming Vision
C

Cohere: Rerank 4 Pro

Cohere

Rerank 4 Pro is a reranking model that reorders candidate documents by relevance to a query. Developed by Cohere, this model is optimized for search and retrieval pipelines where ranking quality matters most.

Streaming Vision
C

Collections: Auto Free

Collection

The **Auto Free Collection** is a dynamic system-level collection that automatically routes inference requests to one of the predefined free models available in the system. This collection provides co...
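
The routing behaviour described above happens server-side. As a rough client-side approximation of the same idea, the sketch below tries a short list of free models in order and falls back when one fails; the model IDs and endpoint are illustrative placeholders, not the collection's actual configuration.

```python
# Client-side approximation of "route to one of the free models, fall back on failure".
import os

from openai import OpenAI

client = OpenAI(base_url="https://api.example-gateway.ai/v1",  # placeholder
                api_key=os.environ["GATEWAY_API_KEY"])         # placeholder

FREE_MODELS = [
    "allenai/olmo-3.1-32b-think:free",  # assumed ID format
    "mistralai/devstral-2512:free",     # assumed ID format
]

def complete_with_fallback(prompt: str) -> str:
    last_error = None
    for model_id in FREE_MODELS:
        try:
            resp = client.chat.completions.create(
                model=model_id,
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content
        except Exception as err:  # e.g. rate limited or temporarily unavailable
            last_error = err
    raise RuntimeError("all free models failed") from last_error

print(complete_with_fallback("Name three common uses of text embeddings."))
```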

C

Collections: Flash 2.5

Collection

The **Flash 2.5 Collection** is a curated organization-level collection of fast-responding, lightweight language models optimized for speed and cost-effectiveness. This collection focuses on models th...

Vision
C

Collections: Organization Shared Models

Collection

The **Organization Shared Models** collection is a flexible, team-managed collection that enables organizations to pool and share language models across all members. This collection uses a least-used ...

Vision
D

DeepSeek R1

DeepSeek

DeepSeek R1 is DeepSeek AI's first-generation reasoning model that achieves performance on par with OpenAI o1 across math, code, and reasoning tasks. It is fully open-source with MIT licensing, featur...

D

DeepSeek: DeepSeek Coder V2

DeepSeek

DeepSeek Coder V2 is a code generation and understanding model for programming tasks. Developed by DeepSeek, this model provides solid performance and is suitable for most use cases.

Tools Streaming Vision
D

DeepSeek: DeepSeek V3.2

DeepSeek

**Model ID:** `deepseek/deepseek-v3.2`

Vision
D

DeepSeek: DeepSeek-V3.2 (Non-thinking Mode)

DeepSeek

Main chat model with JSON output and tool calling capabilities. Optimized for conversational AI, general-purpose tasks, and integration with external tools and APIs.

Tools
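
A minimal tool-calling sketch for this model's documented strengths (JSON output and tool calling), assuming an OpenAI-compatible endpoint. The `get_weather` tool is invented for illustration, and the model ID is borrowed from the DeepSeek V3.2 entry above, so the exact ID for the non-thinking variant should be confirmed.

```python
# Tool-calling sketch: the model either answers directly or emits a tool call
# whose arguments arrive as JSON. Endpoint, tool, and model ID are assumptions.
import json
import os

from openai import OpenAI

client = OpenAI(base_url="https://api.example-gateway.ai/v1",  # placeholder
                api_key=os.environ["GATEWAY_API_KEY"])         # placeholder

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="deepseek/deepseek-v3.2",  # confirm the exact ID for the non-thinking variant
    messages=[{"role": "user", "content": "What's the weather in Oslo right now?"}],
    tools=tools,
)

for call in resp.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```
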
D

DeepSeek: Reasoner

DeepSeek

DeepSeek's specialized reasoning model designed for complex problem-solving, mathematical proofs, and logical analysis. Features extended thinking capabilities for step-by-step reasoning.

Streaming Reasoning
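
Because this model is tagged for streaming, a typical integration reads tokens as they arrive rather than waiting for the full answer. The sketch below assumes an OpenAI-compatible streaming API; the model ID is an assumption.

```python
# Streaming sketch: print tokens as they arrive. Endpoint and ID are assumptions.
import os

from openai import OpenAI

client = OpenAI(base_url="https://api.example-gateway.ai/v1",  # placeholder
                api_key=os.environ["GATEWAY_API_KEY"])         # placeholder

stream = client.chat.completions.create(
    model="deepseek/deepseek-reasoner",  # assumed ID
    messages=[{"role": "user", "content": "Show, step by step, that the sum of two even numbers is even."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```
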
E

EVA Llama 3.33 70B

Eva Unit 01

EVA Llama 3.33 70B is a roleplay and storywriting specialist model. It is a complete parameter fine-tune of the Llama-3.3-70B-Instruct base model using a mixture of synthetic and natural data. The mod...

G

Gemma 3 27B (free) - Complete Model Details

Google

Vision
G

Google AI: Gemini Embedding

Google

Gemini Embedding is a text embedding model for semantic search and vector-based tasks. Developed by Google AI, this model is optimized for its specific use case category.

Streaming Vision
G

Google Gemini 1.5 Flash

Google

Gemini 1.5 Flash is a foundation model that performs well at a variety of multimodal tasks such as visual understanding, classification, summarization, and creating content from image, audio and video...

Vision
G

Google Gemini 2.0 Flash Experimental (Free)

Google

Gemini Flash 2.0 offers a significantly faster time to first token (TTFT) compared to Gemini Flash 1.5, while maintaining quality on par with larger models like Gemini Pro 1.5. It introduces notable e...

Vision
G

Google Gemini 2.0 Flash Lite - Complete Model Details

Google

Vision
G

Google: Codey Code Completion

Google

Code completion model.

Vision
G

Google: Deep Research Pro Preview (Dec-12-2025)

Google

A specialized research-focused model designed for deep analysis, literature review, and comprehensive research tasks. Optimized for synthesizing information from multiple sources and providing thoroug...

G

Google: Gecko Embedding

Google

Lightweight embedding model.

Vision
G

Google: Gemini 1.0 Pro Vision

Google

Gemini 1.0 with vision (deprecated).

Vision Tools
G

Google: Gemini 1.5 Pro

Google

Previous generation pro model.

Vision Tools
G

Google: Gemini 2 Flash Thinking

Google

Gemini 2 Flash with extended reasoning capabilities.

Vision Tools
G

Google: Gemini 2.0 Flash

Google

A cost-effective multimodal model optimized for general-purpose tasks requiring balanced performance. Gemini 2.0 Flash delivers strong capabilities across text, image, and code understanding while mai...

Vision Tools Streaming
G

Google: Gemini 2.0 Flash (Image Generation) Experimental

Google

An experimental version of Gemini 2.0 Flash with image generation capabilities. Combines text understanding with the ability to generate images based on prompts.

Streaming Vision
G

Google: Gemini 2.0 Flash 001

Google

The stable version 001 release of Gemini 2.0 Flash, providing a cost-effective multimodal model for general-purpose tasks. This versioned release ensures consistent behavior and reproducible results f...

Vision Tools Streaming
G

Google: Gemini 2.0 Flash Experimental

Google

An experimental preview of Gemini 2.0 Flash featuring the latest capabilities and improvements. This version provides early access to new features while maintaining the fast inference speeds character...

Vision Tools Streaming
G

Google: Gemini 2.0 Flash Preview Image Generation

Google

A preview version of Gemini 2.0 Flash optimized for image generation tasks. Designed for creating visual content from text descriptions.

Vision
G

Google: Gemini 2.0 Flash Thinking Experimental

Google

An experimental version of Gemini 2.0 Flash with enhanced reasoning capabilities. This model features a "thinking" mode that allows it to work through complex problems step-by-step before providing fi...

Vision Tools Streaming
G

Google: Gemini 2.0 Flash Thinking Preview 01-21

Google

A dated experimental version of Gemini 2.0 Flash with thinking capabilities from January 21, 2025. Features improved reasoning capabilities over earlier thinking model versions.

Vision Tools Streaming
G

Google: Gemini 2.0 Flash Thinking Preview 12-19

Google

A dated experimental version of Gemini 2.0 Flash with thinking capabilities from December 19, 2024. Provides enhanced reasoning through explicit step-by-step problem solving.

Vision Tools Streaming
G

Google: Gemini 2.0 Flash-Lite

Google

Streamlined and ultra-efficient model designed for simple, high-frequency tasks. Gemini 2.0 Flash-Lite prioritizes speed and affordability while maintaining essential multimodal capabilities.

Vision Tools Streaming
G

Google: Gemini 2.0 Flash-Lite Preview

Google

A preview version of Gemini 2.0 Flash-Lite, providing early access to streamlined capabilities optimized for high-frequency, simple tasks. This model supports multimodal capabilities including vision ...

Vision Tools
G

Google: Gemini 2.0 Flash-Lite Preview 02-05

Google

A dated preview version of Gemini 2.0 Flash-Lite from February 5, 2025. Provides a fixed checkpoint for reproducible results. This model supports multimodal capabilities including vision and image und...

Vision Tools
G

Google: Gemini 2.0 Pro

Google

Professional-grade Gemini 2.0 model.

Vision Tools
G

Google: Gemini 2.0 Pro Experimental

Google

An experimental version of Gemini 2.0 Pro offering higher capability than Flash variants. Designed for complex tasks requiring advanced reasoning, coding, and multimodal understanding.

Vision Tools Streaming Reasoning
G

Google: Gemini 2.0 Pro Experimental 02-05

Google

A dated experimental version of Gemini 2.0 Pro from February 5, 2025. Provides high-capability performance for complex tasks with a specific model checkpoint.

Vision Tools Streaming Reasoning
G

Google: Gemini 2.0 Pro Vision

Google

Vision-optimized Gemini 2.0 Pro.

Vision Tools
G

Google: Gemini 2.5 Computer Use Preview 10-2025

Google

A specialized preview model designed for computer use and automation tasks. Enables AI-driven interaction with computer interfaces, including clicking, typing, and navigating applications.

Vision
G

Google: Gemini 2.5 Flash

Google

Lightning-fast and highly capable model that delivers a balance of intelligence and latency. Gemini 2.5 Flash offers controllable thinking budgets for versatile applications, making it ideal for a wid...

Vision Tools Streaming
G

Google: Gemini 2.5 Flash Image (Nano Banana)

Google

A specialized image-focused variant of Gemini 2.5 Flash, codenamed Nano Banana. Optimized for image understanding and generation tasks with fast inference.

Vision
G

Google: Gemini 2.5 Flash Image Preview (Nano Banana)

Google

A preview version of the image-focused Gemini 2.5 Flash variant, codenamed Nano Banana. Provides early access to enhanced image capabilities.

Vision
G

Google: Gemini 2.5 Flash Preview

Google

Preview of Gemini 2.5 Flash.

Vision Tools
G

Google: Gemini 2.5 Flash Preview 05-20

Google

A dated preview of Gemini 2.5 Flash from May 20, 2025. Provides a fixed model checkpoint for reproducible experiments and applications.

Vision Tools Streaming
G

Google: Gemini 2.5 Flash Preview Sep 2025

Google

A September 2025 preview of Gemini 2.5 Flash with the latest capabilities and improvements before stable release. This model supports multimodal capabilities including vision and image understanding.

Vision Tools Streaming
G

Google: Gemini 2.5 Flash Preview TTS

Google

A specialized preview of Gemini 2.5 Flash with text-to-speech capabilities. Combines fast inference with high-quality voice synthesis for real-time audio applications.

Streaming
G

Google: Gemini 2.5 Flash-Lite

Google

Built for massive scale, Gemini 2.5 Flash-Lite balances cost and performance for high-throughput tasks. Optimized for efficiency without sacrificing multimodal capabilities.

Vision Tools Streaming
G

Google: Gemini 2.5 Flash-Lite Preview 06-17

Google

A dated preview version of Gemini 2.5 Flash-Lite from June 17, 2025. Optimized for efficiency and high-throughput tasks with a fixed checkpoint.

Vision
G

Google: Gemini 2.5 Flash-Lite Preview Sep 2025

Google

A September 2025 preview of Gemini 2.5 Flash-Lite, optimized for efficiency and cost-effectiveness in high-throughput applications. This model supports multimodal capabilities including vision and ima...

Vision
G

Google: Gemini 2.5 Pro

Google

Vision
G

Google: Gemini 2.5 Pro

Google

Google's high-capability model for complex reasoning and coding. Features adaptive thinking and a 1 million token context window, designed for complex agentic and multimodal challenges. Gemini 2.5 Pro...

Vision Tools Streaming Reasoning
G

Google: Gemini 2.5 Pro Preview 03-25

Google

A dated preview version of Gemini 2.5 Pro from March 25, 2025. Provides access to advanced capabilities with a fixed model checkpoint for reproducibility.

Vision Tools Streaming Reasoning
G

Google: Gemini 2.5 Pro Preview 06-05

Google

Gemini 2.5 Pro is Google's latest and most capable model, featuring a massive 1 million token context window. This preview version (06-05) represents the cutting edge of Google's multimodal AI capabil...

Vision
G

Google: Gemini 2.5 Pro Preview TTS

Google

A specialized preview of Gemini 2.5 Pro with text-to-speech capabilities. Designed for applications requiring high-quality voice synthesis alongside advanced language understanding.

Streaming
G

Google: Gemini 3 Flash Preview

Google

Fast Gemini 3 for speed and efficiency.

Vision Tools
G

Google: Gemini 3 Mobile

Google

Lightweight Gemini 3 optimized for mobile devices.

Vision Tools
G

Google: Gemini 3 Opus

Google

High-end model for demanding applications.

Vision Tools
G

Google: Gemini 3 Pro Preview

Google

Latest flagship Gemini model with advanced reasoning and multimodal capabilities.

Vision Tools
G

Google: Gemini 3.5 Sonnet

Google

Balanced mid-tier Gemini model.

Vision Tools
G

Google: Gemini Audio Understanding

Google

Audio analysis model.

Vision
G

Google: Gemini Code Reasoning

Google

Advanced code analysis and generation model.

Tools Vision
G

Google: Gemini Document Understanding

Google

Specialized model for document processing and extraction.

Vision
G

Google: Gemini Experimental 1206

Google

An experimental Gemini model from December 6, 2024. Provides early access to new capabilities and improvements in development. This model supports multimodal capabilities including vision and image un...

Vision Tools Streaming
G

Google: Gemini Flash Latest

Google

Gemini Flash Latest with optimized speed for rapid response generation. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
G

Google: Gemini Flash-Lite Latest

Google

The latest stable version of Gemini Flash-Lite, automatically updated to the most recent stable release. Optimized for efficiency and high-throughput tasks.

Vision
G

Google: Gemini Image Generation 001

Google

Image generation from text.

Vision
G

Google: Gemini Multimodal Live

Google

Real-time streaming multimodal model.

Vision Tools
G

Google: Gemini Nano

Google

Smallest Gemini model.

Vision
G

Google: Gemini Pro Latest

Google

The latest stable version of Gemini Pro, automatically updated to the most recent stable release. Provides high-capability performance for complex tasks.

Vision Tools Reasoning
G

Google: Gemini Reasoning Engine

Google

Experimental advanced reasoning engine for complex tasks.

Vision Tools
G

Google: Gemini Robotics-ER 1.5 Preview

Google

A specialized model for robotics and embodied reasoning (ER) applications. Designed to understand and reason about physical environments, robot actions, and spatial relationships.

Vision
G

Google: Gemini Text Embedding 004

Google

Latest text embedding model.

Vision
G

Google: Gemini Video Understanding

Google

Video analysis and understanding.

Vision
G

Google: Gemma 1.1 7B Instruct-Tuned

Google

Previous Gemma generation 7B model.

Vision
G

Google: Gemma 2 27B Instruct-Tuned

Google

Previous generation 27B model.

Vision
G

Google: Gemma 2 9B Instruct-Tuned

Google

Vision
G

Google: Gemma 3 12B Instruct-Tuned

Google

Compact 12B instruction-tuned model.

Vision
G

Google: Gemma 3 1B

Google

A lightweight, instruction-tuned version of Gemma 3 with 1 billion parameters. Designed for edge deployment and resource-constrained environments while maintaining good instruction-following capabilit...

G

Google: Gemma 3 27B Instruct-Tuned

Google

Open-source 27B instruction-tuned model.

Vision
G

Google: Gemma 3 4B Instruct-Tuned

Google

Ultra-lightweight 4B model.

Vision
G

Google: Gemma 3 Long Context 27B

Google

Extended context Gemma 3 27B.

Vision
G

Google: Gemma 3 Long Context 27B

Google

Extended context version of Gemma 3 27B supporting up to 1M token context.

Vision
G

Google: Gemma 3n E2B

Google

An efficient variant of Gemma 3 with approximately 2B equivalent parameters, optimized for edge deployment and mobile applications. Part of the Gemma 3n efficient model series.

G

Google: Gemma 3n E4B

Google

An efficient variant of Gemma 3 with approximately 4B equivalent parameters, balancing capability and efficiency for edge and mobile deployment.

G

Google: LearnLM 2.0 Flash Experimental

Google

An experimental model designed specifically for educational applications. LearnLM is optimized for tutoring, explanation generation, and adaptive learning interactions. This model supports multimodal ...

Vision
G

Google: Multimodal Understanding Pro

Google

Advanced multimodal model.

Vision Tools
G

Google: Nano Banana Pro

Google

A high-capability image-focused model codenamed Nano Banana Pro. Designed for advanced image understanding and generation with professional-grade quality. This model supports multimodal capabilities i...

Vision
G

Google: Nano Banana Pro (Gemini 3 Pro Image Preview)

Google

A high-capability image-focused model codenamed Nano Banana Pro. Part of the Gemini 3 Pro family with specialized capabilities for advanced image understanding and generation.

Vision
G

Google: PaLM 2 Chat Bison

Google

Legacy PaLM chat model (deprecated).

Vision
G

Google: PaLM 2 Text Bison

Google

Legacy PaLM text model (deprecated).

Vision
G

Groq: Allam 2 7B

Groq

Allam 2 is a 7 billion parameter Arabic language model optimized for Arabic text understanding and generation. Hosted on Groq's LPU infrastructure for ultra-fast inference.

Streaming
G

Groq: Claude 3.5 Sonnet

Groq

Streaming Vision
G

Groq: Command Nightly

Groq

Groq: Command Nightly is Cohere's nightly-updated Command model, listed under the Groq provider. It offers capabilities for general natural language processing tasks.

Streaming Vision
G

Groq: Compound

Groq

An AI system powered by openly available models that intelligently and selectively uses built-in tools to answer user queries, including web search and code execution. Compound represents Groq's appro...

Tools
G

Groq: Compound Mini

Groq

A lightweight version of Groq Compound with built-in web search and code execution capabilities. Optimized for faster responses and cost efficiency while maintaining agentic capabilities.

Tools
G

Groq: DeepSeek R1 Distill Llama 70B

Groq

Groq: DeepSeek R1 Distill Llama 70B is a reasoning model distilled from DeepSeek R1 into a Llama 70B base, served through the Groq provider for fast inference.

Streaming Vision
G

Groq: Deprecated Model 123

Groq

This model has been deprecated and is no longer recommended for new deployments. It is retained in the system for backward compatibility only. Please migrate to current model versions for production u...

G

Groq: Distil Whisper Large V3 EN

Groq

Groq: Distil Whisper Large V3 EN is a distilled, English-only variant of Whisper Large V3 for speech-to-text transcription, served through the Groq provider for low-latency audio processing.

Streaming Vision
G

Groq: Gemma 2 9B IT

Groq

Groq: Gemma 2 9B IT is Google's Gemma 2 9B instruction-tuned model, served through the Groq provider for fast inference.

Streaming Vision
G

Groq: Gemma 7B IT

Groq

Groq: Gemma 7B IT is Google's Gemma 7B instruction-tuned model, served through the Groq provider for fast inference.

Streaming Vision
G

Groq: GPT OSS 120B 128k

Groq

GPT OSS 120B 128k is a conversational AI model designed for multi-turn dialogue and interactive tasks. Based on OpenAI's open-weight GPT-OSS and served through Groq, this model is optimized for fast inference.

Streaming Vision
G

Groq: GPT OSS 20B 128k

Groq

GPT OSS 20B 128k is a conversational AI model designed for multi-turn dialogue and interactive tasks. Based on OpenAI's open-weight GPT-OSS and served through Groq, this model is optimized for fast inference.

Streaming Vision
G

Groq: GPT OSS Safeguard 20B

Groq

GPT OSS Safeguard 20B is a content moderation model for safety and policy compliance checking. Based on OpenAI's open-weight GPT-OSS and served through Groq, this model is optimized for content safety workloads.

Streaming Vision
G

Groq: GPT-4 Turbo

Groq

Groq: GPT-4 Turbo is a language model listed under the Groq provider. It offers capabilities for general natural language processing tasks.

Streaming Vision
G

Groq: GPT-4 Vision Preview

Groq

Groq: GPT-4 Vision Preview is a vision-capable language model listed under the Groq provider. It offers capabilities for natural language and image-understanding tasks.

Streaming Vision
G

Groq: GPT-4o Mini

Groq

Groq: GPT-4o Mini is a compact language model listed under the Groq provider. It offers capabilities for general natural language processing tasks.

Streaming Vision
G

Groq: GPT-OSS 120B

Groq

OpenAI's flagship open-weight language model with 120 billion parameters, hosted on Groq infrastructure. Features built-in browser search and code execution capabilities with reasoning.

Reasoning
G

Groq: GPT-OSS 20B

Groq

A lightweight 20 billion parameter version of OpenAI's open-weight GPT-OSS model. Optimized for fast inference on Groq infrastructure.

Streaming
G

Groq: GPT-OSS Safeguard 20B

Groq

A safety-focused 20 billion parameter model based on GPT-OSS. Fine-tuned for content moderation and safety-critical applications.

G

Groq: Kimi K2 1T 256k

Groq

Kimi K2 1T 256k is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by MoonshotAI and served through Groq, this model offers an extended context window for long conversations.

Streaming Vision
G

Groq: Kimi K2 Instruct

Groq

MoonshotAI's Kimi K2 instruction-tuned model, hosted on Groq infrastructure for fast inference. Designed for general-purpose instruction following with strong multilingual capabilities.

G

Groq: Kimi K2 Instruct 0905

Groq

A dated version of MoonshotAI's Kimi K2 instruction model from September 5, 2025. Features an extended context window for longer conversations.

G

Groq: Llama 3.1 8B Instant

Groq

Streaming Vision
G

Groq: Llama 3.3 70B Speculative Decoding

Groq

Streaming Vision
G

Groq: Llama 3.3 70B Versatile

Groq

Streaming Vision
G

Groq: Llama 4 Maverick

Groq

Llama 4 Maverick is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Meta and served through Groq, this flagship model represents the latest capabilities and state-of-the-art per...

Tools Streaming Vision
G

Groq: Llama 4 Maverick 17B 128E Instruct

Groq

Meta's Llama 4 Maverick model with 17 billion parameters and 128 experts, optimized for instruction following. Hosted on Groq for ultra-fast inference speeds.

Streaming
G

Groq: Llama 4 Scout

Groq

Llama 4 Scout is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Meta and served through Groq, this model provides solid performance and is suitable for most use cases.

Tools Streaming Vision
G

Groq: Llama 4 Scout 17B 16E Instruct

Groq

Meta's Llama 4 Scout model with 17 billion parameters and 16 experts. A more efficient MoE variant designed for fast, lightweight inference on Groq infrastructure.

Streaming
G

Groq: Llama Guard 4 12B

Groq

Llama Guard 4 12B is a content moderation model for safety and policy compliance checking. Developed by Meta and served through Groq, this model is optimized for detecting unsafe content.

Streaming Vision
G

Groq: Llama Guard 4 12B

Groq

Meta's content moderation and safety model with 12 billion parameters. Designed for detecting harmful, toxic, or unsafe content in AI outputs. Hosted on Groq for ultra-fast moderation.

G

Groq: Llama Prompt Guard 2 22M

Groq

A lightweight prompt injection detection model with 22 million parameters. Designed to identify and prevent prompt injection attacks in real-time with minimal latency.

G

Groq: Llama Prompt Guard 2 86M

Groq

A larger prompt injection detection model with 86 million parameters. Offers improved accuracy over the 22M variant while maintaining fast inference speeds.

G

Groq: Mixtral 8x7B (Extended)

Groq

Groq: Mixtral 8x7B (Extended) is Mistral AI's Mixtral 8x7B mixture-of-experts model with an extended context window, served through the Groq provider.

Streaming Vision
G

Groq: Mixtral 8x7B 32K

Groq

Groq: Mixtral 8x7B 32K is Mistral AI's Mixtral 8x7B mixture-of-experts model with a 32K context window, served through the Groq provider.

Streaming Vision
G

Groq: Neural Chat 7B v3.1

Groq

Streaming Vision
G

Groq: OpenChat 3.5

Groq

Streaming Vision
G

Groq: Orpheus Arabic Saudi

Groq

A specialized Arabic language model from Canopy Labs focused on Saudi Arabian dialect and culture. Optimized for Saudi Arabic text generation and understanding.

G

Groq: Orpheus V1 English

Groq

Canopy Labs' Orpheus V1 model for English language tasks. A lightweight model optimized for fast inference on Groq infrastructure.

G

Groq: PaLM 2 Chat Bison 32K

Groq

Groq: PaLM 2 Chat Bison 32K is Google's legacy PaLM 2 chat model with a 32K context window, listed under the Groq provider.

Streaming Vision
G

Groq: PaLM 2 CodeChat Bison

Groq

Groq: PaLM 2 CodeChat Bison is Google's legacy PaLM 2 code-oriented chat model, listed under the Groq provider.

Streaming Vision
G

Groq: PlayAI TTS

Groq

PlayAI's text-to-speech model hosted on Groq for ultra-fast audio generation. Provides high-quality voice synthesis with low latency.

G

Groq: PlayAI TTS Arabic

Groq

PlayAI's Arabic-specialized text-to-speech model hosted on Groq. Optimized for high-quality Arabic voice synthesis.

G

Groq: Qwen 2 72B 4-bit

Groq

Groq: Qwen 2 72B 4-bit is Alibaba's Qwen 2 72B model in a 4-bit quantized configuration, served through the Groq provider.

Streaming Vision
G

Groq: Qwen 2 7B 4-bit

Groq

Groq: Qwen 2 7B 4-bit is Alibaba's Qwen 2 7B model in a 4-bit quantized configuration, served through the Groq provider.

Streaming Vision
G

Groq: Qwen3 32B

Groq

Qwen3 32B is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Alibaba and served through Groq, this model provides solid performance and is suitable for most use cases.

Tools Streaming Vision
G

Groq: Qwen3 32B

Groq

Alibaba's Qwen3 32 billion parameter model hosted on Groq infrastructure. Offers strong multilingual capabilities with fast inference. The model is optimized for code generation and programming tasks....

Tools Streaming
G

Groq: Solar 10.7B Instruct v1

Groq

Streaming Vision
G

Groq: Whisper Large V3

Groq

OpenAI's Whisper Large V3 speech-to-text model hosted on Groq for ultra-fast transcription. Supports multiple languages and audio formats.
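
A minimal transcription sketch, assuming the gateway exposes an OpenAI-compatible /audio/transcriptions endpoint for this model. The model ID, base URL, and audio file path are placeholders.

```python
# Transcription sketch: send an audio file and print the recognized text.
import os

from openai import OpenAI

client = OpenAI(base_url="https://api.example-gateway.ai/v1",  # placeholder
                api_key=os.environ["GATEWAY_API_KEY"])         # placeholder

with open("meeting.wav", "rb") as audio_file:  # placeholder audio file
    transcript = client.audio.transcriptions.create(
        model="groq/whisper-large-v3",  # assumed ID
        file=audio_file,
    )
print(transcript.text)
```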

G

Groq: Whisper Large V3 Turbo

Groq

Groq: Whisper Large V3 Turbo is a fast speech-to-text model based on OpenAI's Whisper Large V3, served through the Groq provider for low-latency transcription.

Streaming Vision
G

Groq: Whisper V3 Large

Groq

Whisper V3 Large is an audio model for speech-to-text transcription and audio understanding. Developed by OpenAI and served through Groq, this model is optimized for fast, accurate transcription.

Streaming Vision
G

Groq: Yi 34B Chat 3 32K

Groq

Groq: Yi 34B Chat 3 32K is 01.AI's Yi 34B chat model with a 32K context window, served through the Groq provider.

Streaming Vision
G

MythoMax 13B

Gryphe

One of the highest performing and most popular fine-tunes of Llama 2 13B, with rich descriptions and roleplay. This is a merged model (#merge) that combines multiple fine-tuning approaches to achieve ...

Vision
I

Inflection 3 Pi

Inflection

Inflection 3 Pi powers Inflection's Pi chatbot, including backstory, emotional intelligence, productivity, and safety. It has access to recent news, and excels in scenarios like customer support and r...

Vision
K

Psyfighter v2 13B

Koboldai

A specialized merged model designed for enhanced fictional storytelling with supplementary medical knowledge. The model combines three base models to balance creative narrative generation with anatomi...

Vision
L

LZLV ARPO-34B

Lzlv

M

Llama 4 Scout - Model Details

Meta

Vision
M

Llama Guard 3 8B

Meta

M

Meta Llama 2 13B Chat

Meta

Meta Llama 2 13B Chat is a 13 billion parameter language model fine-tuned specifically for chat completions and conversational tasks. This is Meta's open-source contribution designed for dialogue-base...

M

Meta Llama 3.1 405B Instruct

Meta

Llama 3.1 405B Instruct is Meta's largest and most capable open-source language model, representing their flagship offering in the Llama 3.1 series. This 405-billion parameter model features a 128K to...

Vision
M

Meta Llama 3.1 8B Instruct

Meta

Llama 3.1 8B Instruct is part of Meta's latest class of language models, offering a balance between efficiency and capability. This 8-billion parameter instruction-tuned variant emphasizes speed and e...

Vision
M

Meta Llama 3.3 70B Instruct

Meta

Llama 3.3 70B Instruct is a pretrained and instruction-tuned generative model optimized for multilingual dialogue use cases. It outperforms many available open source and closed chat models on common ...

Vision
M

Meta: Llama 2 70B Chat

Meta

Vision
M

Meta: Llama 3.1 70B Instruct

Meta

Meta's latest language model offering comes in various sizes. This 70B parameter instruct-tuned variant targets high-quality dialogue applications. The model has demonstrated strong performance compar...

M

Meta: Llama 3.2 1B Instruct

Meta

Llama 3.2 1B is a 1-billion-parameter language model optimized for efficiently performing natural language tasks such as summarization, dialogue, and multilingual text analysis. With its smaller size,...

Vision
M

Meta: Llama 3.2 3B Instruct

Meta

Llama 3.2 3B is a 3-billion-parameter multilingual large language model optimized for advanced natural language processing tasks including dialogue generation, complex reasoning, and text summarizatio...

Vision
M

Meta: Llama 3.2 90B Vision Instruct

Meta

A 90-billion-parameter multimodal model excelling at visual reasoning and language tasks. The model handles image captioning, visual question answering, and advanced image-text comprehension through p...

Vision
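
For the visual question answering and captioning use cases described above, a request typically mixes text and image parts in one message. The sketch assumes OpenAI-style `image_url` content parts are accepted; the model ID and image URL are placeholders.

```python
# Visual question answering sketch: one message with a text part and an image part.
import os

from openai import OpenAI

client = OpenAI(base_url="https://api.example-gateway.ai/v1",  # placeholder
                api_key=os.environ["GATEWAY_API_KEY"])         # placeholder

resp = client.chat.completions.create(
    model="meta-llama/llama-3.2-90b-vision-instruct",  # assumed ID
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Write a one-sentence caption for this image."},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```
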
M

Meta: Llama 4 Maverick

Meta

A high-capacity multimodal language model from Meta, built on a mixture-of-experts (MoE) architecture with 128 experts and 17 billion active parameters per forward pass (400B total parameters). The mo...

Vision Tools
M

Microsoft Phi-3 Medium 128K Instruct

Microsoft

Phi-3 128K Medium is a powerful 14-billion parameter model designed for advanced language understanding, reasoning, and instruction following. The model excels in common sense reasoning, mathematics, ...

M

Microsoft Phi-4

Microsoft

Phi-4 targets "complex reasoning tasks and operates efficiently with limited memory or when quick responses are needed." The 14-billion parameter model trained on synthetic datasets, curated websites,...

M

WizardLM-2 8x22B

Microsoft

Microsoft's most advanced Wizard model, described as demonstrating "highly competitive performance compared to leading proprietary models" and consistently outperforming existing open-source alternati...

M

MiniMax M2.1

Minimax

A lightweight, state-of-the-art language model with 10 billion activated parameters, optimized for coding, agentic workflows, and application development. The model delivers cleaner, more concise outp...

Vision
M

Mistral 7B Instruct

Mistral AI

A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length. Mistral 7B Instruct has multiple version variants, and this endpoint is intended to be the l...

Vision
M

Mistral AI Codestral

Mistral AI

Codestral is Mistral AI's cutting-edge language model explicitly designed for code generation tasks. It is Mistral's inaugural code-specific generative model, representing an open-weight generative AI...

Vision
M

Mistral AI: Codestral

Mistral AI

Codestral is a code generation and understanding model for programming tasks. Developed by Mistral AI, this model is optimized for its specific use case category.

Streaming Vision
M

Mistral AI: Codestral Embed

Mistral AI

Codestral Embed is a text embedding model for semantic search and vector-based tasks. Developed by Mistral AI, this model is optimized for its specific use case category.

Streaming Vision
M

Mistral AI: Devstral 2

Mistral AI

Devstral 2 is a code generation and understanding model for programming tasks. Developed by Mistral AI, this model is optimized for its specific use case category.

Streaming Vision
M

Mistral AI: Devstral Small 2

Mistral AI

Devstral Small 2 is a code generation and understanding model for programming tasks. Developed by Mistral AI, this model is optimized for its specific use case category.

Streaming Vision
M

Mistral AI: Magistral Medium

Mistral AI

Magistral Medium is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Mistral AI, this model is optimized for its specific use case category.

Streaming Vision
M

Mistral AI: Magistral Small

Mistral AI

Magistral Small is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Mistral AI, this model is optimized for its specific use case category.

Streaming Vision
M

Mistral AI: Ministral 14B

Mistral AI

Ministral 14B is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Mistral AI, this model is optimized for its specific use case category.

Streaming Vision
M

Mistral AI: Ministral 3B

Mistral AI

Ministral 3B is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Mistral AI, this model is optimized for its specific use case category.

Streaming Vision
M

Mistral AI: Ministral 8B

Mistral AI

Ministral 8B is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Mistral AI, this model is optimized for its specific use case category.

Streaming Vision
M

Mistral AI: Mistral Embed

Mistral AI

Mistral Embed is a text embedding model for semantic search and vector-based tasks. Developed by Mistral AI, this model is optimized for its specific use case category.

Streaming Vision
M

Mistral AI: Mistral Large 3

Mistral AI

Mistral Large 3 is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Mistral AI, this flagship model represents the latest capabilities and state-of-the-ar...

Tools Streaming Vision
M

Mistral AI: Mistral Medium 3

Mistral AI

Mistral Medium 3 is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Mistral AI, this premium model offers excellent quality and balanced performance acro...

Tools Streaming Vision
M

Mistral AI: Mistral Moderation

Mistral AI

Mistral Moderation is a content moderation model for safety and policy compliance checking. Developed by Mistral AI, this model is optimized for its specific use case category.

Streaming Vision
M

Mistral AI: Mistral OCR

Mistral AI

Mistral OCR is an optical character recognition model for extracting text and layout from documents and images. Developed by Mistral AI, this model is optimized for document understanding workloads.

Streaming Vision
M

Mistral AI: Mistral Small

Mistral AI

Mistral Small is a 22-billion parameter model serving as a convenient mid-point between smaller and larger Mistral options. It emphasizes reasoning capabilities, code generation, and multilingual supp...

M

Mistral AI: Mistral Small 3.2

Mistral AI

Mistral Small 3.2 is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Mistral AI, this model provides solid performance and is suitable for most use cases...

Tools Streaming Vision
M

Mistral AI: Mistral Small Creative

Mistral AI

Mistral Small Creative is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Mistral AI, this model is optimized for its specific use case category.

Streaming Vision
M

Mistral AI: Voxtral Mini

Mistral AI

Voxtral Mini is an audio processing model for speech transcription and audio understanding. Developed by Mistral AI, this model is optimized for lightweight audio workloads.

Streaming Vision
M

Mistral AI: Voxtral Small

Mistral AI

Voxtral Small is an audio processing model for speech transcription and audio understanding. Developed by Mistral AI, this model is optimized for speech and audio understanding tasks.

Streaming Vision
M

Mistral Large

Mistral AI

Mistral Large is Mistral AI's flagship offering. The model excels at reasoning, code generation, JSON handling, and chat applications. It is a proprietary model with support for dozens of languages in...

Vision
M

Mistral Medium Model Documentation

Mistral AI

A closed-source, medium-sized model from Mistral AI that excels at reasoning, code, JSON, chat, and more. This model performs comparably to other companies' flagship models and represents Mistral's mi...

Vision
M

Mistral Nemo

Mistral AI

A 12-billion parameter model featuring a 128k token context window, developed by Mistral in partnership with NVIDIA. The model supports multiple languages including English, French, German, Spanish, I...

M

Mistral Small Creative

Mistral AI

Mistral Small Creative is an experimental small model designed for creative writing and conversational use cases.

M

Mistral: Devstral 2 2512

Mistral AI

**Model ID:** `mistralai/devstral-2512`

M

Mistral: Devstral 2 2512 (Free)

Mistral AI

Devstral 2 is a state-of-the-art open-source model by Mistral AI specializing in agentic coding. It is a 123B-parameter dense transformer model supporting a 256K context window.

Vision
M

Mistral: Mixtral 8x7B Instruct

Mistral AI

Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture of Experts model with 8 experts totaling 47 billion parameters. It has been fine-tuned by Mistral AI specifically for chat and instructi...

Vision
M

Mistral: Pixtral 12B

Mistral AI

The first multi-modal, text+image-to-text model from Mistral AI. Its weights were launched via torrent, making it openly available for research and commercial use.

Vision
N

Nous Research: Hermes 3 70B Instruct

Nous Research

Hermes 3 is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coheren...

N

Nous: Hermes 2 Mistral 7B DPO

Nous Research

This represents the primary 7B variant in the Hermes lineup, utilizing Direct Preference Optimization refinement. It's derived from Teknium/OpenHermes-2.5-Mistral-7B and demonstrates "improvement acro...

N

Nous: Hermes 2 Mixtral 8x7B DPO

Nous Research

Nous Hermes 2 Mixtral 8x7B DPO is the flagship Nous Research model trained over the Mixtral 8x7B MoE LLM. The model was trained on over 1,000,000 entries of primarily GPT-4 generated data, as well as ...

N

Nous: Hermes 2 Vision 7B (Alpha)

Nous Research

Nous: Hermes 2 Vision 7B is an alpha-stage vision-language model that extends the capabilities of OpenHermes-2.5 by incorporating visual perception abilities. The model was developed using a specializ...

Vision
N

Nous: Hermes 3 405B Instruct

Nous Research

Hermes 3 is a generalist language model with significant improvements over its predecessor Hermes 2. It is a full-parameter finetune of Llama-3.1 405B, making it one of the largest openly available in...

N

Nous: Hermes 3.1 Llama 3.1 405B

Nous Research

Nous Hermes 3.1 is an advanced iterative refinement of the Hermes 3 model family, built on top of Llama-3.1 405B. This model represents the cutting edge of open-source instruction-tuned models with en...

N

NVIDIA Llama 3.1 Nemotron 70B Instruct

NVIDIA

This model combines the Llama 3.1 70B architecture with Reinforcement Learning from Human Feedback (RLHF) to excel in automatic alignment benchmarks. It is designed for generating precise and useful r...

O

Ollama: Embeddinggemma:300m

Ollama

Embeddinggemma:300m is a compact model optimized for generating high-quality text embeddings, served via Ollama.

Streaming Vision
O

Ollama: Functiongemma:270m

Ollama

Functiongemma:270m is a compact language model served via Ollama, geared toward function calling and lightweight text generation tasks.

Tools Streaming Vision
O

Ollama: Gemma3:1b

Ollama

Gemma3:1b is a compact language model served via Ollama for general-purpose text generation and analysis tasks.

Streaming Vision
O

Ollama: Llama 3.1 8B Instruct

Ollama

Llama 3.1 8B Instruct is Meta's state-of-the-art instruction-tuned language model with 8 billion parameters. It's a compact yet powerful model designed for general-purpose conversational AI, reasoning...

Vision
O

Ollama: Mistral 7B Instruct

Ollama

Mistral 7B Instruct is Mistral AI's powerful 7-billion parameter instruction-tuned language model, renowned for exceptional efficiency and speed. Despite having only 7B parameters, it achieves perform...

O

Ollama: Qwen2.5 7B Instruct

Ollama

Qwen2.5 7B Instruct is Alibaba's latest-generation instruction-tuned language model with 7.6 billion parameters, representing a significant upgrade to the Qwen family. Built on 18 trillion tokens of d...

Vision
O

OpenAI GPT-4 32K Model Documentation

OpenAI

**Source**: OpenRouter (https://langmart.ai/model-docs)

Vision
O

OpenAI GPT-4 Vision Model Specifications

OpenAI

**Last Updated:** December 24, 2025

Vision
O

OpenAI o1-preview

OpenAI

OpenAI o1-preview is a reasoning-focused model designed to "spend more time thinking before responding." It employs chain-of-thought reasoning with self-fact-checking capabilities, making it particula...

Vision
O

OpenAI: Babbage 002

OpenAI

Lightweight legacy text completion model for simple, low-cost completion tasks.

Streaming Vision
O

OpenAI: Chatgpt 4O Latest

OpenAI

OpenAI model: chatgpt-4o-latest This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: Chatgpt Image Latest

OpenAI

OpenAI model: chatgpt-image-latest This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: Codex Mini Latest

OpenAI

OpenAI model: codex-mini-latest This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks. The...

Streaming Vision
O

OpenAI: Computer Use Preview

OpenAI

Computer Use Preview is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by OpenAI, this model is optimized for its specific use case category.

Streaming Vision
O

OpenAI: DALL-E 2

OpenAI

DALL-E 2 is an image generation model for creating visual content from descriptions. Developed by OpenAI, this model is optimized for text-to-image generation.

Streaming Vision
O

OpenAI: DALL-E 3

OpenAI

DALL-E 3 is an image generation model for creating visual content from descriptions. Developed by OpenAI, this model is optimized for text-to-image generation.

Streaming Vision
O

OpenAI: Davinci 002

OpenAI

Legacy text completion model for applications still built on the older completions API.

Streaming Vision
O

OpenAI: Gpt 4O Audio Preview 2024 12 17

OpenAI

OpenAI model: gpt-4o-audio-preview-2024-12-17 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solv...

Streaming Vision
O

OpenAI: Gpt 4O Audio Preview 2025 06 03

OpenAI

OpenAI model: gpt-4o-audio-preview-2025-06-03 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solv...

Streaming Vision
O

OpenAI: Gpt 4O Mini Audio Preview

OpenAI

OpenAI model: gpt-4o-mini-audio-preview This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving ta...

Streaming Vision
O

OpenAI: Gpt 4O Mini Audio Preview 2024 12 17

OpenAI

OpenAI model: gpt-4o-mini-audio-preview-2024-12-17 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem...

Streaming Vision
O

OpenAI: Gpt 4O Mini Realtime Preview

OpenAI

OpenAI model: gpt-4o-mini-realtime-preview This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving...

Streaming Vision
O

OpenAI: Gpt 4O Mini Realtime Preview 2024 12 17

OpenAI

OpenAI model: gpt-4o-mini-realtime-preview-2024-12-17 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex prob...

Streaming Vision
O

OpenAI: Gpt 4O Mini Search Preview

OpenAI

OpenAI model: gpt-4o-mini-search-preview This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving t...

Streaming Vision
O

OpenAI: Gpt 4O Mini Search Preview 2025 03 11

OpenAI

OpenAI model: gpt-4o-mini-search-preview-2025-03-11 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex proble...

Streaming Vision
O

OpenAI: Gpt 4O Mini Transcribe

OpenAI

OpenAI model: gpt-4o-mini-transcribe This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks...

Streaming Vision
O

OpenAI: Gpt 4O Mini Transcribe 2025 03 20

OpenAI

OpenAI model: gpt-4o-mini-transcribe-2025-03-20 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-so...

Streaming Vision
O

OpenAI: Gpt 4O Mini Transcribe 2025 12 15

OpenAI

OpenAI model: gpt-4o-mini-transcribe-2025-12-15 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-so...

Streaming Vision
O

OpenAI: Gpt 4O Realtime Preview

OpenAI

OpenAI model: gpt-4o-realtime-preview This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving task...

Streaming Vision
O

OpenAI: Gpt 4O Realtime Preview 2024 10 01

OpenAI

OpenAI model: gpt-4o-realtime-preview-2024-10-01 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-s...

Streaming Vision
O

OpenAI: Gpt 4O Realtime Preview 2024 12 17

OpenAI

OpenAI model: gpt-4o-realtime-preview-2024-12-17 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-s...

Streaming Vision
O

OpenAI: Gpt 4O Realtime Preview 2025 06 03

OpenAI

OpenAI model: gpt-4o-realtime-preview-2025-06-03 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-s...

Streaming Vision
O

OpenAI: Gpt 4O Search Preview

OpenAI

OpenAI model: gpt-4o-search-preview This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks....

Streaming Vision
O

OpenAI: Gpt 4O Search Preview 2025 03 11

OpenAI

OpenAI model: gpt-4o-search-preview-2025-03-11 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-sol...

Streaming Vision
O

OpenAI: Gpt 4O Transcribe

OpenAI

OpenAI model: gpt-4o-transcribe This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: Gpt 4O Transcribe Diarize

OpenAI

OpenAI model: gpt-4o-transcribe-diarize This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving ta...

Streaming Vision
O

OpenAI: Gpt 5

OpenAI

OpenAI model: gpt-5 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: Gpt 5 2025 08 07

OpenAI

OpenAI model: gpt-5-2025-08-07 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: Gpt 5 Chat Latest

OpenAI

OpenAI model: gpt-5-chat-latest This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: Gpt 5 Codex

OpenAI

OpenAI model: gpt-5-codex This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks. The model...

Streaming Vision
O

OpenAI: Gpt 5 Mini

OpenAI

OpenAI model: gpt-5-mini This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: Gpt 5 Mini 2025 08 07

OpenAI

OpenAI model: gpt-5-mini-2025-08-07 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks....

Streaming Vision
O

OpenAI: Gpt 5 Nano

OpenAI

OpenAI model: gpt-5-nano This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: Gpt 5 Nano 2025 08 07

OpenAI

OpenAI model: gpt-5-nano-2025-08-07 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks....

Streaming Vision
O

OpenAI: Gpt 5 Pro

OpenAI

OpenAI model: gpt-5-pro This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: Gpt 5 Pro 2025 10 06

OpenAI

OpenAI model: gpt-5-pro-2025-10-06 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: Gpt 5 Search Api

OpenAI

OpenAI model: gpt-5-search-api This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: Gpt 5 Search Api 2025 10 14

OpenAI

OpenAI model: gpt-5-search-api-2025-10-14 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving ...

Streaming Vision
O

OpenAI: Gpt 5.1

OpenAI

OpenAI model: gpt-5.1 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: Gpt 5.1 2025 11 13

OpenAI

OpenAI model: gpt-5.1-2025-11-13 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: Gpt 5.1 Chat Latest

OpenAI

OpenAI model: gpt-5.1-chat-latest This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: Gpt 5.1 Codex

OpenAI

OpenAI model: gpt-5.1-codex This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks. The mod...

Streaming Vision
O

OpenAI: Gpt 5.1 Codex Max

OpenAI

OpenAI model: gpt-5.1-codex-max This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks. The...

Streaming Vision
O

OpenAI: Gpt 5.1 Codex Mini

OpenAI

OpenAI model: gpt-5.1-codex-mini This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks. Th...

Streaming Vision
O

OpenAI: Gpt 5.2

OpenAI

OpenAI model: gpt-5.2 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: Gpt 5.2 2025 12 11

OpenAI

OpenAI model: gpt-5.2-2025-12-11 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: Gpt 5.2 Chat Latest

OpenAI

OpenAI model: gpt-5.2-chat-latest This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: Gpt 5.2 Pro

OpenAI

OpenAI model: gpt-5.2-pro This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: Gpt 5.2 Pro 2025 12 11

OpenAI

OpenAI model: gpt-5.2-pro-2025-12-11 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks...

Streaming Vision
O

OpenAI: Gpt Audio

OpenAI

OpenAI model: gpt-audio This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: Gpt Audio 2025 08 28

OpenAI

OpenAI model: gpt-audio-2025-08-28 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: Gpt Audio Mini

OpenAI

OpenAI model: gpt-audio-mini This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: Gpt Audio Mini 2025 10 06

OpenAI

OpenAI model: gpt-audio-mini-2025-10-06 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving ta...

Streaming Vision
O

OpenAI: Gpt Audio Mini 2025 12 15

OpenAI

OpenAI model: gpt-audio-mini-2025-12-15 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving ta...

Streaming Vision
O

OpenAI: GPT Image 1

OpenAI

OpenAI's first-generation image generation model integrated with GPT capabilities. Enables text-to-image generation with natural language understanding. This model supports multimodal capabilities inc...

Vision
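
A minimal usage sketch for this model, assuming access through OpenAI's official Python SDK (if you reach it through a gateway such as LangMart instead, adjust the client's base URL and key). The prompt, size, and output filename are illustrative placeholders.

```python
# Minimal sketch: generating an image with gpt-image-1 via the official
# OpenAI Python SDK. gpt-image-1 returns base64-encoded image data; the
# prompt and output filename are placeholders.
import base64
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.images.generate(
    model="gpt-image-1",
    prompt="A clean line-art diagram of a client talking to an API gateway",
    size="1024x1024",
)

# Decode the base64 payload and write it out as a PNG file.
image_bytes = base64.b64decode(resp.data[0].b64_json)
Path("diagram.png").write_bytes(image_bytes)
```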
O

OpenAI: GPT Image 1 Mini

OpenAI

A lightweight version of OpenAI's GPT Image 1, optimized for faster generation and lower cost while maintaining good quality. This model supports multimodal capabilities including vision and image und...

Vision
O

OpenAI: GPT Image 1.5

OpenAI

OpenAI's enhanced image generation model with improved quality, better prompt understanding, and more detailed outputs compared to GPT Image 1.

Vision
O

OpenAI: Gpt Realtime

OpenAI

OpenAI model: gpt-realtime This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: Gpt Realtime 2025 08 28

OpenAI

OpenAI model: gpt-realtime-2025-08-28 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving task...

Streaming Vision
O

OpenAI: Gpt Realtime Mini

OpenAI

OpenAI model: gpt-realtime-mini This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: Gpt Realtime Mini 2025 10 06

OpenAI

OpenAI model: gpt-realtime-mini-2025-10-06 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving...

Streaming Vision
O

OpenAI: Gpt Realtime Mini 2025 12 15

OpenAI

OpenAI model: gpt-realtime-mini-2025-12-15 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving...

Streaming Vision
O

OpenAI: GPT-3.5 Turbo

OpenAI

Fast and affordable text model suitable for most general-purpose chat and completion applications.

Tools Streaming Vision
O

OpenAI: GPT-3.5 Turbo (January 2024)

OpenAI

January 2024 update of GPT-3.5 Turbo with improved performance and accuracy on text tasks.

Tools Streaming Vision
O

OpenAI: GPT-3.5 Turbo (November 2023)

OpenAI

November 2023 version of GPT-3.5 Turbo with an extended 16K context window and improved instruction following.

Tools Streaming Vision
O

OpenAI: GPT-3.5 Turbo 16K

OpenAI

Extended 16K context window version of GPT-3.5 Turbo for longer conversations and documents.

Tools Streaming Vision
O

OpenAI: GPT-3.5 Turbo Instruct

OpenAI

Instruction-following completion variant of GPT-3.5 Turbo, suited to prompt-completion tasks rather than multi-turn chat.

Streaming Vision
O

OpenAI: GPT-3.5 Turbo Instruct (September 2023)

OpenAI

September 2023 release of the GPT-3.5 Turbo Instruct variant for prompt-completion tasks.

Streaming Vision
O

OpenAI: GPT-4

OpenAI

OpenAI's flagship model with advanced reasoning and problem-solving capabilities. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning c...

Tools Streaming Reasoning Vision
O

OpenAI: GPT-4 (June 13, 2023)

OpenAI

Specific version of GPT-4 from June 2023 with enhanced instruction following. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capab...

Tools Streaming Reasoning Vision
O

OpenAI: GPT-4 Turbo

OpenAI

Faster variant of GPT-4 with extended context window and lower cost. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities f...

Tools Streaming Reasoning Vision
O

OpenAI: GPT-4 Turbo (April 2024)

OpenAI

April 2024 version of GPT-4 Turbo with updated knowledge cutoff. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for c...

Tools Streaming Reasoning Vision
O

OpenAI: GPT-4 Turbo Preview

OpenAI

Preview of GPT-4 Turbo with early access to new features. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex ...

Tools Streaming Reasoning Vision
O

OpenAI: GPT-4 Turbo Preview (January 2024)

OpenAI

Latest GPT-4 Turbo variant with expanded context window and improved capabilities. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning ...

Tools Streaming Reasoning Vision
O

OpenAI: GPT-4 Turbo Preview (November 2023)

OpenAI

GPT-4 Turbo with 128K context window for processing large documents. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities f...

Tools Streaming Reasoning Vision
O

OpenAI: GPT-4.1

OpenAI

Enhanced version of GPT-4 with improved reasoning and multimodal capabilities. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capa...

Vision Tools Streaming Reasoning
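
Because this entry carries the Tools capability, a short function-calling sketch may help. It assumes the official OpenAI Python SDK; the get_weather schema is a made-up example, not part of any real API.

```python
# Minimal sketch: exposing one tool to gpt-4.1 through the chat-completions
# "tools" parameter and reading back the model's tool call.
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical function, for illustration only
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
    tools=tools,
)

# The model may answer directly; when it decides to call the tool, the call
# arrives as structured JSON arguments.
message = resp.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
else:
    print(message.content)
```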
O

OpenAI: GPT-4.1 (April 2025)

OpenAI

Latest GPT-4.1 variant with current knowledge cutoff. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex prob...

Vision Tools Streaming Reasoning
O

OpenAI: GPT-4.1 Mini

OpenAI

Compact version of GPT-4.1 for cost-efficient applications. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for comple...

Tools Streaming Vision
O

OpenAI: GPT-4.1 Mini (April 2025)

OpenAI

Latest mini variant with efficient performance. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-so...

Tools Streaming Vision
O

OpenAI: GPT-4.1 Nano

OpenAI

Ultra-lightweight variant for cost-sensitive applications. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex...

Streaming Vision
O

OpenAI: GPT-4.1 Nano (April 2025)

OpenAI

Latest nano variant for ultra-fast inference with efficient performance. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabiliti...

Streaming Vision
O

OpenAI: GPT-4o (August 2024)

OpenAI

August 2024 update with improved performance. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solv...

Vision Tools Streaming Reasoning
O

OpenAI: GPT-4o (May 2024)

OpenAI

Initial release of GPT-4o optimized model. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving...

Vision Tools Streaming Reasoning
O

OpenAI: GPT-4o (November 2024)

OpenAI

Latest GPT-4o with updated knowledge and improved capabilities. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for co...

Vision Tools Streaming Reasoning
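
A minimal vision-input sketch, assuming the official OpenAI Python SDK and the gpt-4o-2024-11-20 snapshot identifier; the image URL is a placeholder.

```python
# Minimal sketch: sending an image URL plus a text question to GPT-4o via
# chat completions with an image_url content part.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-2024-11-20",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What trend does this chart show?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```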
O

OpenAI: GPT-4o Audio Preview

OpenAI

Preview of GPT-4o with audio processing capabilities. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex prob...

Vision Tools Streaming Reasoning
O

OpenAI: GPT-4o Audio Preview (October 2024)

OpenAI

October release of audio-enabled GPT-4o preview. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-s...

Vision Tools Streaming Reasoning
O

OpenAI: GPT-4o Mini

OpenAI

Compact multimodal model for efficient applications. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex probl...

Vision Tools Streaming
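
Since this entry is tagged for streaming, here is a minimal streaming sketch, assuming the official OpenAI Python SDK and the gpt-4o-mini model identifier.

```python
# Minimal sketch: streaming a chat completion from gpt-4o-mini and printing
# tokens as they arrive.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Suggest three names for a build tool."}],
    stream=True,
)

for chunk in stream:
    # Each chunk carries an incremental delta; some chunks may be empty.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```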
O

OpenAI: GPT-4o Mini (July 2024)

OpenAI

Initial release of the mini multimodal variant. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-so...

Vision Tools Streaming
O

OpenAI: GPT-4o Mini TTS

OpenAI

GPT-4o Mini TTS is an audio model for speech synthesis (text-to-speech). Developed by OpenAI, it converts text into natural-sounding spoken audio.

Streaming Vision
O

OpenAI: GPT-5.2 Chat (AKA Instant)

OpenAI

GPT-5.2 Chat is the fast, lightweight member of the 5.2 family, optimized for low-latency chat while retaining strong general intelligence. The model uses adaptive reasoning to selectively engage deep...

Vision
O

OpenAI: O1

OpenAI

OpenAI model: o1 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: O1 2024 12 17

OpenAI

OpenAI model: o1-2024-12-17 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: O1 Mini

OpenAI

OpenAI model: o1-mini This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: O1 Mini 2024 09 12

OpenAI

OpenAI model: o1-mini-2024-09-12 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: O1 Pro

OpenAI

OpenAI model: o1-pro This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: O1 Pro 2025 03 19

OpenAI

OpenAI model: o1-pro-2025-03-19 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: O3

OpenAI

OpenAI model: o3 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: O3 2025 04 16

OpenAI

OpenAI model: o3-2025-04-16 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: O3 Deep Research

OpenAI

O3 Deep Research is an agentic research model designed for multi-step web research, analysis, and report generation. Developed by OpenAI, this model is optimized for long-running deep research tasks.

Streaming Vision
O

OpenAI: O3 Mini

OpenAI

OpenAI model: o3-mini This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: O3 Mini 2025 01 31

OpenAI

OpenAI model: o3-mini-2025-01-31 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: O3 Pro

OpenAI

O3 Pro is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by OpenAI, this model is optimized for its specific use case category.

Streaming Vision
O

OpenAI: O4 Mini

OpenAI

OpenAI model: o4-mini This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: O4 Mini 2025 04 16

OpenAI

OpenAI model: o4-mini-2025-04-16 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.

Streaming Vision
O

OpenAI: O4 Mini Deep Research

OpenAI

OpenAI model: o4-mini-deep-research This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks....

Streaming Vision
O

OpenAI: O4 Mini Deep Research 2025 06 26

OpenAI

OpenAI model: o4-mini-deep-research-2025-06-26 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-sol...

Streaming Vision
O

OpenAI: Omni Moderation

OpenAI

Omni Moderation is a content moderation model for safety and policy compliance checking. Developed by OpenAI, this model is optimized for its specific use case category.

Streaming Vision
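
A minimal moderation sketch, assuming the official OpenAI Python SDK and the omni-moderation-latest identifier; the input string is a placeholder.

```python
# Minimal sketch: screening a piece of user text with the omni-moderation
# model before passing it on to an application.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.moderations.create(
    model="omni-moderation-latest",
    input="Example user message to screen before it reaches your application.",
)

verdict = result.results[0]
print("flagged:", verdict.flagged)
# Show only the categories that were triggered (harassment, violence, ...).
print({name: hit for name, hit in verdict.categories.model_dump().items() if hit})
```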
O

OpenAI: Omni Moderation 2024 09 26

OpenAI

OpenAI model: omni-moderation-2024-09-26 This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving t...

Streaming Vision
O

OpenAI: Omni Moderation Latest

OpenAI

OpenAI model: omni-moderation-latest This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks...

Streaming Vision
O

OpenAI: Sora 2

OpenAI

OpenAI model: sora-2. A video generation model that creates short videos from natural-language prompts and images.

Streaming Vision
O

OpenAI: Sora 2 Pro

OpenAI

OpenAI model: sora-2-pro. A higher-quality tier of the Sora 2 video generation model.

Streaming Vision
O

OpenAI: Text Embedding 3 Large

OpenAI

Large text embedding model for advanced semantic tasks such as search, clustering, classification, and retrieval-augmented generation. It returns dense vector representations of text rather than generated text.

Streaming Vision
O

OpenAI: Text Embedding 3 Small

OpenAI

Efficient text embedding model that produces high-quality vector representations at low cost, well suited to semantic search, clustering, and recommendation workloads.

Streaming Vision
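
A minimal embedding sketch, assuming the official OpenAI Python SDK; the two example strings are placeholders, and cosine similarity is computed by hand to keep the example dependency-free.

```python
# Minimal sketch: embedding two texts with text-embedding-3-small and
# comparing them with cosine similarity.
import math

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

texts = ["How do I reset my password?", "Steps to recover a lost account login"]
resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
a, b = (item.embedding for item in resp.data)

dot = sum(x * y for x, y in zip(a, b))
norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
print(f"cosine similarity: {dot / norm:.3f}")
```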
O

OpenAI: Text Embedding Ada 002

OpenAI

Legacy text embedding model that is still widely used; for most new applications, the Text Embedding 3 family is the recommended replacement.

Streaming Vision
O

OpenAI: TTS

OpenAI

TTS is an audio model for speech synthesis (text-to-speech). Developed by OpenAI, it converts text into natural-sounding spoken audio with low latency.

Streaming Vision
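
A minimal text-to-speech sketch, assuming the official OpenAI Python SDK, the tts-1 identifier, and the built-in "alloy" voice; the input text and output path are placeholders.

```python
# Minimal sketch: synthesizing speech with tts-1 and writing the MP3 bytes
# to disk.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="The deployment finished successfully.",
)

# The response body is the raw audio (MP3 by default).
Path("speech.mp3").write_bytes(response.content)
```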
O

OpenAI: TTS HD

OpenAI

TTS HD is an audio model for speech synthesis (text-to-speech). Developed by OpenAI, it is tuned for higher audio quality than the standard TTS model.

Streaming Vision
O

OpenAI: Whisper

OpenAI

Whisper is an audio model for speech recognition. Developed by OpenAI, it transcribes spoken audio into text and can also translate speech into English.

Streaming Vision
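
A minimal transcription sketch, assuming the official OpenAI Python SDK and the whisper-1 API identifier; the audio filename is a placeholder.

```python
# Minimal sketch: transcribing a local audio file with Whisper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("meeting.mp3", "rb") as audio_file:  # placeholder filename
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

print(transcript.text)
```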
O

Openai-compatible: Fake Gpt 4 Vision

Openai Compatible

Fake Gpt 4 Vision with vision capabilities for processing images and visual content. This model supports multimodal capabilities including vision and image understanding. It features advanced reasonin...

Vision Streaming
O

OpenChat 3.5 7B (Free)

Openchat

OpenChat is a library of open-source language models fine-tuned with C-RLFT, a strategy inspired by offline reinforcement learning. The model is trained on mixed-quality data without preference labels...

Reasoning Vision
O

LangMart: AI21 Jamba Large 1.7

Openrouter

AI21's Jamba Large 1.7 is a state-of-the-art hybrid SSM-Transformer model with a 256K context window. Designed for enterprise applications requiring long-context understanding.

Tools Streaming
O

LangMart: AI21 Jamba Mini 1.7

Openrouter

AI21's Jamba Mini 1.7 is a compact hybrid SSM-Transformer model with a 256K context window. Optimized for cost-effective long-context applications.

Streaming
O

LangMart: Aion Labs/aion 1.0

Openrouter

Aion Labs/aion 1.0 is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image understandin...

Streaming Vision
O

LangMart: Aion Labs/aion 1.0 Mini

Openrouter

Aion Labs/aion 1.0 Mini is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image underst...

Streaming Vision
O

LangMart: Aion Labs/aion Rp Llama 3.1 8b

Openrouter

Aion Labs/aion Rp Llama 3.1 8b is a capable language model from LangMart for general-purpose text generation and analysis tasks.

Streaming Vision
O

LangMart: AlfredPros: CodeLLaMa 7B Instruct Solidity

Openrouter

A fine-tuned 7-billion-parameter Code LLaMA - Instruct model for generating Solidity smart contracts, trained with 4-bit QLoRA fine-tuning via the PEFT library.

O

LangMart: Alibaba/tongyi Deepresearch 30b A3b:free

Openrouter

Free tier version of Alibaba/tongyi Deepresearch 30b A3b:free. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: AllenAI: Olmo 2 32B Instruct

Openrouter

OLMo-2 32B Instruct is a supervised instruction-finetuned variant of the OLMo-2 32B March 2025 base model. It excels in complex reasoning and instruction-following tasks across diverse benchmarks such...

O

LangMart: AllenAI: Olmo 3 7B Instruct

Openrouter

Olmo 3 7B Instruct is a supervised instruction-fine-tuned variant of the Olmo 3 7B base model, optimized for instruction-following, question-answering, and natural conversational dialogue. By leveragi...

O

LangMart: AllenAI: Olmo 3 7B Think

Openrouter

Olmo 3 7B Think is a research-oriented language model in the Olmo family designed for advanced reasoning and instruction-driven tasks. It excels at multi-step problem solving, logical inference, and m...

O

LangMart: Allenai/olmo 3 32b Think:free

Openrouter

Free tier version of Allenai/olmo 3 32b Think:free. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Allenai/olmo 3.1 32b Think:free

Openrouter

Free tier version of Allenai/olmo 3.1 32b Think:free. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Amazon: Nova 2 Lite

Openrouter

Nova 2 Lite is a fast, cost-effective reasoning model for everyday workloads that can process text, images, and videos to generate text.

Vision
O

LangMart: Amazon: Nova Lite 1.0

Openrouter

Amazon Nova Lite 1.0 is a very low-cost multimodal model from Amazon focused on fast processing of image, video, and text inputs to generate text output. Amazon Nova Lite can handle real-time cus...

Vision
O

LangMart: Amazon: Nova Micro 1.0

Openrouter

Amazon Nova Micro 1.0 is a text-only model that delivers the lowest latency responses in the Amazon Nova family of models at a very low cost. With a context length of 128K tokens and optimized for spe...

O

LangMart: Amazon: Nova Premier 1.0

Openrouter

Amazon Nova Premier is the most capable of Amazon’s multimodal models for complex reasoning tasks and for use as the best teacher for distilling custom models.

Vision
O

LangMart: Amazon: Nova Pro 1.0

Openrouter

Amazon Nova Pro 1.0 is a capable multimodal model from Amazon focused on providing a combination of accuracy, speed, and cost for a wide range of tasks. As of December 2024, it achieves state-of-the-a...

Vision
O

LangMart: Anthropic Claude 3.5 Sonnet

Openrouter

Anthropic's Claude 3.5 Sonnet accessed via LangMart. A balanced model combining strong intelligence with fast response times, ideal for most use cases.

Vision Tools Streaming
O

LangMart: Anthropic Claude 3.7 Sonnet

Openrouter

Anthropic's Claude 3.7 Sonnet accessed via LangMart. An enhanced version with improved reasoning and capabilities over Claude 3.5 Sonnet. This model supports multimodal capabilities including vision a...

Vision Tools
O

LangMart: Anthropic Claude Haiku 4.5

Openrouter

Anthropic's fastest and most efficient Claude model accessed via LangMart. Designed for high-volume, low-latency applications requiring quick responses. This model supports multimodal capabilities inc...

Vision Streaming
O

LangMart: Anthropic: Claude 3 Haiku

Openrouter

Claude 3 Haiku is Anthropic's fastest and most compact model for near-instant responsiveness.

Vision
O

LangMart: Anthropic: Claude 3 Opus

Openrouter

Claude 3 Opus is Anthropic's most powerful model for highly complex tasks. It boasts top-level performance, intelligence, fluency, and understanding.

Vision
O

LangMart: Anthropic: Claude Opus 4

Openrouter

Claude Opus 4 is benchmarked as the world’s best coding model at the time of release, bringing sustained performance on complex, long-running tasks and agent workflows. It sets new benchmarks in software...

Vision
O

LangMart: Anthropic: Claude Sonnet 4

Openrouter

Claude Sonnet 4 significantly enhances the capabilities of its predecessor, Sonnet 3.7, excelling in both coding and reasoning tasks with improved precision and controllability. Achieving state-of-the...

Vision
O

LangMart: Anthropic/claude 3.5 Haiku

Openrouter

Anthropic/claude 3.5 Haiku is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image unde...

Streaming Vision
O

LangMart: Anthropic/claude 3.5 Haiku 20241022

Openrouter

Anthropic/claude 3.5 Haiku 20241022 is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and i...

Streaming Vision
O

LangMart: Anthropic/claude 3.7 Sonnet:thinking

Openrouter

Anthropic/claude 3.7 Sonnet:thinking with extended thinking capabilities. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Anthropic/claude Opus 4.1

Openrouter

Anthropic/claude Opus 4.1 is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image under...

Streaming Vision
O

LangMart: Anthropic/claude Opus 4.5

Openrouter

Anthropic/claude Opus 4.5 is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image under...

Streaming Vision
O

LangMart: Anthropic/claude Sonnet 4.5

Openrouter

Anthropic/claude Sonnet 4.5 is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image und...

Streaming Vision
O

LangMart: Arcee AI: Coder Large

Openrouter

Coder‑Large is a 32 B‑parameter offspring of Qwen 2.5‑Instruct that has been further trained on permissively‑licensed GitHub, CodeSearchNet and synthetic bug‑fix corpora. It supports a 32k context win...

O

LangMart: Arcee AI: Maestro Reasoning

Openrouter

Maestro Reasoning is Arcee's flagship analysis model: a 32 B‑parameter derivative of Qwen 2.5‑32 B tuned with DPO and chain‑of‑thought RL for step‑by‑step logic. Compared to the earlier 7 B preview, t...

O

LangMart: Arcee AI: Spotlight

Openrouter

Spotlight is a 7‑billion‑parameter vision‑language model derived from Qwen 2.5‑VL and fine‑tuned by Arcee AI for tight image‑text grounding tasks. It offers a 32 k‑token context window, enabling rich ...

Vision
O

LangMart: Arcee AI: Trinity Mini

Openrouter

Trinity Mini is a 26B-parameter (3B active) sparse mixture-of-experts language model featuring 128 experts with 8 active per token. Engineered for efficient reasoning over long contexts (131k) with ro...

O

LangMart: Arcee AI: Virtuoso Large

Openrouter

Virtuoso‑Large is Arcee's top‑tier general‑purpose LLM at 72 B parameters, tuned to tackle cross‑domain reasoning, creative writing and enterprise QA. Unlike many 70 B peers, it retains the 128 k cont...

O

LangMart: Arcee Ai/trinity Mini:free

Openrouter

Free tier version of Arcee Ai/trinity Mini:free. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: ArliAI: QwQ 32B RpR v1

Openrouter

QwQ-32B-ArliAI-RpR-v1 is a 32B parameter model fine-tuned from Qwen/QwQ-32B using a curated creative writing and roleplay dataset originally developed for the RPMax series. It is designed to maintain ...

O

LangMart: Auto Router

Openrouter

Your prompt will be processed by a meta-model and routed to one of dozens of models (see below), optimizing for the best possible output.
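
A minimal routing sketch. The base URL, API-key environment variable, and the "langmart/auto" model slug below are assumptions for illustration and may differ from the live API; the request otherwise follows the common OpenAI-compatible chat-completions shape.

```python
# Illustrative sketch: sending a prompt to the Auto Router and checking which
# underlying model ended up serving it.
import os

import requests

resp = requests.post(
    "https://api.langmart.ai/v1/chat/completions",  # assumed base URL
    headers={"Authorization": f"Bearer {os.environ['LANGMART_API_KEY']}"},
    json={
        "model": "langmart/auto",  # assumed router slug
        "messages": [{"role": "user", "content": "Summarize the CAP theorem in two sentences."}],
    },
    timeout=60,
)
resp.raise_for_status()
data = resp.json()

print(data["choices"][0]["message"]["content"])
# OpenAI-compatible responses typically echo the model that handled the call.
print("routed to:", data.get("model"))
```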

O

LangMart: Baidu/ernie 4.5 21b A3b

Openrouter

Baidu/ernie 4.5 21b A3b is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image underst...

Streaming Vision
O

LangMart: Baidu/ernie 4.5 21b A3b Thinking

Openrouter

Baidu/ernie 4.5 21b A3b Thinking with extended thinking capabilities. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Baidu/ernie 4.5 300b A47b

Openrouter

Baidu/ernie 4.5 300b A47b is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image under...

Streaming Vision
O

LangMart: Baidu/ernie 4.5 Vl 28b A3b

Openrouter

Baidu/ernie 4.5 Vl 28b A3b is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image unde...

Streaming Vision
O

LangMart: Baidu/ernie 4.5 Vl 424b A47b

Openrouter

Baidu/ernie 4.5 Vl 424b A47b is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image un...

Streaming Vision
O

LangMart: Body Builder (beta)

Openrouter

Transform your natural language requests into structured LangMart API request objects. Describe what you want to accomplish with AI models, and Body Builder will construct the appropriate API calls. E...
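
An illustrative sketch of one way this could be used: ask the Body Builder meta-model for a request body in plain English, then replay it against the API. The endpoint, the "langmart/body-builder" slug, and the assumption that the reply is pure JSON are all unconfirmed and shown only for illustration.

```python
# Illustrative sketch: turning a natural-language instruction into a
# structured chat-completions request object.
import json
import os

import requests

instruction = "Ask a fast, cheap vision model to describe the attached product photo."

resp = requests.post(
    "https://api.langmart.ai/v1/chat/completions",  # assumed base URL
    headers={"Authorization": f"Bearer {os.environ['LANGMART_API_KEY']}"},
    json={
        "model": "langmart/body-builder",  # assumed slug for this beta model
        "messages": [{"role": "user", "content": instruction}],
    },
    timeout=60,
)
resp.raise_for_status()

# If the reply is a JSON request object, it can be inspected or replayed
# directly against the same API.
request_object = json.loads(resp.json()["choices"][0]["message"]["content"])
print(json.dumps(request_object, indent=2))
```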

O

LangMart: Bytedance Seed/seed 1.6

Openrouter

Bytedance Seed/seed 1.6 is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image underst...

Streaming Vision
O

LangMart: Bytedance Seed/seed 1.6 Flash

Openrouter

Bytedance Seed/seed 1.6 Flash with optimized speed for rapid response generation. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Bytedance/ui Tars 1.5 7b

Openrouter

Bytedance/ui Tars 1.5 7b is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image unders...

Streaming Vision
O

LangMart: Cogito V2 Preview Llama 109B

Openrouter

An instruction-tuned, hybrid-reasoning Mixture-of-Experts model built on Llama-4-Scout-17B-16E. Cogito v2 can answer directly or engage an extended “thinking” phase, with alignment guided by Iterated ...

Vision
O

LangMart: Cognitivecomputations/dolphin Mistral 24b Venice Edition:free

Openrouter

Free tier version of Cognitivecomputations/dolphin Mistral 24b Venice Edition:free. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Cohere: Command A

Openrouter

Command A is an open-weights 111B parameter model with a 256k context window focused on delivering great performance across agentic, multilingual, and coding use cases.

O

LangMart: Cohere: Command R (08-2024)

Openrouter

command-r-08-2024 is an update of the [Command R](/models/cohere/command-r) with improved performance for multilingual retrieval-augmented generation (RAG) and tool use. More broadly, it is better at ...

O

LangMart: Cohere: Command R+ (08-2024)

Openrouter

command-r-plus-08-2024 is an update of the [Command R+](/models/cohere/command-r-plus) with roughly 50% higher throughput and 25% lower latencies as compared to the previous Command R+ version, while ...

O

LangMart: Cohere: Command R7B (12-2024)

Openrouter

Command R7B (12-2024) is a small, fast update of the Command R+ model, delivered in December 2024. It excels at RAG, tool use, agents, and similar tasks requiring complex reasoning and multiple steps....

O

LangMart: Deep Cogito: Cogito V2 Preview Llama 405B

Openrouter

Cogito v2 405B is a dense hybrid reasoning model that combines direct answering capabilities with advanced self-reflection. It represents a significant step toward frontier intelligence with dense arc...

O

LangMart: Deep Cogito: Cogito V2 Preview Llama 70B

Openrouter

Cogito v2 70B is a dense hybrid reasoning model that combines direct answering capabilities with advanced self-reflection. Built with iterative policy improvement, it delivers strong performance acros...

O

LangMart: Deepcogito/cogito V2.1 671b

Openrouter

Deepcogito/cogito V2.1 671b is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image und...

Streaming Vision
O

LangMart: DeepSeek: DeepSeek Prover V2

Openrouter

DeepSeek Prover V2 is a 671B parameter model, speculated to be geared towards logic and mathematics. Likely an upgrade from [DeepSeek-Prover-V1.5](https://huggingface.co/deepseek-ai/DeepSeek-Prover-V1...

O

LangMart: DeepSeek: DeepSeek R1 0528 Qwen3 8B

Openrouter

DeepSeek-R1-0528 is a lightly upgraded release of DeepSeek R1 that taps more compute and smarter post-training tricks, pushing its reasoning and inference close to flagship models like O3 and G...

O

LangMart: DeepSeek: DeepSeek V3

Openrouter

DeepSeek-V3 is the latest model from the DeepSeek team, building upon the instruction following and coding abilities of the previous versions. Pre-trained on nearly 15 trillion tokens, the reported ev...

O

LangMart: DeepSeek: DeepSeek V3 0324

Openrouter

DeepSeek V3, a 685B-parameter, mixture-of-experts model, is the latest iteration of the flagship chat model family from the DeepSeek team.

O

LangMart: DeepSeek: R1

Openrouter

DeepSeek R1 is here: Performance on par with [OpenAI o1](/openai/o1), but open-sourced and with fully open reasoning tokens. It's 671B parameters in size, with 37B active in an inference pass.

O

LangMart: DeepSeek: R1 0528

Openrouter

May 28th update to the [original DeepSeek R1](/deepseek/deepseek-r1). Performance on par with [OpenAI o1](/openai/o1), but open-sourced and with fully open reasoning tokens. It's 671B parameters in siz...

O

LangMart: DeepSeek: R1 Distill Llama 70B

Openrouter

DeepSeek R1 Distill Llama 70B is a distilled large language model based on [Llama-3.3-70B-Instruct](/meta-llama/llama-3.3-70b-instruct), using outputs from [DeepSeek R1](/deepseek/deepseek-r1). The mo...

O

LangMart: DeepSeek: R1 Distill Qwen 14B

Openrouter

DeepSeek R1 Distill Qwen 14B is a distilled large language model based on [Qwen 2.5 14B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B), using outputs from [DeepSeek R1](/deepseek/de...

O

LangMart: DeepSeek: R1 Distill Qwen 32B

Openrouter

DeepSeek R1 Distill Qwen 32B is a distilled large language model based on [Qwen 2.5 32B](https://huggingface.co/Qwen/Qwen2.5-32B), using outputs from [DeepSeek R1](/deepseek/deepseek-r1). It outperfor...

O

LangMart: Deepseek/deepseek Chat V3.1

Openrouter

Deepseek/deepseek Chat V3.1 is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image und...

Streaming Vision
O

LangMart: Deepseek/deepseek R1 0528:free

Openrouter

Free tier version of Deepseek/deepseek R1 0528:free. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Deepseek/deepseek V3.1 Terminus

Openrouter

Deepseek/deepseek V3.1 Terminus is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image...

Streaming Vision
O

LangMart: Deepseek/deepseek V3.1 Terminus:exacto

Openrouter

Deepseek/deepseek V3.1 Terminus:exacto is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision an...

Streaming Vision
O

LangMart: Deepseek/deepseek V3.2

Openrouter

Deepseek/deepseek V3.2 is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image understa...

Streaming Vision
O

LangMart: Deepseek/deepseek V3.2 Exp

Openrouter

Deepseek/deepseek V3.2 Exp is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image unde...

Streaming Vision
O

LangMart: Deepseek/deepseek V3.2 Speciale

Openrouter

Deepseek/deepseek V3.2 Speciale is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image...

Streaming Vision
O

LangMart: EleutherAI: Llemma 7b

Openrouter

EleutherAI: Llemma 7b is a language model specialized for mathematics, suited to mathematical reasoning and work with formal theorem provers. Developed by EleutherAI and served via LangMart.

Streaming Vision
O

LangMart: EssentialAI: Rnj 1 Instruct

Openrouter

Rnj-1 is an 8B-parameter, dense, open-weight model family developed by Essential AI and trained from scratch with a focus on programming, math, and scientific reasoning. The model demonstrates strong ...

O

LangMart: Goliath 120B

Openrouter

A large LLM created by combining two fine-tuned Llama 70B models into one 120B model. Combines Xwin and Euryale.

O

LangMart: Google: Gemini 2.0 Flash

Openrouter

Google: Gemini 2.0 Flash is a fast multimodal model from Google for understanding text, image, audio, and video inputs with low latency. Served via LangMart.

Streaming Vision
O

LangMart: Google: Gemini 2.5 Flash

Openrouter

Google: Gemini 2.5 Flash is a fast, cost-efficient multimodal model from Google with strong reasoning over text, image, audio, and video inputs. Served via LangMart.

Streaming Vision
O

LangMart: Google: Gemini 2.5 Flash Image (Nano Banana)

Openrouter

Google: Gemini 2.5 Flash Image (Nano Banana) is an image generation model from Google for creating and editing images from natural-language descriptions. Served via LangMart.

Streaming Vision
O

LangMart: Google: Gemini 2.5 Flash Image Preview (Nano Banana)

Openrouter

Google: Gemini 2.5 Flash Image Preview (Nano Banana) is the preview release of Google's Gemini 2.5 Flash image generation model for creating and editing images from natural-language descriptions. Served via LangMart.

Streaming Vision
O

LangMart: Google: Gemini 2.5 Flash Lite

Openrouter

Google: Gemini 2.5 Flash Lite is a lightweight, low-cost multimodal model from Google optimized for high-throughput, low-latency workloads. Served via LangMart.

Streaming Vision
O

LangMart: Google: Gemini 2.5 Flash Lite Preview 09-2025

Openrouter

Google: Gemini 2.5 Flash Lite Preview 09-2025 is a September 2025 preview snapshot of Gemini 2.5 Flash Lite, Google's lightweight, low-cost multimodal model. Served via LangMart.

Streaming Vision
O

LangMart: Google: Gemini 2.5 Flash Preview 09-2025

Openrouter

Google: Gemini 2.5 Flash Preview 09-2025 is a September 2025 preview snapshot of Gemini 2.5 Flash, Google's fast, cost-efficient multimodal model. Served via LangMart.

Streaming Vision
O

LangMart: Google: Gemini 2.5 Pro

Openrouter

Google: Gemini 2.5 Pro is Google's most capable Gemini 2.5 model, a multimodal reasoning model that handles text, image, audio, and video inputs with a long context window. Served via LangMart.

Streaming Vision
O

LangMart: Google: Gemini 3 Flash Preview

Openrouter

Google: Gemini 3 Flash Preview is a preview of the fast, cost-efficient member of Google's Gemini 3 multimodal model family. Served via LangMart.

Streaming Vision
O

LangMart: Google: Gemini 3 Pro Preview

Openrouter

Google: Gemini 3 Pro Preview is a preview of Google's flagship Gemini 3 multimodal reasoning model. Served via LangMart.

Streaming Vision
O

LangMart: Google: Gemma 2 27B

Openrouter

Google: Gemma 2 27B is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by LangMart, this model is optimized for its specific use case category.

Streaming Vision
O

LangMart: Google: Gemma 2 9B

Openrouter

Google: Gemma 2 9B is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by LangMart, this model is optimized for its specific use case category.

Streaming Vision
O

LangMart: Google: Gemma 3 12B

Openrouter

Google: Gemma 3 12B is an open-weight multimodal language model from Google that handles text and image inputs. Served via LangMart.

Streaming Vision
O

LangMart: Google: Gemma 3 27B

Openrouter

Google: Gemma 3 27B is the largest open-weight multimodal language model in Google's Gemma 3 family, handling text and image inputs. Served via LangMart.

Streaming Vision
O

LangMart: Google: Gemma 3 4B

Openrouter

Google: Gemma 3 4B is a compact open-weight multimodal language model from Google that handles text and image inputs. Served via LangMart.

Streaming Vision
O

LangMart: Google: Gemma 3n 4B

Openrouter

Google: Gemma 3n 4B is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by LangMart, this model is optimized for its specific use case category.

Streaming Vision
O

LangMart: Google: Nano Banana Pro (Gemini 3 Pro Image Preview)

Openrouter

Google: Nano Banana Pro (Gemini 3 Pro Image Preview) is an image generation model from Google, built on Gemini 3 Pro, for creating and editing images from natural-language descriptions. Served via LangMart.

Streaming Vision
O

LangMart: Google/gemini 2.0 Flash Exp:free

Openrouter

Google/gemini 2.0 Flash Exp:free with optimized speed for rapid response generation. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Google/gemini 2.0 Flash Lite 001

Openrouter

Lightweight variant of Google/gemini 2.0 Flash Lite 001 optimized for reduced latency and cost. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Google/gemini 2.5 Pro Preview

Openrouter

Preview version of Google/gemini 2.5 Pro Preview providing early access to experimental features. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Google/gemini 2.5 Pro Preview 05 06

Openrouter

Preview version of Google/gemini 2.5 Pro Preview 05 06 providing early access to experimental features. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Google/gemma 3 12b It:free

Openrouter

Free tier version of Google/gemma 3 12b It:free. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Google/gemma 3 27b It:free

Openrouter

Free tier version of Google/gemma 3 27b It:free. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Google/gemma 3 4b It:free

Openrouter

Free tier version of Google/gemma 3 4b It:free. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Google/gemma 3n E2b It:free

Openrouter

Free tier version of Google/gemma 3n E2b It:free. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Google/gemma 3n E4b It:free

Openrouter

Free tier version of Google/gemma 3n E4b It:free. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Ibm Granite/granite 4.0 H Micro

Openrouter

Ibm Granite/granite 4.0 H Micro is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image...

Streaming Vision
O

LangMart: Inception: Mercury

Openrouter

Mercury is the first diffusion large language model (dLLM). Applying a breakthrough discrete diffusion approach, the model runs 5-10x faster than even speed optimized models like GPT-4.1 Nano and Clau...

O

LangMart: Inception: Mercury Coder

Openrouter

Mercury Coder is the first diffusion large language model (dLLM). Applying a breakthrough discrete diffusion approach, the model runs 5-10x faster than even speed optimized models like Claude 3.5 Haik...

O

LangMart: Inflection: Inflection 3 Pi

Openrouter

Inflection 3 Pi powers Inflection's [Pi](https://pi.ai) chatbot, including backstory, emotional intelligence, productivity, and safety. It has access to recent news, and excels in scenarios like custo...

O

LangMart: Inflection: Inflection 3 Productivity

Openrouter

Inflection 3 Productivity is optimized for following instructions. It is better for tasks requiring JSON output or precise adherence to provided guidelines. It has access to recent news.

O

LangMart: Kwaipilot/kat Coder Pro:free

Openrouter

Free tier version of Kwaipilot/kat Coder Pro:free. This model supports multimodal capabilities including vision and image understanding. The model is optimized for code generation and programming task...

Streaming Vision
O

LangMart: Liquid/lfm 2.2 6b

Openrouter

Liquid/lfm 2.2 6b is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image understanding...

Streaming Vision
O

LangMart: LiquidAI/LFM2-8B-A1B

Openrouter

Model created via inbox interface

O

LangMart: Llama Guard 3 8B

Openrouter

Llama Guard 3 is a Llama-3.1-8B pretrained model, fine-tuned for content safety classification. Similar to previous versions, it can be used to classify content in both LLM inputs (prompt classificati...
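
A minimal classification sketch under stated assumptions: the base URL and the "meta-llama/llama-guard-3-8b" slug are guesses at how the model might be exposed through an OpenAI-compatible endpoint, and the "safe"/"unsafe" reply format reflects Llama Guard's usual output convention.

```python
# Illustrative sketch: using Llama Guard 3 8B to screen a user prompt.
import os

import requests

user_message = "How do I pick a strong passphrase?"

resp = requests.post(
    "https://api.langmart.ai/v1/chat/completions",  # assumed base URL
    headers={"Authorization": f"Bearer {os.environ['LANGMART_API_KEY']}"},
    json={
        "model": "meta-llama/llama-guard-3-8b",  # assumed model slug
        "messages": [{"role": "user", "content": user_message}],
    },
    timeout=60,
)
resp.raise_for_status()

verdict = resp.json()["choices"][0]["message"]["content"].strip()
# Llama Guard replies with "safe", or "unsafe" followed by category codes (e.g. S2).
print(verdict)
```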

O

LangMart: Magnum v4 72B

Openrouter

This is a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet (https://langmart.ai/model-docs.5-sonnet) and Opus (https://langmart.ai/model-docs).

O

LangMart: Mancer: Weaver (alpha)

Openrouter

An attempt to recreate Claude-style verbosity, but don't expect the same level of coherence or memory. Meant for use in roleplay/narrative situations.

O

LangMart: Meituan: LongCat Flash Chat

Openrouter

LongCat-Flash-Chat is a large-scale Mixture-of-Experts (MoE) model with 560B total parameters, of which 18.6B–31.3B (≈27B on average) are dynamically activated per input. It introduces a shortcut-conn...

O

LangMart: Meta Llama/llama 3.1 405b

Openrouter

Meta Llama/llama 3.1 405b is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image under...

Streaming Vision
O

LangMart: Meta Llama/llama 3.1 405b Instruct

Openrouter

Meta Llama/llama 3.1 405b Instruct is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and im...

Streaming Vision
O

LangMart: Meta Llama/llama 3.1 405b Instruct:free

Openrouter

Free tier version of Meta Llama/llama 3.1 405b Instruct:free. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Meta Llama/llama 3.1 70b Instruct

Openrouter

Meta Llama/llama 3.1 70b Instruct is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and ima...

Streaming Vision
O

LangMart: Meta Llama/llama 3.1 8b Instruct

Openrouter

Meta Llama/llama 3.1 8b Instruct is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and imag...

Streaming Vision
O

LangMart: Meta Llama/llama 3.2 11b Vision Instruct

Openrouter

Meta Llama/llama 3.2 11b Vision Instruct with vision capabilities for processing images and visual content. This model supports multimodal capabilities including vision and image understanding. It fea...

Vision Streaming
O

LangMart: Meta Llama/llama 3.2 1b Instruct

Openrouter

Meta Llama/llama 3.2 1b Instruct is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and imag...

Streaming Vision
O

LangMart: Meta Llama/llama 3.2 3b Instruct

Openrouter

Meta Llama/llama 3.2 3b Instruct is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and imag...

Streaming Vision
O

LangMart: Meta Llama/llama 3.2 3b Instruct:free

Openrouter

Free tier version of Meta Llama/llama 3.2 3b Instruct:free. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Meta Llama/llama 3.2 90b Vision Instruct

Openrouter

Meta Llama/llama 3.2 90b Vision Instruct with vision capabilities for processing images and visual content. This model supports multimodal capabilities including vision and image understanding. It fea...

Vision Streaming
O

LangMart: Meta Llama/llama 3.3 70b Instruct

Openrouter

Meta Llama/llama 3.3 70b Instruct is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and ima...

Streaming Vision
O

LangMart: Meta Llama/llama 3.3 70b Instruct:free

Openrouter

Free tier version of Meta Llama/llama 3.3 70b Instruct:free. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Meta: Llama 3 70B Instruct

Openrouter

Meta's latest class of model (Llama 3) launched with a variety of sizes & flavors. This 70B instruct-tuned version was optimized for high-quality dialogue use cases.

O

LangMart: Meta: Llama 3 8B Instruct

Openrouter

Meta's latest class of model (Llama 3) launched with a variety of sizes & flavors. This 8B instruct-tuned version was optimized for high-quality dialogue use cases.

O

LangMart: Meta: Llama 4 Maverick

Openrouter

Llama 4 Maverick 17B Instruct (128E) is a high-capacity multimodal language model from Meta, built on a mixture-of-experts (MoE) architecture with 128 experts and 17 billion active parameters per forw...

Vision
O

LangMart: Meta: Llama 4 Scout

Openrouter

Llama 4 Scout 17B Instruct (16E) is a mixture-of-experts (MoE) language model developed by Meta, activating 17 billion parameters out of a total of 109B. It supports native multimodal input (text and ...

Vision
O

LangMart: Meta: Llama Guard 4 12B

Openrouter

Llama Guard 4 is a Llama 4 Scout-derived multimodal pretrained model, fine-tuned for content safety classification. Similar to previous versions, it can be used to classify content in both LLM inputs ...

Vision
O

LangMart: Meta: LlamaGuard 2 8B

Openrouter

This safeguard model has 8B parameters and is based on the Llama 3 family. Just like its predecessor, [LlamaGuard 1](https://huggingface.co/meta-llama/LlamaGuard-7b), it can do both prompt and response...

O

LangMart: Microsoft: Phi 4

Openrouter

[Microsoft Research](/microsoft) Phi-4 is designed to perform well in complex reasoning tasks and can operate efficiently in situations with limited memory or where quick responses are needed.

O

LangMart: Microsoft: Phi 4 Multimodal Instruct

Openrouter

Phi-4 Multimodal Instruct is a versatile 5.6B parameter foundation model that combines advanced reasoning and instruction-following capabilities across both text and visual inputs, providing accurate ...

Vision
O

LangMart: Microsoft: Phi 4 Reasoning Plus

Openrouter

Phi-4-reasoning-plus is an enhanced 14B parameter model from Microsoft, fine-tuned from Phi-4 with additional reinforcement learning to boost accuracy on math, science, and code reasoning tasks. It us...

O

LangMart: Microsoft: Phi-3 Medium 128K Instruct

Openrouter

Phi-3 128K Medium is a powerful 14-billion parameter model designed for advanced language understanding, reasoning, and instruction following. Optimized through supervised fine-tuning and preference a...

O

LangMart: Microsoft: Phi-3 Mini 128K Instruct

Openrouter

Phi-3 Mini is a powerful 3.8B parameter model designed for advanced language understanding, reasoning, and instruction following. Optimized through supervised fine-tuning and preference adjustments, i...

O

LangMart: Microsoft/phi 3.5 Mini 128k Instruct

Openrouter

Microsoft/phi 3.5 Mini 128k Instruct is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and ...

Streaming Vision
O

LangMart: MiniMax: MiniMax M1

Openrouter

MiniMax-M1 is a large-scale, open-weight reasoning model designed for extended context and high-efficiency inference. It leverages a hybrid Mixture-of-Experts (MoE) architecture paired with a custom "...

O

LangMart: MiniMax: MiniMax M2

Openrouter

MiniMax-M2 is a compact, high-efficiency large language model optimized for end-to-end coding and agentic workflows. With 10 billion activated parameters (230 billion total), it delivers near-frontier...

O

LangMart: MiniMax: MiniMax-01

Openrouter

MiniMax-01 combines MiniMax-Text-01 for text generation and MiniMax-VL-01 for image understanding. It has 456 billion parameters, with 45.9 billion parameters activated per inference, and can han...

Vision
O

LangMart: Minimax/minimax M2.1

Openrouter

Minimax/minimax M2.1 is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image understand...

Streaming Vision
O

LangMart: Mistral Large

Openrouter

This is Mistral AI's flagship model, Mistral Large 2 (version `mistral-large-2407`). It's a proprietary weights-available model and excels at reasoning, code, JSON, chat, and more. Read the launch ann...

O

LangMart: Mistral Large 2407

Openrouter

This is Mistral AI's flagship model, Mistral Large 2 (version mistral-large-2407). It's a proprietary weights-available model and excels at reasoning, code, JSON, chat, and more. Read the launch annou...

O

LangMart: Mistral Large 2411

Openrouter

Mistral Large 2 2411 is an update of [Mistral Large 2](/mistralai/mistral-large) released together with [Pixtral Large 2411](/mistralai/pixtral-large-2411)

O

LangMart: Mistral Tiny

Openrouter

Note: This model is being deprecated. Recommended replacement is the newer [Ministral 8B](/mistral/ministral-8b)

O

LangMart: Mistral: Codestral 2508

Openrouter

Mistral's cutting-edge language model for coding released end of July 2025. Codestral specializes in low-latency, high-frequency tasks such as fill-in-the-middle (FIM), code correction and test genera...

O

LangMart: Mistral: Devstral 2 2512

Openrouter

Devstral 2 is a state-of-the-art open-source model by Mistral AI specializing in agentic coding. It is a 123B-parameter dense transformer model supporting a 256K context window.

O

LangMart: Mistral: Devstral Medium

Openrouter

Devstral Medium is a high-performance code generation and agentic reasoning model developed jointly by Mistral AI and All Hands AI. Positioned as a step up from Devstral Small, it achieves 61.6% on SW...

O

LangMart: Mistral: Devstral Small 1.1

Openrouter

Devstral Small 1.1 is a 24B parameter open-weight language model for software engineering agents, developed by Mistral AI in collaboration with All Hands AI. Finetuned from Mistral Small 3.1 and relea...

O

LangMart: Mistral: Devstral Small 2505

Openrouter

Devstral-Small-2505 is a 24B parameter agentic LLM fine-tuned from Mistral-Small-3.1, jointly developed by Mistral AI and All Hands AI for advanced software engineering tasks. It is optimized for code...

Vision
O

LangMart: Mistral: Ministral 3 14B 2512

Openrouter

The largest model in the Ministral 3 family, Ministral 3 14B offers frontier capabilities and performance comparable to its larger Mistral Small 3.2 24B counterpart. A powerful and efficient language ...

Vision
O

LangMart: Mistral: Ministral 3 3B 2512

Openrouter

The smallest model in the Ministral 3 family, Ministral 3 3B is a powerful, efficient tiny language model with vision capabilities.

Vision
O

LangMart: Mistral: Ministral 3 8B 2512

Openrouter

A balanced model in the Ministral 3 family, Ministral 3 8B is a powerful, efficient tiny language model with vision capabilities.

Vision
O

LangMart: Mistral: Ministral 3B

Openrouter

Ministral 3B is a 3B parameter model optimized for on-device and edge computing. It excels in knowledge, commonsense reasoning, and function-calling, outperforming larger models like Mistral 7B on mos...

O

LangMart: Mistral: Ministral 8B

Openrouter

Ministral 8B is an 8B parameter model featuring a unique interleaved sliding-window attention pattern for faster, memory-efficient inference. Designed for edge use cases, it supports up to 128k contex...

O

LangMart: Mistral: Mistral 7B Instruct

Openrouter

A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length.

O

LangMart: Mistral: Mistral Large 3 2512

Openrouter

Mistral Large 3 2512 is Mistral’s most capable model to date, featuring a sparse mixture-of-experts architecture with 41B active parameters (675B total), and released under the Apache 2.0 license.

Vision
O

LangMart: Mistral: Mistral Medium 3

Openrouter

Mistral Medium 3 is a high-performance enterprise-grade language model designed to deliver frontier-level capabilities at significantly reduced operational cost. It balances state-of-the-art reasoning...

Vision
O

LangMart: Mistral: Mistral Nemo

Openrouter

A 12B parameter model with a 128k token context length built by Mistral in collaboration with NVIDIA.

O

LangMart: Mistral: Mistral Small 3

Openrouter

Mistral Small 3 is a 24B-parameter language model optimized for low-latency performance across common AI tasks. Released under the Apache 2.0 license, it features both pre-trained and instruction-tune...

O

LangMart: Mistral: Mistral Small Creative

Openrouter

Mistral Small Creative is an experimental small model designed for creative writing, narrative generation, roleplay and character-driven dialogue, general-purpose instruction following, and conversati...

O

LangMart: Mistral: Mixtral 8x22B Instruct

Openrouter

Mistral's official instruct fine-tuned version of [Mixtral 8x22B](/models/mistralai/mixtral-8x22b). It uses 39B active parameters out of 141B, offering unparalleled cost efficiency for its size. Its s...

O

LangMart: Mistral: Mixtral 8x7B Instruct

Openrouter

Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture of Experts, by Mistral AI, for chat and instruction use. Incorporates 8 experts (feed-forward networks) for a total of 47 billion parame...

O

LangMart: Mistral: Pixtral 12B

Openrouter

The first multi-modal, text+image-to-text model from Mistral AI. Its weights were launched via torrent: https://x.com/mistralai/status/1833758285167722836.

Vision
O

LangMart: Mistral: Pixtral Large 2411

Openrouter

Pixtral Large is a 124B parameter, open-weight, multimodal model built on top of [Mistral Large 2](/mistralai/mistral-large-2411). The model is able to understand documents, charts and natural images....

Vision
O

LangMart: Mistral: Saba

Openrouter

Mistral Saba is a 24B-parameter language model specifically designed for the Middle East and South Asia, delivering accurate and contextually relevant responses while maintaining efficient performance...

O

LangMart: Mistral: Voxtral Small 24B 2507

Openrouter

Voxtral Small is an enhancement of Mistral Small 3, incorporating state-of-the-art audio input capabilities while retaining best-in-class text performance. It excels at speech transcription, translati...

O

LangMart: Mistralai/devstral 2512:free

Openrouter

Free tier version of Mistralai/devstral 2512:free. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Mistralai/mistral 7b Instruct V0.1

Openrouter

Mistralai/mistral 7b Instruct V0.1 is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and im...

Streaming Vision
O

LangMart: Mistralai/mistral 7b Instruct V0.2

Openrouter

Mistralai/mistral 7b Instruct V0.2 is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and im...

Streaming Vision
O

LangMart: Mistralai/mistral 7b Instruct V0.3

Openrouter

Mistralai/mistral 7b Instruct V0.3 is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and im...

Streaming Vision
O

LangMart: Mistralai/mistral 7b Instruct:free

Openrouter

Free tier version of Mistralai/mistral 7b Instruct:free. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Mistralai/mistral Medium 3.1

Openrouter

Mistralai/mistral Medium 3.1 is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image un...

Streaming Vision
O

LangMart: Mistralai/mistral Small 3.1 24b Instruct

Openrouter

Mistralai/mistral Small 3.1 24b Instruct is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision ...

Streaming Vision
O

LangMart: Mistralai/mistral Small 3.1 24b Instruct:free

Openrouter

Free tier version of Mistralai/mistral Small 3.1 24b Instruct:free. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Mistralai/mistral Small 3.2 24b Instruct

Openrouter

Mistralai/mistral Small 3.2 24b Instruct is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision ...

Streaming Vision
O

LangMart: MoonshotAI: Kimi Dev 72B

Openrouter

Kimi-Dev-72B is an open-source large language model fine-tuned for software engineering and issue resolution tasks. Based on Qwen2.5-72B, it is optimized using large-scale reinforcement learning that ...

O

LangMart: MoonshotAI: Kimi K2 0711

Openrouter

Kimi K2 Instruct is a large-scale Mixture-of-Experts (MoE) language model developed by Moonshot AI, featuring 1 trillion total parameters with 32 billion active per forward pass. It is optimized for a...

O

LangMart: MoonshotAI: Kimi K2 0905

Openrouter

Kimi K2 0905 is the September update of [Kimi K2 0711](moonshotai/kimi-k2). It is a large-scale Mixture-of-Experts (MoE) language model developed by Moonshot AI, featuring 1 trillion total parameters ...

O

LangMart: MoonshotAI: Kimi K2 Thinking

Openrouter

Kimi K2 Thinking is Moonshot AI’s most advanced open reasoning model to date, extending the K2 series into agentic, long-horizon reasoning. Built on the trillion-parameter Mixture-of-Experts (MoE) arc...

O

LangMart: Moonshotai/kimi K2 0905:exacto

Openrouter

Moonshotai/kimi K2 0905:exacto is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image ...

Streaming Vision
O

LangMart: Moonshotai/kimi K2:free

Openrouter

Free tier version of Moonshotai/kimi K2:free. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Morph: Morph V3 Fast

Openrouter

Morph's fastest apply model for code edits. ~10,500 tokens/sec with 96% accuracy for rapid code transformations.

O

LangMart: Morph: Morph V3 Large

Openrouter

Morph's high-accuracy apply model for complex code edits. ~4,500 tokens/sec with 98% accuracy for precise code transformations.

O

LangMart: MythoMax 13B

Openrouter

One of the highest performing and most popular fine-tunes of Llama 2 13B, with rich descriptions and roleplay. #merge

O

LangMart: Neversleep/llama 3.1 Lumimaid 8b

Openrouter

Neversleep/llama 3.1 Lumimaid 8b is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and imag...

Streaming Vision
O

LangMart: Nex Agi/deepseek V3.1 Nex N1:free

Openrouter

Free tier version of Nex Agi/deepseek V3.1 Nex N1:free. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Noromaid 20B

Openrouter

A collab between IkariDev and Undi. This merge is suitable for RP, ERP, and general knowledge.

O

LangMart: Nous: DeepHermes 3 Mistral 24B Preview

Openrouter

DeepHermes 3 (Mistral 24B Preview) is an instruction-tuned language model by Nous Research based on Mistral-Small-24B, designed for chat, function calling, and advanced multi-turn reasoning. It introd...

O

LangMart: Nous: Hermes 4 405B

Openrouter

Hermes 4 is a large-scale reasoning model built on Meta-Llama-3.1-405B and released by Nous Research. It introduces a hybrid reasoning mode, where the model can choose to deliberate internally with

O

LangMart: Nous: Hermes 4 70B

Openrouter

Hermes 4 70B is a hybrid reasoning model from Nous Research, built on Meta-Llama-3.1-70B. It introduces the same hybrid mode as the larger 405B release, allowing the model to either respond directly o...

O

LangMart: NousResearch: Hermes 2 Pro - Llama-3 8B

Openrouter

Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mod...

O

LangMart: Nousresearch/hermes 3 Llama 3.1 405b

Openrouter

Nousresearch/hermes 3 Llama 3.1 405b is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and ...

Streaming Vision
O

LangMart: Nousresearch/hermes 3 Llama 3.1 405b:free

Openrouter

Free tier version of Nousresearch/hermes 3 Llama 3.1 405b:free. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Nousresearch/hermes 3 Llama 3.1 70b

Openrouter

Nousresearch/hermes 3 Llama 3.1 70b is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and i...

Streaming Vision
O

LangMart: NVIDIA: Nemotron 3 Nano 30B A3B

Openrouter

NVIDIA Nemotron 3 Nano 30B A3B is a small MoE language model that prioritizes compute efficiency and accuracy, built for developers creating specialized agentic AI systems.

O

LangMart: NVIDIA: Nemotron Nano 12B 2 VL

Openrouter

NVIDIA Nemotron Nano 2 VL is a 12-billion-parameter open multimodal reasoning model designed for video understanding and document intelligence. It introduces a hybrid Transformer-Mamba architecture, c...

Vision
O

LangMart: NVIDIA: Nemotron Nano 9B V2

Openrouter

NVIDIA-Nemotron-Nano-9B-v2 is a large language model (LLM) trained from scratch by NVIDIA, and designed as a unified model for both reasoning and non-reasoning tasks. It responds to user queries and t...

O

LangMart: Nvidia/llama 3.1 Nemotron 70b Instruct

Openrouter

Nvidia/llama 3.1 Nemotron 70b Instruct is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision an...

Streaming Vision
O

LangMart: Nvidia/llama 3.1 Nemotron Ultra 253b V1

Openrouter

Nvidia/llama 3.1 Nemotron Ultra 253b V1 is a capable language model from LangMart for general-purpose text generation and analysis tasks.

Streaming Vision
O

LangMart: Nvidia/llama 3.3 Nemotron Super 49b V1.5

Openrouter

Nvidia/llama 3.3 Nemotron Super 49b V1.5 is a capable language model from LangMart for general-purpose text generation and analysis tasks.

Streaming Vision
O

LangMart: Nvidia/nemotron 3 Nano 30b A3b:free

Openrouter

Free tier version of Nvidia/nemotron 3 Nano 30b A3b:free. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Nvidia/nemotron Nano 12b V2 Vl:free

Openrouter

Free tier version of Nvidia/nemotron Nano 12b V2 Vl:free. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Nvidia/nemotron Nano 9b V2:free

Openrouter

Free tier version of Nvidia/nemotron Nano 9b V2:free. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: OpenAI: ChatGPT-4o

Openrouter

OpenAI: ChatGPT-4o is the GPT-4o variant used in ChatGPT, tuned for natural multi-turn conversation with text and image inputs. Developed by OpenAI, it is continuously updated to match the model currently deployed in ChatGPT.

Streaming Vision
O

LangMart: OpenAI: Codex Mini

Openrouter

codex-mini-latest is a fine-tuned version of o4-mini specifically for use in Codex CLI. For direct use in the API, we recommend starting with gpt-4.1.

Vision
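
As a rough sketch of what "direct use in the API" can look like through an OpenAI-compatible chat endpoint; the model slug and base URL below are assumptions for illustration, not confirmed identifiers.

```python
# Minimal sketch: calling a Codex-style model through an OpenAI-compatible
# chat completions endpoint. The model slug and base URL are assumed values.
from openai import OpenAI

client = OpenAI(base_url="https://example-gateway/api/v1", api_key="YOUR_KEY")

response = client.chat.completions.create(
    model="openai/codex-mini",  # assumed catalog slug
    messages=[
        {"role": "system", "content": "You are a coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a linked list."},
    ],
)
print(response.choices[0].message.content)
```
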
O

LangMart: OpenAI: GPT-3.5 Turbo

Openrouter

OpenAI: GPT-3.5 Turbo is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by LangMart, this model is optimized for its specific use case category.

Streaming Vision
O

LangMart: OpenAI: GPT-3.5 Turbo 16k

Openrouter

OpenAI: GPT-3.5 Turbo 16k is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by LangMart, this model is optimized for its specific use case category.

Streaming Vision
O

LangMart: OpenAI: GPT-3.5 Turbo Instruct

Openrouter

OpenAI: GPT-3.5 Turbo Instruct is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by LangMart, this model is optimized for its specific use case category.

Streaming Vision
O

LangMart: OpenAI: GPT-4

Openrouter

OpenAI: GPT-4 is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by LangMart, this model is optimized for its specific use case category.

Streaming Vision
O

LangMart: OpenAI: GPT-4 (older v0314)

Openrouter

GPT-4-0314 is the first version of GPT-4 released, with a context length of 8,192 tokens, and was supported until June 14. Training data: up to Sep 2021.

O

LangMart: OpenAI: GPT-4 Turbo

Openrouter

OpenAI: GPT-4 Turbo is OpenAI's GPT-4-class model with a 128k context window, supporting text and image inputs at lower cost and latency than the original GPT-4.

Streaming Vision
O

LangMart: OpenAI: GPT-4 Turbo (older v1106)

Openrouter

OpenAI: GPT-4 Turbo (older v1106) is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by LangMart, this model is optimized for its specific use case category...

Streaming Vision
O

LangMart: OpenAI: GPT-4 Turbo Preview

Openrouter

OpenAI: GPT-4 Turbo Preview is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by LangMart, this model is optimized for its specific use case category.

Streaming Vision
O

LangMart: OpenAI: GPT-4.1

Openrouter

OpenAI: GPT-4.1 is the flagship model of OpenAI's GPT-4.1 family, offering strong coding and instruction-following performance with long-context text and image inputs.

Streaming Vision
O

LangMart: OpenAI: GPT-4.1 Mini

Openrouter

OpenAI: GPT-4.1 Mini is a smaller, faster member of OpenAI's GPT-4.1 family, trading some capability for lower cost while retaining long-context and image-input support.

Streaming Vision
O

LangMart: OpenAI: GPT-4.1 Nano

Openrouter

OpenAI: GPT-4.1 Nano is the smallest and fastest member of OpenAI's GPT-4.1 family, optimized for low-latency, low-cost workloads such as classification and autocompletion.

Streaming Vision
O

LangMart: OpenAI: GPT-4o

Openrouter

OpenAI: GPT-4o is OpenAI's multimodal flagship model, accepting text and image inputs and generating text with lower latency and cost than GPT-4 Turbo.

Streaming Vision
O

LangMart: OpenAI: GPT-4o (2024-05-13)

Openrouter

OpenAI: GPT-4o (2024-05-13) is the original dated snapshot of GPT-4o from OpenAI, pinned for reproducible behavior. It accepts text and image inputs and generates text outputs.

Streaming Vision
O

LangMart: OpenAI: GPT-4o (2024-08-06)

Openrouter

OpenAI: GPT-4o (2024-08-06) is a dated snapshot of GPT-4o from OpenAI, pinned for reproducible behavior and adding support for structured outputs.

Streaming Vision
O

LangMart: OpenAI: GPT-4o (2024-11-20)

Openrouter

OpenAI: GPT-4o (2024-11-20) is a later dated snapshot of GPT-4o from OpenAI, pinned for reproducible behavior, with improvements to creative writing and file-based tasks.

Streaming Vision
O

LangMart: OpenAI: GPT-4o Audio

Openrouter

OpenAI: GPT-4o Audio is a GPT-4o variant from OpenAI that accepts and generates audio in addition to text, designed for voice-driven, multi-turn interactions.

Streaming Vision
O

LangMart: OpenAI: GPT-4o Search Preview

Openrouter

OpenAI: GPT-4o Search Preview is a GPT-4o variant from OpenAI specialized for web search, returning answers grounded in and cited from current web results.

Streaming Vision
O

LangMart: OpenAI: GPT-4o-mini

Openrouter

OpenAI: GPT-4o-mini is OpenAI's small, cost-efficient multimodal model, supporting text and image inputs and intended as a faster, cheaper successor to GPT-3.5 Turbo for everyday tasks.

Streaming Vision
O

LangMart: OpenAI: GPT-4o-mini (2024-07-18)

Openrouter

OpenAI: GPT-4o-mini (2024-07-18) is the dated snapshot of GPT-4o-mini from OpenAI, pinned for reproducible behavior, with text and image input support at very low cost.

Streaming Vision
O

LangMart: OpenAI: GPT-4o-mini Search Preview

Openrouter

OpenAI: GPT-4o-mini Search Preview is a smaller, lower-cost GPT-4o-mini variant from OpenAI specialized for web search, returning answers grounded in current web results.

Streaming Vision
O

LangMart: OpenAI: GPT-5

Openrouter

OpenAI: GPT-5 is OpenAI's flagship reasoning model, combining strong general-purpose capabilities with adjustable reasoning effort for complex, multi-step tasks across text and images.

Streaming Vision
O

LangMart: OpenAI: GPT-5 Chat

Openrouter

GPT-5 Chat is designed for advanced, natural, multimodal, and context-aware conversations for enterprise applications.

Vision
O

LangMart: OpenAI: GPT-5 Codex

Openrouter

OpenAI: GPT-5 Codex is a GPT-5 variant from OpenAI optimized for agentic software engineering, tuned for the long-running coding workflows used by Codex and similar agents.

Streaming Vision
O

LangMart: OpenAI: GPT-5 Image

Openrouter

[GPT-5](https://langmart.ai/model-docs) Image combines OpenAI's GPT-5 model with state-of-the-art image generation capabilities. It offers major improvements in reasoning, code quality, and user exper...

Vision
O

LangMart: OpenAI: GPT-5 Image Mini

Openrouter

GPT-5 Image Mini combines OpenAI's advanced language capabilities, powered by [GPT-5 Mini](https://langmart.ai/model-docs), with GPT Image 1 Mini for efficient image generation. This natively multimod...

Vision
O

LangMart: OpenAI: GPT-5 Mini

Openrouter

OpenAI: GPT-5 Mini is a smaller, faster member of OpenAI's GPT-5 family, offering lower cost while retaining reasoning capability for well-defined tasks.

Streaming Vision
O

LangMart: OpenAI: GPT-5 Nano

Openrouter

OpenAI: GPT-5 Nano is the smallest member of OpenAI's GPT-5 family, optimized for high-throughput, low-latency workloads such as summarization and classification.

Streaming Vision
O

LangMart: OpenAI: GPT-5 Pro

Openrouter

OpenAI: GPT-5 Pro is the highest-capability member of OpenAI's GPT-5 family, spending additional reasoning compute to produce more reliable answers on the hardest tasks.

Streaming Vision
O

LangMart: OpenAI: GPT-5.1

Openrouter

OpenAI: GPT-5.1 is an updated release of GPT-5 from OpenAI, with improved instruction following and adaptive reasoning for general-purpose and agentic use.

Streaming Vision
O

LangMart: OpenAI: GPT-5.1-Codex

Openrouter

OpenAI: GPT-5.1-Codex is a GPT-5.1 variant from OpenAI optimized for agentic coding workflows such as those used by Codex.

Streaming Vision
O

LangMart: OpenAI: GPT-5.1-Codex-Max

Openrouter

OpenAI: GPT-5.1-Codex-Max is the highest-capability coding model in OpenAI's GPT-5.1 Codex line, aimed at long-running, large-scale agentic software engineering tasks.

Streaming Vision
O

LangMart: OpenAI: GPT-5.1-Codex-Mini

Openrouter

OpenAI: GPT-5.1-Codex-Mini is a smaller, lower-cost coding model in OpenAI's GPT-5.1 Codex line, intended for faster agentic coding tasks.

Streaming Vision
O

LangMart: OpenAI: GPT-5.2

Openrouter

OpenAI: GPT-5.2 is a successor release in OpenAI's GPT-5 series, serving general conversational, coding, and reasoning workloads.

Streaming Vision
O

LangMart: OpenAI: GPT-5.2 Pro

Openrouter

OpenAI: GPT-5.2 Pro is the extended-compute variant of GPT-5.2 from OpenAI, intended for the most demanding reasoning tasks.

Streaming Vision
O

LangMart: OpenAI: gpt-oss-120b

Openrouter

gpt-oss-120b is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI designed for high-reasoning, agentic, and general-purpose production use cases. It activates 5.1B par...

O

LangMart: OpenAI: gpt-oss-20b

Openrouter

gpt-oss-20b is an open-weight 21B parameter model released by OpenAI under the Apache 2.0 license. It uses a Mixture-of-Experts (MoE) architecture with 3.6B active parameters per forward pass, optimiz...

O

LangMart: OpenAI: gpt-oss-safeguard-20b

Openrouter

gpt-oss-safeguard-20b is a safety reasoning model from OpenAI built upon gpt-oss-20b. This open-weight, 21B-parameter Mixture-of-Experts (MoE) model offers lower latency for safety tasks like content ...

O

LangMart: OpenAI: o1

Openrouter

OpenAI: o1 is a reasoning model from OpenAI that thinks before it answers, using an internal chain of thought to improve performance on math, science, and coding problems.

Streaming Vision
O

LangMart: OpenAI: o1-pro

Openrouter

OpenAI: o1-pro is a version of o1 from OpenAI that applies additional reasoning compute, providing more reliable answers on the most difficult problems.

Streaming Vision
O

LangMart: OpenAI: o3

Openrouter

OpenAI: o3 is a frontier reasoning model from OpenAI for math, science, coding, and visual reasoning tasks, with support for tool use.

Streaming Vision
O

LangMart: OpenAI: o3 Deep Research

Openrouter

OpenAI: o3 Deep Research is an agentic research model from OpenAI based on o3, designed to autonomously search, read, and synthesize sources into detailed, cited reports.

Streaming Vision
O

LangMart: OpenAI: o3 Mini

Openrouter

OpenAI: o3 Mini is a fast, cost-efficient reasoning model from OpenAI, optimized for STEM and coding tasks and offering selectable reasoning effort.

Streaming Vision
O

LangMart: OpenAI: o3 Mini High

Openrouter

OpenAI o3-mini-high is the same model as [o3-mini](/openai/o3-mini) with reasoning_effort set to high.

Vision
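
As a sketch of what "reasoning_effort set to high" amounts to when calling an OpenAI-compatible chat endpoint; the slug, base URL, and whether a given gateway forwards this parameter are assumptions.

```python
# Minimal sketch: requesting high reasoning effort from an o3-mini-class model.
# The slug, base URL, and gateway support for `reasoning_effort` are assumptions.
from openai import OpenAI

client = OpenAI(base_url="https://example-gateway/api/v1", api_key="YOUR_KEY")

response = client.chat.completions.create(
    model="openai/o3-mini",       # assumed slug; the "-high" variant presets this flag
    reasoning_effort="high",      # typically low / medium / high on reasoning models
    messages=[{"role": "user", "content": "Prove that the sum of two even numbers is even."}],
)
print(response.choices[0].message.content)
```
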
O

LangMart: OpenAI: o3 Pro

Openrouter

OpenAI: o3 Pro is a version of o3 from OpenAI that thinks longer by applying additional reasoning compute, providing more reliable responses on the hardest problems.

Streaming Vision
O

LangMart: OpenAI: o4 Mini

Openrouter

OpenAI: o4 Mini is a fast, cost-efficient reasoning model from OpenAI, optimized for math, coding, and visual tasks with tool use support.

Streaming Vision
O

LangMart: OpenAI: o4 Mini Deep Research

Openrouter

OpenAI: o4 Mini Deep Research is a faster, more affordable deep-research model from OpenAI based on o4-mini, designed for autonomous multi-step web research and synthesis.

Streaming Vision
O

LangMart: OpenAI: o4 Mini High

Openrouter

OpenAI o4-mini-high is the same model as [o4-mini](/openai/o4-mini) with reasoning_effort set to high.

Vision
O

LangMart: Openai/gpt 3.5 Turbo 0613

Openrouter

Openai/gpt 3.5 Turbo 0613 is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image under...

Streaming Vision
O

LangMart: Openai/gpt 4o:extended

Openrouter

Openai/gpt 4o:extended is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image understa...

Streaming Vision
O

LangMart: Openai/gpt 5.1 Chat

Openrouter

Openai/gpt 5.1 Chat is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image understandi...

Streaming Vision
O

LangMart: Openai/gpt 5.2 Chat

Openrouter

Openai/gpt 5.2 Chat is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image understandi...

Streaming Vision
O

LangMart: Openai/gpt Oss 120b:exacto

Openrouter

Openai/gpt Oss 120b:exacto is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image unde...

Streaming Vision
O

LangMart: Openai/gpt Oss 120b:free

Openrouter

Free tier version of Openai/gpt Oss 120b:free. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Openai/gpt Oss 20b:free

Openrouter

Free tier version of Openai/gpt Oss 20b:free. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: OpenGVLab: InternVL3 78B

Openrouter

The InternVL3 series is an advanced multimodal large language model (MLLM). Compared to InternVL 2.5, InternVL3 demonstrates stronger multimodal perception and reasoning capabilities.

Vision
O

LangMart: Perplexity: Sonar

Openrouter

Sonar is lightweight, affordable, fast, and simple to use — now featuring citations and the ability to customize sources. It is designed for companies seeking to integrate lightweight question-and-ans...

Vision
O

LangMart: Perplexity: Sonar Deep Research

Openrouter

Sonar Deep Research is a research-focused model designed for multi-step retrieval, synthesis, and reasoning across complex topics. It autonomously searches, reads, and evaluates sources, refining its ...

O

LangMart: Perplexity: Sonar Pro

Openrouter

Note: Sonar Pro pricing includes Perplexity search pricing. See [details here](https://docs.perplexity.ai/guides/pricing#detailed-pricing-breakdown-for-sonar-reasoning-pro-and-sonar-pro)

Vision
O

LangMart: Perplexity: Sonar Pro Search

Openrouter

Exclusively available on the LangMart API, Sonar Pro's new Pro Search mode is Perplexity's most advanced agentic search system. It is designed for deeper reasoning and analysis. Pricing is based on to...

Vision
O

LangMart: Perplexity: Sonar Reasoning

Openrouter

Sonar Reasoning is a reasoning model provided by Perplexity based on [DeepSeek R1](/deepseek/deepseek-r1).

O

LangMart: Perplexity: Sonar Reasoning Pro

Openrouter

Note: Sonar Pro pricing includes Perplexity search pricing. See [details here](https://docs.perplexity.ai/guides/pricing#detailed-pricing-breakdown-for-sonar-reasoning-pro-and-sonar-pro)

Vision
O

LangMart: Prime Intellect: INTELLECT-3

Openrouter

INTELLECT-3 is a 106B-parameter Mixture-of-Experts model (12B active) post-trained from GLM-4.5-Air-Base using supervised fine-tuning (SFT) followed by large-scale reinforcement learning (RL). It offe...

O

LangMart: Qwen: Qwen Plus 0728

Openrouter

Qwen Plus 0728, based on the Qwen3 foundation model, is a 1 million context hybrid reasoning model with a balanced performance, speed, and cost combination.

O

LangMart: Qwen: Qwen VL Max

Openrouter

Qwen VL Max is a visual understanding model with a 7,500-token context length. It excels in delivering optimal performance for a broader spectrum of complex tasks.

Vision
O

LangMart: Qwen: Qwen VL Plus

Openrouter

Qwen's Enhanced Large Visual Language Model. Significantly upgraded for detailed recognition capabilities and text recognition abilities, supporting ultra-high pixel resolutions up to millions of pixe...

Vision
O

LangMart: Qwen: Qwen-Max

Openrouter

Qwen-Max, based on Qwen2.5, provides the best inference performance among [Qwen models](/qwen), especially for complex multi-step tasks. It's a large-scale MoE model that has been pretrained on over 2...

O

LangMart: Qwen: Qwen-Plus

Openrouter

Qwen-Plus, based on the Qwen2.5 foundation model, is a 131K context model with a balanced performance, speed, and cost combination.

O

LangMart: Qwen: Qwen-Turbo

Openrouter

Qwen-Turbo, based on Qwen2.5, is a 1M context model that provides fast speed and low cost, suitable for simple tasks.

O

LangMart: Qwen: Qwen3 14B

Openrouter

Qwen3-14B is a dense 14.8B parameter causal language model from the Qwen3 series, designed for both complex reasoning and efficient dialogue. It supports seamless switching between a "thinking" mode f...

O

LangMart: Qwen: Qwen3 235B A22B

Openrouter

Qwen3-235B-A22B is a 235B parameter mixture-of-experts (MoE) model developed by Qwen, activating 22B parameters per forward pass. It supports seamless switching between a "thinking" mode for complex r...

O

LangMart: Qwen: Qwen3 235B A22B Instruct 2507

Openrouter

Qwen3-235B-A22B-Instruct-2507 is a multilingual, instruction-tuned mixture-of-experts language model based on the Qwen3-235B architecture, with 22B active parameters per forward pass. It is optimized ...

O

LangMart: Qwen: Qwen3 235B A22B Thinking 2507

Openrouter

Qwen3-235B-A22B-Thinking-2507 is a high-performance, open-weight Mixture-of-Experts (MoE) language model optimized for complex reasoning tasks. It activates 22B of its 235B parameters per forward pass...

O

LangMart: Qwen: Qwen3 30B A3B

Openrouter

Qwen3, the latest generation in the Qwen large language model series, features both dense and mixture-of-experts (MoE) architectures to excel in reasoning, multilingual support, and advanced agent tas...

O

LangMart: Qwen: Qwen3 30B A3B Instruct 2507

Openrouter

Qwen3-30B-A3B-Instruct-2507 is a 30.5B-parameter mixture-of-experts language model from Qwen, with 3.3B active parameters per inference. It operates in non-thinking mode and is designed for high-quali...

O

LangMart: Qwen: Qwen3 30B A3B Thinking 2507

Openrouter

Qwen3-30B-A3B-Thinking-2507 is a 30B parameter Mixture-of-Experts reasoning model optimized for complex tasks requiring extended multi-step thinking. The model is designed specifically for “thinking m...

O

LangMart: Qwen: Qwen3 32B

Openrouter

Qwen3-32B is a dense 32.8B parameter causal language model from the Qwen3 series, optimized for both complex reasoning and efficient dialogue. It supports seamless switching between a "thinking" mode ...

O

LangMart: Qwen: Qwen3 8B

Openrouter

Qwen3-8B is a dense 8.2B parameter causal language model from the Qwen3 series, designed for both reasoning-heavy tasks and efficient dialogue. It supports seamless switching between "thinking" mode f...

O

LangMart: Qwen: Qwen3 Coder 30B A3B Instruct

Openrouter

Qwen3-Coder-30B-A3B-Instruct is a 30.5B parameter Mixture-of-Experts (MoE) model with 128 experts (8 active per forward pass), designed for advanced code generation, repository-scale understanding, an...

O

LangMart: Qwen: Qwen3 Coder 480B A35B

Openrouter

Qwen3-Coder-480B-A35B-Instruct is a Mixture-of-Experts (MoE) code generation model developed by the Qwen team. It is optimized for agentic coding tasks such as function calling, tool use, and long-con...

O

LangMart: Qwen: Qwen3 Coder Flash

Openrouter

Qwen3 Coder Flash is Alibaba's fast and cost efficient version of their proprietary Qwen3 Coder Plus. It is a powerful coding agent model specializing in autonomous programming via tool calling and en...

O

LangMart: Qwen: Qwen3 Coder Plus

Openrouter

Qwen3 Coder Plus is Alibaba's proprietary version of the Open Source Qwen3 Coder 480B A35B. It is a powerful coding agent model specializing in autonomous programming via tool calling and environment ...

O

LangMart: Qwen: Qwen3 Max

Openrouter

Qwen3-Max is an updated release built on the Qwen3 series, offering major improvements in reasoning, instruction following, multilingual support, and long-tail knowledge coverage compared to the Janua...

O

LangMart: Qwen: Qwen3 Next 80B A3B Instruct

Openrouter

Qwen3-Next-80B-A3B-Instruct is an instruction-tuned chat model in the Qwen3-Next series optimized for fast, stable responses without “thinking” traces. It targets complex tasks across reasoning, code ...

O

LangMart: Qwen: Qwen3 Next 80B A3B Thinking

Openrouter

Qwen3-Next-80B-A3B-Thinking is a reasoning-first chat model in the Qwen3-Next line that outputs structured “thinking” traces by default. It’s designed for hard multi-step problems; math proofs, code s...

O

LangMart: Qwen: Qwen3 VL 235B A22B Instruct

Openrouter

Qwen3-VL-235B-A22B Instruct is an open-weight multimodal model that unifies strong text generation with visual understanding across images and video. The Instruct model targets general vision-language...

Vision
O

LangMart: Qwen: Qwen3 VL 235B A22B Thinking

Openrouter

Qwen3-VL-235B-A22B Thinking is a multimodal model that unifies strong text generation with visual understanding across images and video. The Thinking model is optimized for multimodal reasoning in STE...

Vision
O

LangMart: Qwen: Qwen3 VL 30B A3B Instruct

Openrouter

Qwen3-VL-30B-A3B-Instruct is a multimodal model that unifies strong text generation with visual understanding for images and videos. Its Instruct variant optimizes instruction-following for general mu...

Vision
O

LangMart: Qwen: Qwen3 VL 30B A3B Thinking

Openrouter

Qwen3-VL-30B-A3B-Thinking is a multimodal model that unifies strong text generation with visual understanding for images and videos. Its Thinking variant enhances reasoning in STEM, math, and complex ...

Vision
O

LangMart: Qwen: Qwen3 VL 32B Instruct

Openrouter

Qwen3-VL-32B-Instruct is a large-scale multimodal vision-language model designed for high-precision understanding and reasoning across text, images, and video. With 32 billion parameters, it combines ...

Vision
O

LangMart: Qwen: Qwen3 VL 8B Instruct

Openrouter

Qwen3-VL-8B-Instruct is a multimodal vision-language model from the Qwen3-VL series, built for high-fidelity understanding and reasoning across text, images, and video. It features improved multimodal...

Vision
O

LangMart: Qwen: Qwen3 VL 8B Thinking

Openrouter

Qwen3-VL-8B-Thinking is the reasoning-optimized variant of the Qwen3-VL-8B multimodal model, designed for advanced visual and textual reasoning across complex scenes, documents, and temporal sequences...

Vision
O

LangMart: Qwen: QwQ 32B

Openrouter

QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ, which is capable of thinking and reasoning, can achieve significantly enhanced performance in d...

O

LangMart: Qwen/qwen 2.5 72b Instruct

Openrouter

Qwen/qwen 2.5 72b Instruct is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image unde...

Streaming Vision
O

LangMart: Qwen/qwen 2.5 7b Instruct

Openrouter

Qwen/qwen 2.5 7b Instruct is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image under...

Streaming Vision
O

LangMart: Qwen/qwen 2.5 Coder 32b Instruct

Openrouter

Qwen/qwen 2.5 Coder 32b Instruct is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and imag...

Streaming Vision
O

LangMart: Qwen/qwen 2.5 Vl 7b Instruct

Openrouter

Qwen/qwen 2.5 Vl 7b Instruct is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image un...

Streaming Vision
O

LangMart: Qwen/qwen 2.5 Vl 7b Instruct:free

Openrouter

Free tier version of Qwen/qwen 2.5 Vl 7b Instruct:free. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Qwen/qwen Plus 2025 07 28:thinking

Openrouter

Qwen/qwen Plus 2025 07 28:thinking with extended thinking capabilities. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Qwen/qwen2.5 Coder 7b Instruct

Openrouter

Qwen/qwen2.5 Coder 7b Instruct is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image ...

Streaming Vision
O

LangMart: Qwen/qwen2.5 Vl 32b Instruct

Openrouter

Qwen/qwen2.5 Vl 32b Instruct is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image un...

Streaming Vision
O

LangMart: Qwen/qwen2.5 Vl 72b Instruct

Openrouter

Qwen/qwen2.5 Vl 72b Instruct is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image un...

Streaming Vision
O

LangMart: Qwen/qwen3 4b:free

Openrouter

Free tier version of Qwen/qwen3 4b:free. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Qwen/qwen3 Coder:exacto

Openrouter

Qwen/qwen3 Coder:exacto is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image underst...

Streaming Vision
O

LangMart: Qwen/qwen3 Coder:free

Openrouter

Free tier version of Qwen/qwen3 Coder:free. This model supports multimodal capabilities including vision and image understanding. The model is optimized for code generation and programming tasks.

Streaming Vision
O

LangMart: Relace: Relace Apply 3

Openrouter

Relace Apply 3 is a specialized code-patching LLM that merges AI-suggested edits straight into your source files. It can apply updates from GPT-4o, Claude, and others into your files at 10,000 tokens/...
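
A rough sketch of the apply-model workflow this description implies: one model proposes an edit, and the apply model merges it into the original file. The slug, base URL, and the way the file and edit are packed into the prompt are assumptions, not Relace's documented format.

```python
# Hypothetical sketch of an "apply" workflow: merge a suggested edit into the
# original source with a code-patching model. Slug, base URL, and the prompt
# layout are all assumptions for illustration.
from openai import OpenAI

client = OpenAI(base_url="https://example-gateway/api/v1", api_key="YOUR_KEY")

original_code = "def greet(name):\n    print('Hi ' + name)\n"
suggested_edit = "def greet(name):\n    return f'Hello, {name}!'\n"  # e.g. produced by another model

response = client.chat.completions.create(
    model="relace/relace-apply-3",  # assumed slug
    messages=[{
        "role": "user",
        "content": f"<original>\n{original_code}</original>\n<edit>\n{suggested_edit}</edit>",
    }],
)
merged_file = response.choices[0].message.content  # full updated file contents
print(merged_file)
```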

O

LangMart: Relace: Relace Search

Openrouter

The relace-search model uses 4-12 `view_file` and `grep` tools in parallel to explore a codebase and return the files relevant to the user's request.

O

LangMart: ReMM SLERP 13B

Openrouter

A recreation trial of the original MythoMax-L2-13B but with updated models. #merge

O

LangMart: Sao10K: Llama 3 8B Lunaris

Openrouter

Lunaris 8B is a versatile generalist and roleplaying model based on Llama 3. It's a strategic merge of multiple models, designed to balance creativity with improved logic and general knowledge.

O

LangMart: Sao10k: Llama 3 Euryale 70B v2.1

Openrouter

Euryale 70B v2.1 is a model focused on creative roleplay from [Sao10k](https://ko-fi.com/sao10k).

O

LangMart: Sao10k/l3.1 70b Hanami X1

Openrouter

Sao10k/l3.1 70b Hanami X1 is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image under...

Streaming Vision
O

LangMart: Sao10k/l3.1 Euryale 70b

Openrouter

Sao10k/l3.1 Euryale 70b is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image underst...

Streaming Vision
O

LangMart: Sao10k/l3.3 Euryale 70b

Openrouter

Sao10k/l3.3 Euryale 70b is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image underst...

Streaming Vision
O

LangMart: SorcererLM 8x22B

Openrouter

SorcererLM is an advanced RP and storytelling model, built as a low-rank 16-bit LoRA fine-tune of [WizardLM-2 8x22B](/microsoft/wizardlm-2-8x22b).

O

LangMart: StepFun: Step3

Openrouter

Step3 is a cutting-edge multimodal reasoning model—built on a Mixture-of-Experts architecture with 321B total parameters and 38B active. It is designed end-to-end to minimize decoding costs while deli...

Vision
O

LangMart: Switchpoint Router

Openrouter

Switchpoint AI's router instantly analyzes your request and directs it to the optimal AI from an ever-evolving library.

O

LangMart: Tencent: Hunyuan A13B Instruct

Openrouter

Hunyuan-A13B is a 13B active parameter Mixture-of-Experts (MoE) language model developed by Tencent, with a total parameter count of 80B and support for reasoning via Chain-of-Thought. It offers compe...

O

LangMart: TheDrummer: Rocinante 12B

Openrouter

Rocinante 12B is designed for engaging storytelling and rich prose.

O

LangMart: TheDrummer: Skyfall 36B V2

Openrouter

Skyfall 36B v2 is an enhanced iteration of Mistral Small 2501, specifically fine-tuned for improved creativity, nuanced writing, role-playing, and coherent storytelling.

O

LangMart: TheDrummer: UnslopNemo 12B

Openrouter

UnslopNemo v4.1 is the latest addition from the creator of Rocinante, designed for adventure writing and role-play scenarios.

O

LangMart: Thedrummer/cydonia 24b V4.1

Openrouter

Thedrummer/cydonia 24b V4.1 is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image und...

Streaming Vision
O

LangMart: Thudm/glm 4.1v 9b Thinking

Openrouter

Thudm/glm 4.1v 9b Thinking with extended thinking capabilities. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: TNG: DeepSeek R1T Chimera

Openrouter

DeepSeek-R1T-Chimera is created by merging DeepSeek-R1 and DeepSeek-V3 (0324), combining the reasoning capabilities of R1 with the token efficiency improvements of V3. It is based on a DeepSeek-MoE Tr...

O

LangMart: TNG: DeepSeek R1T2 Chimera

Openrouter

DeepSeek-TNG-R1T2-Chimera is the second-generation Chimera model from TNG Tech. It is a 671B-parameter mixture-of-experts text-generation model assembled from DeepSeek-AI’s R1-0528, R1, and V3-0324 c...

O

LangMart: TNG: R1T Chimera

Openrouter

TNG-R1T-Chimera is an experimental LLM with a penchant for creative storytelling and character interaction. It is a derivative of the original TNG/DeepSeek-R1T-Chimera released in April 2025 and is availa...

O

LangMart: Tngtech/deepseek R1t Chimera:free

Openrouter

Free tier version of Tngtech/deepseek R1t Chimera:free. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Tngtech/deepseek R1t2 Chimera:free

Openrouter

Free tier version of Tngtech/deepseek R1t2 Chimera:free. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Tngtech/tng R1t Chimera:free

Openrouter

Free tier version of Tngtech/tng R1t Chimera:free. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Tongyi DeepResearch 30B A3B

Openrouter

Tongyi DeepResearch is an agentic large language model developed by Tongyi Lab, with 30 billion total parameters activating only 3 billion per token. It's optimized for long-horizon, deep information-...

O

LangMart: WizardLM-2 8x22B

Openrouter

WizardLM-2 8x22B is Microsoft AI's most advanced Wizard model. It demonstrates highly competitive performance compared to leading proprietary models, and it consistently outperforms all existing state...

O

LangMart: X Ai/grok 4.1 Fast

Openrouter

X Ai/grok 4.1 Fast is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image understandin...

Streaming Vision
O

LangMart: xAI: Grok 3

Openrouter

Grok 3 is the latest model from xAI. It's their flagship model that excels at enterprise use cases like data extraction, coding, and text summarization. Possesses deep domain knowledge in finance, hea...

O

LangMart: xAI: Grok 3 Beta

Openrouter

Grok 3 is the latest model from xAI. It's their flagship model that excels at enterprise use cases like data extraction, coding, and text summarization. Possesses deep domain knowledge in finance, hea...

O

LangMart: xAI: Grok 3 Mini

Openrouter

A lightweight model that thinks before responding. Fast, smart, and great for logic-based tasks that do not require deep domain knowledge. The raw thinking traces are accessible.

O

LangMart: xAI: Grok 3 Mini Beta

Openrouter

Grok 3 Mini is a lightweight, smaller thinking model. Unlike traditional models that generate answers immediately, Grok 3 Mini thinks before responding. It’s ideal for reasoning-heavy tasks that don’t...

O

LangMart: xAI: Grok 4

Openrouter

Grok 4 is xAI's latest reasoning model with a 256k context window. It supports parallel tool calling, structured outputs, and both image and text inputs. Note that reasoning is not exposed, reasoning ...

Vision
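
Since the description mentions structured outputs and tool calling, here is a sketch of a structured-output request through an OpenAI-compatible endpoint; the slug, base URL, and gateway support for `response_format` with a JSON schema are assumptions.

```python
# Minimal sketch: requesting structured output from a Grok 4-class model via an
# OpenAI-compatible endpoint. Slug, base URL, and `response_format` support are assumptions.
import json
from openai import OpenAI

client = OpenAI(base_url="https://example-gateway/api/v1", api_key="YOUR_KEY")

schema = {
    "name": "extraction",
    "strict": True,
    "schema": {
        "type": "object",
        "properties": {
            "company": {"type": "string"},
            "amount_usd": {"type": "number"},
        },
        "required": ["company", "amount_usd"],
        "additionalProperties": False,
    },
}

response = client.chat.completions.create(
    model="x-ai/grok-4",  # assumed slug
    messages=[{"role": "user", "content": "Acme Corp raised $12.5M. Extract the details."}],
    response_format={"type": "json_schema", "json_schema": schema},
)
print(json.loads(response.choices[0].message.content))
```
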
O

LangMart: xAI: Grok 4 Fast

Openrouter

Grok 4 Fast is xAI's latest multimodal model with SOTA cost-efficiency and a 2M token context window. It comes in two flavors: non-reasoning and reasoning. Read more about the model on xAI's [news pos...

Vision
O

LangMart: xAI: Grok Code Fast 1

Openrouter

Grok Code Fast 1 is a speedy and economical reasoning model that excels at agentic coding. With reasoning traces visible in the response, developers can steer Grok Code for high-quality workflows.

O

LangMart: Xiaomi/mimo V2 Flash:free

Openrouter

Xiaomi/mimo V2 Flash:free with optimized speed for rapid response generation. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Z Ai/glm 4.5

Openrouter

Z Ai/glm 4.5 is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Z Ai/glm 4.5 Air

Openrouter

Z Ai/glm 4.5 Air is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image understanding....

Streaming Vision
O

LangMart: Z Ai/glm 4.5 Air:free

Openrouter

Free tier version of Z Ai/glm 4.5 Air:free. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Z Ai/glm 4.5v

Openrouter

Z Ai/glm 4.5v is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Z Ai/glm 4.6

Openrouter

Z Ai/glm 4.6 is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Z Ai/glm 4.6:exacto

Openrouter

Z Ai/glm 4.6:exacto is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image understandi...

Streaming Vision
O

LangMart: Z Ai/glm 4.6v

Openrouter

Z Ai/glm 4.6v is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Z Ai/glm 4.7

Openrouter

Z Ai/glm 4.7 is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image understanding.

Streaming Vision
O

LangMart: Z.AI: GLM 4 32B

Openrouter

GLM 4 32B is a cost-effective foundation language model.

P

Perplexity PPLX 7B Online

Perplexity

**Model ID**: `perplexity/pplx-7b-online`

Vision
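
The model ID above is the string passed as the `model` parameter on an OpenAI-compatible endpoint; a minimal sketch follows, where the base URL and key handling are assumptions.

```python
# Minimal sketch: the catalog's model ID is passed verbatim as the `model`
# parameter. Base URL and API key handling are assumptions.
from openai import OpenAI

client = OpenAI(base_url="https://example-gateway/api/v1", api_key="YOUR_KEY")

response = client.chat.completions.create(
    model="perplexity/pplx-7b-online",  # model ID copied from this entry
    messages=[{"role": "user", "content": "Summarize today's top technology headline."}],
)
print(response.choices[0].message.content)
```
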
P

Perplexity Sonar Reasoning

Perplexity

**Model ID**: `perplexity/sonar-reasoning`

Vision
P

Perplexity: Sonar Pro

Perplexity

Perplexity Sonar Pro is an advanced search-augmented language model designed for in-depth, multi-step queries with added extensibility. It offers approximately double the citations per search compared...

Vision
Q

Qwen 1.5 14B Chat

Qwen

**Model ID**: `qwen/qwen-1.5-14b-chat`

Vision
Q

Qwen 2.5 72B Instruct

Qwen

**Model ID**: `qwen/qwen-2.5-72b-instruct`

Vision
Q

Qwen 2.5 7B Instruct

Qwen

**Model ID**: `qwen/qwen-2.5-7b-instruct`

Vision
Q

Qwen: Qwen3 235B A22B

Qwen

Qwen3-235B-A22B is a 235 billion parameter mixture-of-experts (MoE) model developed by Qwen (Alibaba Cloud), activating 22 billion parameters per forward pass. This architecture allows the model to de...

Q

Qwen: Qwen3 VL 8B Instruct

Qwen

**Model ID**: `qwen/qwen3-vl-8b-instruct`

Vision
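
Because this entry is tagged Vision, here is a sketch of a multimodal request using the OpenAI-style content-parts format; the image URL, base URL, and gateway support for this format are assumptions.

```python
# Minimal sketch: sending text plus an image to a vision-tagged model using the
# OpenAI-style content-parts message format. URL, base URL, and format support
# are assumptions.
from openai import OpenAI

client = OpenAI(base_url="https://example-gateway/api/v1", api_key="YOUR_KEY")

response = client.chat.completions.create(
    model="qwen/qwen3-vl-8b-instruct",  # model ID copied from this entry
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe the chart in one paragraph."},
            {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```
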
Q

Qwen2.5 Coder 32B Instruct

Qwen

Qwen2.5 Coder 32B Instruct is a code-focused large language model representing the latest iteration in the Qwen coding series. It replaces the earlier CodeQwen1.5 with significantly enhanced capabilit...

Q

Qwen2.5 VL 72B Instruct

Qwen

Qwen2.5 VL 72B Instruct is a state-of-the-art multimodal model that excels at visual understanding and reasoning tasks. The model demonstrates exceptional capabilities in:

Vision
R

Reka Flash

Reka

**Model ID**: `reka/reka-flash`

Vision
S

Stable Diffusion 3.5 Large

Stabilityai

Stable Diffusion 3.5 Large is the most powerful model in the Stable Diffusion family, featuring superior quality and prompt adherence. It is a Multimodal Diffusion Transformer (MMDiT) text-to-image ge...

Vision
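
A sketch of what text-to-image usage typically looks like when a provider exposes an OpenAI-style images endpoint for this model; whether a given provider does so, along with the slug and size option, is an assumption.

```python
# Hypothetical sketch: generating an image with a diffusion model through an
# OpenAI-style images endpoint. Endpoint availability, slug, and size are assumptions.
from openai import OpenAI

client = OpenAI(base_url="https://example-gateway/api/v1", api_key="YOUR_KEY")

result = client.images.generate(
    model="stabilityai/stable-diffusion-3.5-large",  # assumed slug
    prompt="A lighthouse on a cliff at dusk, painted in watercolor",
    size="1024x1024",
    n=1,
)
print(result.data[0].url)  # some providers return base64 data instead of a URL
```
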
T

Together AI: Arcee AI Agent

Together AI

Arcee AI Agent is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.

Streaming Vision
T

Together AI: Arcee AI Caller Agent

Together AI

Arcee AI Caller Agent is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.

Streaming Vision
T

Together AI: Arcee AI Spotlight

Together AI

Arcee AI Spotlight is a compact vision-language model from Arcee AI for image understanding and multimodal tasks, served through Together AI.

Streaming Vision
T

Together AI: Arcee AI Virtuoso

Together AI

Arcee AI Virtuoso is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.

Streaming Vision
T

Together AI: BGE Base EN v1.5

Together AI

BGE Base EN v1.5 is an English text-embedding model from BAAI, designed for semantic search, retrieval, and similarity tasks, served through Together AI.

Streaming Vision
T

Together AI: BGE Large EN v1.5

Together AI

BGE Large EN v1.5 is a larger English text-embedding model from BAAI, designed for semantic search, retrieval, and similarity tasks, served through Together AI.

Streaming Vision
T

Together AI: Cogito v1 Preview 32B

Together AI

Cogito v1 Preview 32B is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.

Streaming Vision
T

Together AI: Cogito v1 Preview 70B

Together AI

Cogito v1 Preview 70B is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.

Streaming Vision
T

Together AI: Cogito v1 Preview 8B

Together AI

Cogito v1 Preview 8B is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.

Streaming Vision
T

Together AI: DeepSeek R1

Together AI

DeepSeek R1 is DeepSeek's open-weight reasoning model, producing an explicit chain of thought before answering, served through Together AI.

Streaming Vision
T

Together AI: DeepSeek V3

Together AI

DeepSeek V3 is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.

Streaming Vision
T

Together AI: DeepSeek V3 0324

Together AI

DeepSeek V3 0324 is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.

Streaming Vision
T

Together AI: FLUX.1 Dev

Together AI

FLUX.1 Dev is an open-weight text-to-image generation model from Black Forest Labs, served through Together AI.

Streaming Vision
T

Together AI: FLUX.1 Pro

Together AI

FLUX.1 Pro is Black Forest Labs' flagship text-to-image generation model, offering higher image quality and prompt adherence than the open-weight variants, served through Together AI.

Streaming Vision
T

Together AI: FLUX.1.1 Pro

Together AI

FLUX.1.1 Pro is Black Forest Labs' successor to FLUX.1 Pro, generating images faster and with improved quality. It is served via Together AI's image generation API.

Streaming Vision
T

Together AI: FLUX.1.1 Pro Ultra

Together AI

FLUX.1.1 Pro Ultra is the high-resolution mode of FLUX.1.1 Pro from Black Forest Labs, targeting larger output images without a proportional slowdown. It is served via Together AI's image generation API.

Streaming Vision
T

Together AI: GLM-4 9B

Together AI

GLM-4 9B is a 9B-parameter open chat model from Z.AI's (Zhipu AI's) GLM-4 family, with solid multilingual and tool-use capabilities for its size. It is served via Together AI.

Streaming Vision
T

Together AI: GLM-Z1 9B

Together AI

GLM-Z1 9B is the reasoning-focused variant of Z.AI's GLM family at the 9B scale, tuned for deeper step-by-step problem solving than the base GLM-4 9B. It is served via Together AI.

Streaming Vision
T

Together AI: GTE ModernBERT Base

Together AI

GTE ModernBERT Base is a text-embedding model that combines the GTE (General Text Embeddings) training recipe with the ModernBERT encoder. It produces dense vectors for retrieval and similarity search and is served through Together AI's embeddings API, not as a chat model.

Streaming Vision
T

Together AI: Kimi K2 Instruct

Together AI

Kimi K2 Instruct is Moonshot AI's large Mixture-of-Experts instruction model (roughly 1T total parameters with about 32B active per token), notable for strong coding and agentic tool-use performance. It is served via Together AI.

Streaming Vision
T

Together AI: Llama 3.1 405B

Together AI

Llama 3.1 405B is Meta's largest Llama 3.1 model, a 405B-parameter dense instruction-tuned LLM with a 128K-token context window and tool-calling support. It is served via Together AI.

Tools Streaming Vision
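
The Tools tag indicates function-calling support. A minimal sketch of OpenAI-style tool calling against this model, assuming Together AI's OpenAI-compatible endpoint and the model ID meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo (both assumptions), with a hypothetical get_weather tool purely for illustration:

# Hypothetical tool-calling sketch (base URL, model ID, and tool are assumptions).
from openai import OpenAI

client = OpenAI(base_url="https://api.together.xyz/v1", api_key="YOUR_API_KEY")  # assumed base URL

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, not part of any real API
        "description": "Return current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo",  # assumed model ID
    messages=[{"role": "user", "content": "What's the weather in Lisbon?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)  # tool arguments arrive as a JSON string to parse

If the model chooses a tool, the application executes it and sends the result back as a tool-role message for the final answer.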
T

Together AI: Llama 3.1 70B

Together AI

Llama 3.1 70B is Meta's 70B-parameter Llama 3.1 instruction-tuned model, balancing quality and cost with a 128K-token context window and tool-calling support. It is served via Together AI.

Tools Streaming Vision
T

Together AI: Llama 3.1 8B

Together AI

Llama 3.1 8B is Meta's lightweight Llama 3.1 instruction-tuned model, suited to high-volume, latency-sensitive workloads while retaining the 128K-token context window. It is served via Together AI.

Tools Streaming Vision
T

Together AI: Llama 3.3 70B

Together AI

Llama 3.3 70B is Meta's updated 70B instruction-tuned model, approaching the quality of Llama 3.1 405B on many benchmarks at a fraction of the serving cost. It is served via Together AI.

Tools Streaming Vision
T

Together AI: Llama 4 Maverick

Together AI

Llama 4 Maverick is Meta's natively multimodal Mixture-of-Experts model (about 400B total parameters with 17B active across 128 experts), handling text and images in a single model. It is served via Together AI.

Streaming Vision
T

Together AI: Llama 4 Scout

Together AI

Llama 4 Scout is the smaller Llama 4 Mixture-of-Experts model from Meta (17B active parameters across 16 experts), designed for efficient multimodal inference with a very long context window. It is served via Together AI.

Streaming Vision
T

Together AI: Llama Guard 2 8B

Together AI

Llama Guard 2 8B is Meta's 8B safety classifier built on Llama 3. It labels prompts and responses as safe or unsafe against a standard hazard taxonomy and is meant to sit in front of a chat model rather than to converse itself. It is served via Together AI.

Streaming Vision
T

Together AI: Llama Guard 3 11B Vision Turbo

Together AI

Llama Guard 3 11B Vision Turbo is Meta's multimodal safety classifier, able to screen prompts and responses that contain both images and text. It is served via Together AI in a latency-optimized Turbo configuration.

Streaming Vision
T

Together AI: Llama Guard 3 8B

Together AI

Llama Guard 3 8B is Meta's text safety classifier based on Llama 3.1 8B, returning a safe/unsafe verdict plus the violated hazard categories for a given prompt or response. It is served via Together AI.

Streaming Vision
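
Llama Guard is invoked like a chat model, but its output is a safety verdict rather than an answer: it returns "safe", or "unsafe" followed by the violated category codes. A minimal sketch, assuming Together AI's OpenAI-compatible endpoint and the model ID meta-llama/Meta-Llama-Guard-3-8B (both assumptions):

# Safety-classification sketch (assumed base URL and model ID).
from openai import OpenAI

client = OpenAI(base_url="https://api.together.xyz/v1", api_key="YOUR_API_KEY")  # assumed base URL

verdict = client.chat.completions.create(
    model="meta-llama/Meta-Llama-Guard-3-8B",  # assumed model ID
    messages=[{"role": "user", "content": "How do I pick the lock on my neighbour's door?"}],
)
print(verdict.choices[0].message.content)  # e.g. "unsafe" followed by a category code such as "S2"

Applications typically parse this verdict and block or rewrite the request before it ever reaches the main chat model.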
T

Together AI: Llama Guard 4 12B

Together AI

Llama Guard 4 12B is the Llama 4 generation of Meta's safety classifiers, a 12B multimodal moderation model for screening text and image content. It is served via Together AI.

Streaming Vision
T

Together AI: M2-BERT 80M 32K Retrieval

Together AI

M2-BERT 80M 32K Retrieval is an 80M-parameter retrieval embedding model based on the Monarch Mixer (M2) BERT architecture, fine-tuned for long-context inputs of up to 32K tokens. It is served through Together AI's embeddings API, not as a chat model.

Streaming Vision
T

Together AI: Mixtral 8x22B

Together AI

Mixtral 8x22B is Mistral AI's sparse Mixture-of-Experts model with eight 22B experts (about 141B total parameters, roughly 39B active per token), offering strong multilingual and coding performance. It is served via Together AI.

Tools Streaming Vision
T

Together AI: Multilingual e5 Large Instruct

Together AI

Multilingual E5 Large Instruct is a multilingual text-embedding model from the E5 family, instruction-tuned so that task descriptions can steer the embeddings for retrieval, classification, and clustering. It is served through Together AI's embeddings API.

Streaming Vision
T

Together AI: Mxbai Rerank Large V2

Together AI

Mxbai Rerank Large V2 is a reranking model from Mixedbread AI. Given a query and a set of candidate documents, it returns relevance scores used to reorder search results, and is accessed through a rerank endpoint rather than chat completions. It is served via Together AI.

Streaming Vision
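
Rerankers take a query plus candidate documents and return relevance scores, so they sit behind a rerank route rather than a chat route. A sketch with the requests library, assuming a Cohere-style rerank endpoint at https://api.together.xyz/v1/rerank and the model ID mixedbread-ai/Mxbai-Rerank-Large-V2; the route, the ID, and the response shape are all assumptions to check against the provider's rerank documentation.

# Hypothetical rerank request (endpoint, model ID, and response shape are assumptions).
import requests

payload = {
    "model": "mixedbread-ai/Mxbai-Rerank-Large-V2",  # assumed model ID
    "query": "How do I rotate an API key?",
    "documents": [
        "Billing and invoices are available in the account dashboard.",
        "API keys can be rotated from the security settings page.",
        "Our SLA covers 99.9% monthly uptime.",
    ],
    "top_n": 2,
}
resp = requests.post(
    "https://api.together.xyz/v1/rerank",  # assumed endpoint path
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json())  # typically a list of {index, relevance_score} pairs for the documents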
T

Together AI: Qwen 2.5 72B

Together AI

Qwen 2.5 72B is the flagship dense instruction-tuned model of Alibaba's Qwen 2.5 family, with strong multilingual, coding, and tool-calling capabilities. It is served via Together AI.

Tools Streaming Vision
T

Together AI: Qwen 2.5 7B

Together AI

Qwen 2.5 7B is the compact instruction-tuned member of Alibaba's Qwen 2.5 family, suited to cost-sensitive chat and extraction workloads. It is served via Together AI.

Streaming Vision
T

Together AI: Qwen 2.5 Coder 32B

Together AI

Qwen 2.5 Coder 32B is the code-specialized variant of Qwen 2.5, tuned for code generation, completion, and repair across many programming languages. It is served via Together AI.

Streaming Vision
T

Together AI: Qwen QwQ 32B

Together AI

Qwen QwQ 32B is the Qwen team's 32B reasoning model, which produces long chains of thought before answering and targets math and coding problems. It is served via Together AI.

Streaming Vision
T

Together AI: Qwen3 235B A22B

Together AI

Qwen3 235B A22B is the flagship Mixture-of-Experts model of the Qwen3 family, with 235B total parameters and 22B active per token, supporting both a thinking mode for hard problems and a faster non-thinking mode. It is served via Together AI.

Streaming Vision
T

Together AI: Qwen3 30B A3B

Together AI

Qwen3 30B A3B is a lightweight Qwen3 Mixture-of-Experts model with 30B total parameters and about 3B active per token, giving near-dense quality at much lower inference cost. It is served via Together AI.

Streaming Vision
T

Together AI: Qwen3 32B

Together AI

Qwen3 32B is the largest dense model in the Qwen3 family, a strong general-purpose chat and reasoning model. It is served via Together AI.

Streaming Vision
T

Together AI: Qwen3 8B

Together AI

Qwen3 8B is a dense 8B-parameter Qwen3 model, aimed at fast, inexpensive chat and structured-output tasks. It is served via Together AI.

Streaming Vision
T

Together AI: Salesforce Llama Rank V1 8B

Together AI

Salesforce Llama Rank V1 8B is an 8B-parameter reranking model from Salesforce AI Research. It scores how relevant each candidate document is to a query and is used in retrieval pipelines rather than for chat. It is served via Together AI.

Streaming Vision
T

Together AI: Stable Diffusion 3.5 Large Turbo

Together AI

Stable Diffusion 3.5 Large Turbo is Stability AI's distilled text-to-image model, generating images in only a few denoising steps. It is not a conversational model; Together AI serves it through its image generation API.

Streaming Vision
T

Together AI: Stable Diffusion XL

Together AI

Stable Diffusion XL is Stability AI's widely used text-to-image model, producing 1024x1024 images from text prompts. It is served via Together AI's image generation API.

Streaming Vision
T

Together AI: VirtueGuard Text Lite

Together AI

VirtueGuard Text Lite is a lightweight content-safety model from Virtue AI that screens text prompts and responses for policy violations; it is a guardrail classifier rather than a chat model. It is served via Together AI.

Streaming Vision
T

Together AI: Whisper Large v3

Together AI

Whisper Large v3 is OpenAI's open-weight speech-to-text model, transcribing and translating audio across dozens of languages. It is served via Together AI as an audio model, not a chat model.

Streaming Vision
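
Because Whisper takes audio rather than chat messages, a quick way to try it is to run the open weights locally with Hugging Face Transformers. A minimal sketch; openai/whisper-large-v3 is the public Hugging Face checkpoint, and the audio file path is a placeholder.

# Local transcription sketch using the open Whisper Large v3 weights.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-large-v3")
result = asr("meeting_recording.wav")  # placeholder path to a local audio file
print(result["text"])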
U

Toppy M 7B

Undi95

Toppy M 7B is a wild 7B-parameter model that merges several models using the `task_arithmetic` merge method from [mergekit](https://github.com/cg123/mergekit). It combines multiple fine-tuned models into a single merged checkpoint.

U

SOLAR-10.7B-Instruct-v1.0 Model Documentation

Upstage

SOLAR-10.7B-Instruct-v1.0 is Upstage's 10.7B-parameter instruction-tuned model, built by depth up-scaling a Llama-architecture base. Its tokenizer loads with Hugging Face Transformers:

from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("upstage/SOLAR-10.7B-Instruct-v1.0")
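
For completeness, a minimal local text-generation sketch with the same checkpoint; device_map="auto" (which requires the accelerate package) and the prompt are assumptions about your environment, not part of the official example.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "upstage/SOLAR-10.7B-Instruct-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # assumes accelerate is installed

messages = [{"role": "user", "content": "Summarize what depth up-scaling means in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))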

X

xAI Grok 3 Beta

xAI

Grok 3 Beta is xAI's flagship reasoning model, described as their "most advanced model" showcasing superior reasoning capabilities and extensive pretraining knowledge. It excels at enterprise use cases such as data extraction, coding, and summarization.

Vision
X

MiMo-V2-Flash (Free)

Xiaomi

MiMo-V2-Flash is an open-source language model developed by Xiaomi featuring a Mixture-of-Experts (MoE) architecture with 309B total parameters and 15B active parameters. It employs hybrid attention mechanisms.

Reasoning
Z

Z.AI: GLM 4.7

Z.AI

GLM-4.7 is Z.AI's latest flagship model, featuring upgrades in two key areas: enhanced programming capabilities and more stable multi-step reasoning/execution.

Vision