Model Documentation
Comprehensive reference for AI and LLM models. Browse pricing, capabilities, and usage examples.
Featured Models
Anthropic: Claude 3.5 Sonnet (20241022)
Anthropic's Claude 3.5 Sonnet is a balanced AI model that combines strong intelligence with fast response times. This specific version from October 22, 2024 provides a fixed checkpoint for reproducibl...
Anthropic: Claude Opus 4
Claude Opus 4 is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Anthropic, this model is optimized for its specific use case category.
Anthropic: Claude Opus 4.1
Hybrid reasoning model pushing the frontier for coding and AI agents, with extended thinking capabilities. Achieves 74.5% on SWE-bench Verified with 32K max output tokens.
Anthropic: Claude Opus 4.5
Claude Opus 4.5 is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Anthropic, this flagship model represents the latest capabilities and state-of-the-art...
Anthropic: Claude Sonnet 4
Claude Sonnet 4 is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Anthropic, this model is optimized for its specific use case category.
Anthropic: Claude Sonnet 4.5
Claude Sonnet 4.5 is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Anthropic, this premium model offers excellent quality and balanced performance acro...
Recently Updated
Z.AI: GLM 4.7
GLM-4.7 is Z.AI's latest flagship model, featuring upgrades in two key areas: enhanced programming capabilities and more stable multi-step reasoning/execution.
MiMo-V2-Flash (Free)
MiMo-V2-Flash is an open-source language model developed by Xiaomi featuring a Mixture-of-Experts (MoE) architecture with 309B total parameters and 15B active parameters. It employs hybrid attention m...
xAI Grok 3 Beta
Grok 3 Beta is xAI's flagship reasoning model, described as their "most advanced model" showcasing superior reasoning capabilities and extensive pretraining knowledge. It excels at enterprise use case...
SOLAR-10.7B-Instruct-v1.0 Model Documentation
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("upstage/SOLAR-10.7B-Instruct-v1.0")
Toppy M 7B
Toppy M 7B is a wild 7B parameter model that merges several models using the new `task_arithmetic` merge method from [mergekit](https://github.com/cg123/mergekit). This model combines multiple fine-tu...
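As a rough sketch of what `task_arithmetic` does, each fine-tune contributes a "task vector" (its parameter delta from the base model), and the merge adds a weighted sum of those deltas back onto the base. The weights and values below are toy numbers for illustration, not Toppy M 7B's actual merge recipe.

```python
# Toy illustration of task-arithmetic merging: merged = base + sum_i w_i * (ft_i - base).
# Real merges operate on full weight tensors; plain Python lists stand in here.

def task_arithmetic_merge(base, fine_tunes, weights):
    """Merge parameter lists by adding weighted task vectors onto the base."""
    merged = []
    for j, b in enumerate(base):
        delta = sum(w * (ft[j] - b) for ft, w in zip(fine_tunes, weights))
        merged.append(b + delta)
    return merged

base = [1.0, 2.0]
ft_a = [1.5, 2.0]  # task vector (0.5, 0.0)
ft_b = [1.0, 3.0]  # task vector (0.0, 1.0)
merged = task_arithmetic_merge(base, [ft_a, ft_b], [1.0, 0.5])  # [1.5, 2.5]
```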
Together AI: VirtueGuard Text Lite
VirtueGuard Text Lite is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.
All Models
779 models
Yi 34B Chat Model Documentation
The Yi series models are large language models trained from scratch by developers at 01.AI. This 34B parameter model has been instruct-tuned specifically for chat applications, providing optimized per...
Yi Vision 34B Model Documentation
Yi-VL-34B is the world's first open-source 34 billion parameter vision-language model, combining advanced image understanding with multilingual text generation capabilities. It represents a significan...
AllenAI: Olmo 3.1 32B Think (free)
This is a 32-billion parameter model emphasizing reasoning capabilities. The system excels at deep reasoning, complex multi-step logic, and advanced instruction following. Version 3.1 represents impro...
Goliath 120B Model Documentation
Goliath 120B is a merged model that combines "two fine-tuned Llama 70B models into one 120B model" by merging Xwin and Euryale variants. The model was created using the mergekit framework by @chargodd...
Amazon Nova 2 Lite v1
Amazon Nova Pro 1.0
Amazon's multimodal model designed to balance "accuracy, speed, and cost for a wide range of tasks." As of December 2024, it demonstrates state-of-the-art performance on visual question answering (Tex...
Magnum v4 72B
Magnum v4 72B is a fine-tuned version of Qwen2.5 72B that aims to replicate the prose quality of Claude 3 models, specifically Sonnet and Opus. This model is designed for creative writing and roleplay...
Anthropic Claude Models on LangMart
Anthropic: Claude 3 Haiku (20240307)
Anthropic's fastest and most compact Claude 3 model. Designed for near-instant responses while maintaining high-quality output. Ideal for high-volume, cost-sensitive applications.
Anthropic: Claude 3 Opus
Anthropic's most intelligent model with best-in-market performance on highly complex tasks. Navigates open-ended prompts and sight-unseen scenarios with remarkable fluency and human-like understanding...
Anthropic: Claude 3 Sonnet
Claude 3 Sonnet is an ideal balance of intelligence and speed for enterprise workloads. Maximum utility at a lower price, dependable, balanced for scaled deployments. It offers excellent performance w...
Anthropic: Claude 3 Sonnet (20240229)
Anthropic's balanced Claude 3 model offering a good combination of capability and speed. Suitable for most general-purpose applications requiring quality responses.
Anthropic: Claude 3.5 Sonnet (20241022)
Anthropic's Claude 3.5 Sonnet is a balanced AI model that combines strong intelligence with fast response times. This specific version from October 22, 2024 provides a fixed checkpoint for reproducibl...
Anthropic: Claude Haiku 4.5
Claude Haiku 4.5 is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Anthropic, this cost-effective model is optimized for efficient inference while maint...
Anthropic: Claude Opus 4
Claude Opus 4 is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Anthropic, this model is optimized for its specific use case category.
Anthropic: Claude Opus 4.1
Hybrid reasoning model pushing the frontier for coding and AI agents, with extended thinking capabilities. Achieves 74.5% on SWE-bench Verified with 32K max output tokens.
Anthropic: Claude Opus 4.5
Claude Opus 4.5 is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Anthropic, this flagship model represents the latest capabilities and state-of-the-art...
Anthropic: Claude Sonnet 4
Claude Sonnet 4 is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Anthropic, this model is optimized for its specific use case category.
Anthropic: Claude Sonnet 4.5
Claude Sonnet 4.5 is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Anthropic, this premium model offers excellent quality and balanced performance acro...
Claude 3 Haiku
Claude 3 Haiku is Anthropic's fastest and most compact model, designed for near-instant responsiveness with quick and accurate targeted performance. It excels at tasks requiring rapid responses while ...
Claude 3.5 Haiku
Claude 3.5 Haiku offers enhanced capabilities in speed, coding accuracy, and tool use. It is engineered to excel in real-time applications, delivering quick response times.
Claude Opus 4
Claude Opus 4 is benchmarked as the world's best coding model, at time of release, bringing sustained performance on complex, long-running tasks and agent workflows. It sets new benchmarks in software...
Claude Sonnet 4
Claude Sonnet 4 represents a significant upgrade from its predecessor, Claude Sonnet 3.7, with particular strength in coding and reasoning tasks. The model achieves state-of-the-art performance on SWE...
Arcee AI: Spotlight
Trinity Mini
Baidu: ERNIE 4.5 21B A3B Thinking
**Model ID:** `baidu/ernie-4.5-21b-a3b-thinking`
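As a hypothetical sketch of how a model ID like this is typically used, the snippet below builds an OpenAI-compatible chat request body. The endpoint URL and authentication are intentionally omitted; only the payload shape, with the model ID from above, is shown.

```python
# Build a JSON-serializable chat-completion payload (OpenAI-compatible shape).
# The helper name and default max_tokens are illustrative assumptions.

def build_chat_request(model_id, user_message, max_tokens=512):
    """Return the request body for a chat completion call."""
    return {
        "model": model_id,
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("baidu/ernie-4.5-21b-a3b-thinking", "Hello")
```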
ERNIE 4.5 VL 424B A47B Model Details
Black Forest Labs: FLUX.2 Max
FLUX.2 [max] is the new top-tier image model from Black Forest Labs, pushing image quality, prompt understanding, and editing consistency to the highest level yet.
FLUX.1 [schnell] - Black Forest Labs
FLUX.1 [schnell] is Black Forest Labs' fastest image generation model, a 12 billion parameter rectified flow transformer capable of generating high-quality images from text descriptions. Trained using...
Cohere Command R (08-2024)
Cohere Command R+ (08-2024)
Cohere: Command A
Command A is an open-weights 111B parameter model with a 256k context window focused on delivering great performance across agentic, multilingual, and coding use cases. The model emphasizes delivering...
Cohere: Command R
Command R is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Cohere, this model provides solid performance and is suitable for most use cases.
Cohere: Command R+
Command R+ is a 104B-parameter large language model from Cohere, purpose-built for enterprise applications. It excels at roleplay, general consumer use cases, and Retrieval Augmented Generation (RAG)....
Cohere: Command R7B
Command R7B is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Cohere, this model is optimized for its specific use case category.
Cohere: Embed 4
Embed 4 is a text embedding model for semantic search and vector-based tasks. Developed by Cohere, this model is optimized for its specific use case category.
Cohere: Rerank 4 Fast
Rerank 4 Fast is an AI model for reranking search results by relevance. Developed by Cohere, this model is optimized for its specific use case category.
Cohere: Rerank 4 Pro
Rerank 4 Pro is an AI model for reranking search results by relevance. Developed by Cohere, this model is optimized for its specific use case category.
Collections: Auto Free
The **Auto Free Collection** is a dynamic system-level collection that automatically routes inference requests to one of the predefined free models available in the system. This collection provides co...
Collections: Flash 2.5
The **Flash 2.5 Collection** is a curated organization-level collection of fast-responding, lightweight language models optimized for speed and cost-effectiveness. This collection focuses on models th...
Collections: Organization Shared Models
The **Organization Shared Models** collection is a flexible, team-managed collection that enables organizations to pool and share language models across all members. This collection uses a least-used ...
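The least-used strategy mentioned above can be sketched as a simple selection rule: pick the pooled model with the fewest recorded requests, then record the new request. The model names below are made up for illustration; the real collection's accounting is an assumption here.

```python
# Illustrative least-used routing over a shared model pool.
# Ties are broken deterministically by model name.

def route_least_used(usage_counts):
    """Pick the least-used model and increment its request counter."""
    chosen = min(usage_counts, key=lambda m: (usage_counts[m], m))
    usage_counts[chosen] += 1
    return chosen

usage = {"model-a": 3, "model-b": 1, "model-c": 2}
first = route_least_used(usage)   # "model-b" has the lowest count
second = route_least_used(usage)  # "model-b" and "model-c" now tie at 2 -> "model-b"
```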
DeepSeek R1
DeepSeek R1 is DeepSeek AI's first-generation reasoning model that achieves performance on par with OpenAI o1 across math, code, and reasoning tasks. It is fully open-source with MIT licensing, featur...
DeepSeek: DeepSeek Coder V2
DeepSeek Coder V2 is a code generation and understanding model for programming tasks. Developed by DeepSeek, this model provides solid performance and is suitable for most use cases.
DeepSeek: DeepSeek V3.2
**Model ID:** `deepseek/deepseek-v3.2`
DeepSeek: DeepSeek-V3.2 (Non-thinking Mode)
Main chat model with JSON output and tool calling capabilities. Optimized for conversational AI, general-purpose tasks, and integration with external tools and APIs.
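The tool-calling support noted above typically follows the common OpenAI-compatible request shape, sketched below. The `get_weather` tool definition is an illustrative assumption, not part of this model's documentation.

```python
# Sketch of a tool-calling chat request (OpenAI-compatible shape).

def build_tool_request(model_id, user_message, tools):
    """Build a chat payload that lets the model call the given tools."""
    return {
        "model": model_id,
        "messages": [{"role": "user", "content": user_message}],
        "tools": tools,
        "tool_choice": "auto",
    }

weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}
req = build_tool_request("deepseek/deepseek-v3.2", "Weather in Paris?", [weather_tool])
```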
DeepSeek: Reasoner
DeepSeek's specialized reasoning model designed for complex problem-solving, mathematical proofs, and logical analysis. Features extended thinking capabilities for step-by-step reasoning.
EVA Llama 3.33 70B
EVA Llama 3.33 70B is a roleplay and storywriting specialist model. It is a complete parameter fine-tune of the Llama-3.3-70B-Instruct base model using a mixture of synthetic and natural data. The mod...
Gemma 3 27B (free) - Complete Model Details
Google AI: Gemini Embedding
Gemini Embedding is a text embedding model for semantic search and vector-based tasks. Developed by Google AI, this model is optimized for its specific use case category.
Google Gemini 1.5 Flash
Gemini 1.5 Flash is a foundation model that performs well at a variety of multimodal tasks such as visual understanding, classification, summarization, and creating content from image, audio and video...
Google Gemini 2.0 Flash Experimental (Free)
Gemini Flash 2.0 offers a significantly faster time to first token (TTFT) compared to Gemini Flash 1.5, while maintaining quality on par with larger models like Gemini Pro 1.5. It introduces notable e...
Google Gemini 2.0 Flash Lite - Complete Model Details
Google: Codey Code Completion
Code completion model.
Google: Deep Research Pro Preview (Dec-12-2025)
A specialized research-focused model designed for deep analysis, literature review, and comprehensive research tasks. Optimized for synthesizing information from multiple sources and providing thoroug...
Google: Gecko Embedding
Lightweight embedding model.
Google: Gemini 1.0 Pro Vision
Gemini 1.0 with vision (deprecated).
Google: Gemini 1.5 Pro
Previous generation pro model.
Google: Gemini 2 Flash Thinking
Gemini 2 Flash with extended reasoning capabilities.
Google: Gemini 2.0 Flash
A cost-effective multimodal model optimized for general-purpose tasks requiring balanced performance. Gemini 2.0 Flash delivers strong capabilities across text, image, and code understanding while mai...
Google: Gemini 2.0 Flash (Image Generation) Experimental
An experimental version of Gemini 2.0 Flash with image generation capabilities. Combines text understanding with the ability to generate images based on prompts.
Google: Gemini 2.0 Flash 001
The stable version 001 release of Gemini 2.0 Flash, providing a cost-effective multimodal model for general-purpose tasks. This versioned release ensures consistent behavior and reproducible results f...
Google: Gemini 2.0 Flash Experimental
An experimental preview of Gemini 2.0 Flash featuring the latest capabilities and improvements. This version provides early access to new features while maintaining the fast inference speeds character...
Google: Gemini 2.0 Flash Preview Image Generation
A preview version of Gemini 2.0 Flash optimized for image generation tasks. Designed for creating visual content from text descriptions.
Google: Gemini 2.0 Flash Thinking Experimental
An experimental version of Gemini 2.0 Flash with enhanced reasoning capabilities. This model features a "thinking" mode that allows it to work through complex problems step-by-step before providing fi...
Google: Gemini 2.0 Flash Thinking Preview 01-21
A dated experimental version of Gemini 2.0 Flash with thinking capabilities from January 21, 2025. Features improved reasoning capabilities over earlier thinking model versions.
Google: Gemini 2.0 Flash Thinking Preview 12-19
A dated experimental version of Gemini 2.0 Flash with thinking capabilities from December 19, 2024. Provides enhanced reasoning through explicit step-by-step problem solving.
Google: Gemini 2.0 Flash-Lite
Streamlined and ultra-efficient model designed for simple, high-frequency tasks. Gemini 2.0 Flash-Lite prioritizes speed and affordability while maintaining essential multimodal capabilities.
Google: Gemini 2.0 Flash-Lite Preview
A preview version of Gemini 2.0 Flash-Lite, providing early access to streamlined capabilities optimized for high-frequency, simple tasks. This model supports multimodal capabilities including vision ...
Google: Gemini 2.0 Flash-Lite Preview 02-05
A dated preview version of Gemini 2.0 Flash-Lite from February 5, 2025. Provides a fixed checkpoint for reproducible results. This model supports multimodal capabilities including vision and image und...
Google: Gemini 2.0 Pro
Professional-grade Gemini 2.0 model.
Google: Gemini 2.0 Pro Experimental
An experimental version of Gemini 2.0 Pro offering higher capability than Flash variants. Designed for complex tasks requiring advanced reasoning, coding, and multimodal understanding.
Google: Gemini 2.0 Pro Experimental 02-05
A dated experimental version of Gemini 2.0 Pro from February 5, 2025. Provides high-capability performance for complex tasks with a specific model checkpoint.
Google: Gemini 2.0 Pro Vision
Vision-optimized Gemini 2.0 Pro.
Google: Gemini 2.5 Computer Use Preview 10-2025
A specialized preview model designed for computer use and automation tasks. Enables AI-driven interaction with computer interfaces, including clicking, typing, and navigating applications.
Google: Gemini 2.5 Flash
Lightning-fast and highly capable model that delivers a balance of intelligence and latency. Gemini 2.5 Flash offers controllable thinking budgets for versatile applications, making it ideal for a wid...
Google: Gemini 2.5 Flash Image (Nano Banana)
A specialized image-focused variant of Gemini 2.5 Flash, codenamed Nano Banana. Optimized for image understanding and generation tasks with fast inference.
Google: Gemini 2.5 Flash Image Preview (Nano Banana)
A preview version of the image-focused Gemini 2.5 Flash variant, codenamed Nano Banana. Provides early access to enhanced image capabilities.
Google: Gemini 2.5 Flash Preview
Preview of Gemini 2.5 Flash.
Google: Gemini 2.5 Flash Preview 05-20
A dated preview of Gemini 2.5 Flash from May 20, 2025. Provides a fixed model checkpoint for reproducible experiments and applications.
Google: Gemini 2.5 Flash Preview Sep 2025
A September 2025 preview of Gemini 2.5 Flash with the latest capabilities and improvements before stable release. This model supports multimodal capabilities including vision and image understanding.
Google: Gemini 2.5 Flash Preview TTS
A specialized preview of Gemini 2.5 Flash with text-to-speech capabilities. Combines fast inference with high-quality voice synthesis for real-time audio applications.
Google: Gemini 2.5 Flash-Lite
Built for massive scale, Gemini 2.5 Flash-Lite balances cost and performance for high-throughput tasks. Optimized for efficiency without sacrificing multimodal capabilities.
Google: Gemini 2.5 Flash-Lite Preview 06-17
A dated preview version of Gemini 2.5 Flash-Lite from June 17, 2025. Optimized for efficiency and high-throughput tasks with a fixed checkpoint.
Google: Gemini 2.5 Flash-Lite Preview Sep 2025
A September 2025 preview of Gemini 2.5 Flash-Lite, optimized for efficiency and cost-effectiveness in high-throughput applications. This model supports multimodal capabilities including vision and ima...
Google: Gemini 2.5 Pro
Google's high-capability model for complex reasoning and coding. Features adaptive thinking and a 1 million token context window, designed for complex agentic and multimodal challenges. Gemini 2.5 Pro...
Google: Gemini 2.5 Pro Preview 03-25
A dated preview version of Gemini 2.5 Pro from March 25, 2025. Provides access to advanced capabilities with a fixed model checkpoint for reproducibility.
Google: Gemini 2.5 Pro Preview 06-05
Gemini 2.5 Pro is Google's latest and most capable model, featuring a massive 1 million token context window. This preview version (06-05) represents the cutting edge of Google's multimodal AI capabil...
Google: Gemini 2.5 Pro Preview TTS
A specialized preview of Gemini 2.5 Pro with text-to-speech capabilities. Designed for applications requiring high-quality voice synthesis alongside advanced language understanding.
Google: Gemini 3 Flash Preview
Fast Gemini 3 for speed and efficiency.
Google: Gemini 3 Mobile
Lightweight Gemini 3 optimized for mobile devices.
Google: Gemini 3 Opus
High-end model for demanding applications.
Google: Gemini 3 Pro Preview
Latest flagship Gemini model with advanced reasoning and multimodal capabilities.
Google: Gemini 3.5 Sonnet
Balanced mid-tier Gemini model.
Google: Gemini Audio Understanding
Audio analysis model.
Google: Gemini Code Reasoning
Advanced code analysis and generation model.
Google: Gemini Document Understanding
Specialized model for document processing and extraction.
Google: Gemini Experimental 1206
An experimental Gemini model from December 6, 2024. Provides early access to new capabilities and improvements in development. This model supports multimodal capabilities including vision and image un...
Google: Gemini Flash Latest
Gemini Flash Latest with optimized speed for rapid response generation. This model supports multimodal capabilities including vision and image understanding.
Google: Gemini Flash-Lite Latest
The latest stable version of Gemini Flash-Lite, automatically updated to the most recent stable release. Optimized for efficiency and high-throughput tasks.
Google: Gemini Image Generation 001
Image generation from text.
Google: Gemini Multimodal Live
Real-time streaming multimodal model.
Google: Gemini Nano
Smallest Gemini model.
Google: Gemini Pro Latest
The latest stable version of Gemini Pro, automatically updated to the most recent stable release. Provides high-capability performance for complex tasks.
Google: Gemini Reasoning Engine
Experimental advanced reasoning engine for complex tasks.
Google: Gemini Robotics-ER 1.5 Preview
A specialized model for robotics and embodied reasoning (ER) applications. Designed to understand and reason about physical environments, robot actions, and spatial relationships.
Google: Gemini Text Embedding 004
Latest text embedding model.
Google: Gemini Video Understanding
Video analysis and understanding.
Google: Gemma 1.1 7B Instruct-Tuned
Previous Gemma generation 7B model.
Google: Gemma 2 27B Instruct-Tuned
Previous generation 27B model.
Google: Gemma 2 9B Instruct-Tuned
Google: Gemma 3 12B Instruct-Tuned
Compact 12B instruction-tuned model.
Google: Gemma 3 1B
A lightweight, instruction-tuned version of Gemma 3 with 1 billion parameters. Designed for edge deployment and resource-constrained environments while maintaining good instruction-following capabilit...
Google: Gemma 3 27B Instruct-Tuned
Open-source 27B instruction-tuned model.
Google: Gemma 3 4B Instruct-Tuned
Ultra-lightweight 4B model.
Google: Gemma 3 Long Context 27B
Extended context version of Gemma 3 27B supporting up to 1M token context.
Google: Gemma 3n E2B
An efficient variant of Gemma 3 with approximately 2B equivalent parameters, optimized for edge deployment and mobile applications. Part of the Gemma 3n efficient model series.
Google: Gemma 3n E4B
An efficient variant of Gemma 3 with approximately 4B equivalent parameters, balancing capability and efficiency for edge and mobile deployment.
Google: LearnLM 2.0 Flash Experimental
An experimental model designed specifically for educational applications. LearnLM is optimized for tutoring, explanation generation, and adaptive learning interactions. This model supports multimodal ...
Google: Multimodal Understanding Pro
Advanced multimodal model.
Google: Nano Banana Pro
A high-capability image-focused model codenamed Nano Banana Pro. Designed for advanced image understanding and generation with professional-grade quality. This model supports multimodal capabilities i...
Google: Nano Banana Pro (Gemini 3 Pro Image Preview)
A high-capability image-focused model codenamed Nano Banana Pro. Part of the Gemini 3 Pro family with specialized capabilities for advanced image understanding and generation.
Google: PaLM 2 Chat Bison
Legacy PaLM chat model (deprecated).
Google: PaLM 2 Text Bison
Legacy PaLM text model (deprecated).
Groq: Allam 2 7B
Allam 2 is a 7 billion parameter Arabic language model optimized for Arabic text understanding and generation. Hosted on Groq's LPU infrastructure for ultra-fast inference.
Groq: Claude 3.5 Sonnet
Groq: Command Nightly
Command Nightly is a language model served on Groq infrastructure, offering advanced capabilities for natural language processing tasks.
Groq: Compound
An AI system powered by openly available models that intelligently and selectively uses built-in tools to answer user queries, including web search and code execution. Compound represents Groq's appro...
Groq: Compound Mini
A lightweight version of Groq Compound with built-in web search and code execution capabilities. Optimized for faster responses and cost efficiency while maintaining agentic capabilities.
Groq: DeepSeek R1 Distill Llama 70B
DeepSeek R1 Distill Llama 70B is a Llama 70B model distilled from DeepSeek R1's reasoning outputs, served on Groq infrastructure for fast inference.
Groq: Deprecated Model 123
This model has been deprecated and is no longer recommended for new deployments. It is retained in the system for backward compatibility only. Please migrate to current model versions for production u...
Groq: Distil Whisper Large V3 EN
Distil Whisper Large V3 EN is a distilled, English-optimized variant of Whisper Large V3 for speech-to-text transcription, served on Groq infrastructure.
Groq: Gemma 2 9B IT
Gemma 2 9B IT is Google's instruction-tuned Gemma 2 9B model, served on Groq infrastructure.
Groq: Gemma 7B IT
Gemma 7B IT is Google's instruction-tuned Gemma 7B model, served on Groq infrastructure.
Groq: GPT OSS 120B 128k
GPT OSS 120B 128k is a conversational AI model designed for multi-turn dialogue and interactive tasks. Hosted on Groq, this model is optimized for its specific use case category.
Groq: GPT OSS 20B 128k
GPT OSS 20B 128k is a conversational AI model designed for multi-turn dialogue and interactive tasks. Hosted on Groq, this model is optimized for its specific use case category.
Groq: GPT OSS Safeguard 20B
GPT OSS Safeguard 20B is a content moderation model for safety and policy compliance checking. Hosted on Groq, this model is optimized for its specific use case category.
Groq: GPT-4 Turbo
GPT-4 Turbo is a language model served on Groq infrastructure, offering advanced capabilities for natural language processing tasks.
Groq: GPT-4 Vision Preview
GPT-4 Vision Preview is a vision-capable language model served on Groq infrastructure.
Groq: GPT-4o Mini
GPT-4o Mini is a compact language model served on Groq infrastructure for natural language processing tasks.
Groq: GPT-OSS 120B
OpenAI's flagship open-weight language model with 120 billion parameters, hosted on Groq infrastructure. Features built-in browser search and code execution capabilities with reasoning.
Groq: GPT-OSS 20B
A lightweight 20 billion parameter version of OpenAI's open-weight GPT-OSS model. Optimized for fast inference on Groq infrastructure.
Groq: GPT-OSS Safeguard 20B
A safety-focused 20 billion parameter model based on GPT-OSS. Fine-tuned for content moderation and safety-critical applications.
Groq: Kimi K2 1T 256k
Kimi K2 1T 256k is a conversational AI model designed for multi-turn dialogue and interactive tasks. Hosted on Groq, this model is optimized for its specific use case category.
Groq: Kimi K2 Instruct
MoonshotAI's Kimi K2 instruction-tuned model, hosted on Groq infrastructure for fast inference. Designed for general-purpose instruction following with strong multilingual capabilities.
Groq: Kimi K2 Instruct 0905
A dated version of MoonshotAI's Kimi K2 instruction model from September 5, 2025. Features an extended context window for longer conversations.
Groq: Llama 3.1 8B Instant
Groq: Llama 3.3 70B Speculative Decoding
Groq: Llama 3.3 70B Versatile
Groq: Llama 4 Maverick
Llama 4 Maverick is a conversational AI model designed for multi-turn dialogue and interactive tasks. Hosted on Groq, this flagship model represents the latest capabilities and state-of-the-art per...
Groq: Llama 4 Maverick 17B 128E Instruct
Meta's Llama 4 Maverick model with 17 billion parameters and 128 experts, optimized for instruction following. Hosted on Groq for ultra-fast inference speeds.
Groq: Llama 4 Scout
Llama 4 Scout is a conversational AI model designed for multi-turn dialogue and interactive tasks. Hosted on Groq, this model provides solid performance and is suitable for most use cases.
Groq: Llama 4 Scout 17B 16E Instruct
Meta's Llama 4 Scout model with 17 billion parameters and 16 experts. A more efficient MoE variant designed for fast, lightweight inference on Groq infrastructure.
Groq: Llama Guard 4 12B
Meta's content moderation and safety model with 12 billion parameters. Designed for detecting harmful, toxic, or unsafe content in AI outputs. Hosted on Groq for ultra-fast moderation.
Groq: Llama Prompt Guard 2 22M
A lightweight prompt injection detection model with 22 million parameters. Designed to identify and prevent prompt injection attacks in real-time with minimal latency.
Groq: Llama Prompt Guard 2 86M
A larger prompt injection detection model with 86 million parameters. Offers improved accuracy over the 22M variant while maintaining fast inference speeds.
Groq: Mixtral 8x7B (Extended)
Mixtral 8x7B (Extended) is a mixture-of-experts language model served on Groq infrastructure.
Groq: Mixtral 8x7B 32K
Mixtral 8x7B 32K is a mixture-of-experts language model with a 32K context window, served on Groq infrastructure.
Groq: Neural Chat 7B v3.1
Groq: OpenChat 3.5
Groq: Orpheus Arabic Saudi
A specialized Arabic language model from Canopy Labs focused on Saudi Arabian dialect and culture. Optimized for Saudi Arabic text generation and understanding.
Groq: Orpheus V1 English
Canopy Labs' Orpheus V1 model for English language tasks. A lightweight model optimized for fast inference on Groq infrastructure.
Groq: PaLM 2 Chat Bison 32K
PaLM 2 Chat Bison 32K is a legacy PaLM 2 chat model with a 32K context window, served on Groq infrastructure.
Groq: PaLM 2 CodeChat Bison
PaLM 2 CodeChat Bison is a legacy PaLM 2 chat model focused on code, served on Groq infrastructure.
Groq: PlayAI TTS
PlayAI's text-to-speech model hosted on Groq for ultra-fast audio generation. Provides high-quality voice synthesis with low latency.
Groq: PlayAI TTS Arabic
PlayAI's Arabic-specialized text-to-speech model hosted on Groq. Optimized for high-quality Arabic voice synthesis.
Groq: Qwen 2 72B 4-bit
Qwen 2 72B 4-bit is a 4-bit quantized build of Qwen 2 72B, served on Groq infrastructure.
Groq: Qwen 2 7B 4-bit
Qwen 2 7B 4-bit is a 4-bit quantized build of Qwen 2 7B, served on Groq infrastructure.
Groq: Qwen3 32B
Alibaba's Qwen3 32 billion parameter model hosted on Groq infrastructure. Offers strong multilingual capabilities with fast inference. The model is optimized for code generation and programming tasks....
Groq: Solar 10.7B Instruct v1
Groq: Whisper Large V3
OpenAI's Whisper Large V3 speech-to-text model hosted on Groq for ultra-fast transcription. Supports multiple languages and audio formats.
Groq: Whisper Large V3 Turbo
Whisper Large V3 Turbo is a faster, lighter variant of OpenAI's Whisper Large V3 speech-to-text model, served on Groq infrastructure.
Groq: Whisper V3 Large
Whisper V3 Large is an audio processing model for speech-to-text transcription and audio understanding. Hosted on Groq, this model is optimized for its specific use case category.
Groq: Yi 34B Chat 3 32K
Yi 34B Chat 3 32K is a 34B-parameter chat model from 01.AI with a 32K context window, served on Groq infrastructure.
MythoMax 13B
One of the highest-performing and most popular fine-tunes of Llama 2 13B, with rich descriptions and roleplay. This is a merged model that combines multiple fine-tuning approaches to achieve ...
Inflection 3 Pi
Inflection 3 Pi powers Inflection's Pi chatbot, providing backstory, emotional intelligence, productivity, and safety features. It has access to recent news, and excels in scenarios like customer support and r...
Psyfighter v2 13B
A specialized merged model designed for enhanced fictional storytelling with supplementary medical knowledge. The model combines three base models to balance creative narrative generation with anatomi...
LZLV ARPO-34B
Llama 4 Scout - Model Details
Llama Guard 3 8B
Meta Llama 2 13B Chat
Meta Llama 2 13B Chat is a 13 billion parameter language model fine-tuned specifically for chat completions and conversational tasks. This is Meta's open-source contribution designed for dialogue-base...
Meta Llama 3.1 405B Instruct
Llama 3.1 405B Instruct is Meta's largest and most capable open-source language model, representing their flagship offering in the Llama 3.1 series. This 405-billion parameter model features a 128K to...
Meta Llama 3.1 8B Instruct
Llama 3.1 8B Instruct is part of Meta's latest class of language models, offering a balance between efficiency and capability. This 8-billion parameter instruction-tuned variant emphasizes speed and e...
Meta Llama 3.3 70B Instruct
Llama 3.3 70B Instruct is a pretrained and instruction-tuned generative model optimized for multilingual dialogue use cases. It outperforms many available open source and closed chat models on common ...
Meta: Llama 2 70B Chat
Meta: Llama 3.1 70B Instruct
Meta's latest language model offering comes in various sizes. This 70B parameter instruct-tuned variant targets high-quality dialogue applications. The model has demonstrated strong performance compar...
Meta: Llama 3.2 1B Instruct
Llama 3.2 1B is a 1-billion-parameter language model optimized for efficiently performing natural language tasks such as summarization, dialogue, and multilingual text analysis. With its smaller size,...
Meta: Llama 3.2 3B Instruct
Llama 3.2 3B is a 3-billion-parameter multilingual large language model optimized for advanced natural language processing tasks including dialogue generation, complex reasoning, and text summarizatio...
Meta: Llama 3.2 90B Vision Instruct
A 90-billion-parameter multimodal model excelling at visual reasoning and language tasks. The model handles image captioning, visual question answering, and advanced image-text comprehension through p...
Meta: Llama 4 Maverick
A high-capacity multimodal language model from Meta, built on a mixture-of-experts (MoE) architecture with 128 experts and 17 billion active parameters per forward pass (400B total parameters). The mo...
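The mixture-of-experts design described above can be illustrated with a toy router: each token's gating scores select a small top-k subset of experts, so only a fraction of the total parameter count is active per forward pass. The numbers below are illustrative only, not Maverick's actual gating.

```python
import math

def top_k_experts(gate_scores, k=1):
    """Pick the k highest-scoring experts and softmax-normalize
    their weights. In a 128-expert MoE like the one described,
    only these k experts run for a given token."""
    ranked = sorted(range(len(gate_scores)),
                    key=lambda i: gate_scores[i], reverse=True)[:k]
    exps = [math.exp(gate_scores[i]) for i in ranked]
    total = sum(exps)
    return {i: e / total for i, e in zip(ranked, exps)}

# Active-parameter intuition: per-token compute covers the shared
# layers plus only the routed experts, which is how a 400B-total
# model can run with roughly 17B active parameters per forward pass.
weights = top_k_experts([0.1, 2.0, -1.0, 0.5], k=2)
```

Here experts 1 and 3 win the routing, and their normalized weights sum to one.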
Microsoft Phi-3 Medium 128K Instruct
Phi-3 128K Medium is a powerful 14-billion parameter model designed for advanced language understanding, reasoning, and instruction following. The model excels in common sense reasoning, mathematics, ...
Microsoft Phi-4
Phi-4 targets "complex reasoning tasks and operates efficiently with limited memory or when quick responses are needed." The 14-billion parameter model trained on synthetic datasets, curated websites,...
WizardLM-2 8x22B
Microsoft's most advanced Wizard model, described as demonstrating "highly competitive performance compared to leading proprietary models" and consistently outperforming existing open-source alternati...
MiniMax M2.1
A lightweight, state-of-the-art language model with 10 billion activated parameters, optimized for coding, agentic workflows, and application development. The model delivers cleaner, more concise outp...
Mistral 7B Instruct
A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length. Mistral 7B Instruct has multiple version variants, and this endpoint is intended to be the l...
Mistral AI Codestral
Codestral is Mistral AI's cutting-edge language model explicitly designed for code generation tasks. It is Mistral's inaugural code-specific generative model, representing an open-weight generative AI...
Mistral AI: Codestral
Codestral is a code generation and understanding model for programming tasks. Developed by Mistral AI, this model is optimized for its specific use case category.
Mistral AI: Codestral Embed
Codestral Embed is a text embedding model for semantic search and vector-based tasks. Developed by Mistral AI, this model is optimized for its specific use case category.
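An embeddings request for a model like this can be sketched as follows. The endpoint path matches Mistral's embeddings API shape; treat the model id `codestral-embed` as an assumption to check against Mistral's current model list.

```python
# Sketch of a batch embedding request to Mistral's API.
# The model id "codestral-embed" is assumed from this entry's name;
# verify it against Mistral's published model ids.

MISTRAL_EMBED_URL = "https://api.mistral.ai/v1/embeddings"

def build_embedding_request(snippets: list[str], api_key: str) -> dict:
    """Return headers and JSON body for a batch embedding call;
    the caller performs the HTTP POST and reads back one vector
    per input string."""
    return {
        "url": MISTRAL_EMBED_URL,
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "json": {"model": "codestral-embed", "input": snippets},
    }
```

The returned vectors can then be indexed for semantic code search.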
Mistral AI: Devstral 2
Devstral 2 is a code generation and understanding model for programming tasks. Developed by Mistral AI, this model is optimized for its specific use case category.
Mistral AI: Devstral Small 2
Devstral Small 2 is a code generation and understanding model for programming tasks. Developed by Mistral AI, this model is optimized for its specific use case category.
Mistral AI: Magistral Medium
Magistral Medium is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Mistral AI, this model is optimized for its specific use case category.
Mistral AI: Magistral Small
Magistral Small is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Mistral AI, this model is optimized for its specific use case category.
Mistral AI: Ministral 14B
Ministral 14B is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Mistral AI, this model is optimized for its specific use case category.
Mistral AI: Ministral 3B
Ministral 3B is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Mistral AI, this model is optimized for its specific use case category.
Mistral AI: Ministral 8B
Ministral 8B is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Mistral AI, this model is optimized for its specific use case category.
Mistral AI: Mistral Embed
Mistral Embed is a text embedding model for semantic search and vector-based tasks. Developed by Mistral AI, this model is optimized for its specific use case category.
Mistral AI: Mistral Large 3
Mistral Large 3 is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Mistral AI, this flagship model represents the latest capabilities and state-of-the-ar...
Mistral AI: Mistral Medium 3
Mistral Medium 3 is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Mistral AI, this premium model offers excellent quality and balanced performance acro...
Mistral AI: Mistral Moderation
Mistral Moderation is a content moderation model for safety and policy compliance checking. Developed by Mistral AI, this model is optimized for its specific use case category.
Mistral AI: Mistral OCR
Mistral OCR is a document-understanding model for optical character recognition, extracting text and layout from scanned pages, images, and PDFs. Developed by Mistral AI.
Mistral AI: Mistral Small
Mistral Small is a 22-billion parameter model serving as a convenient mid-point between smaller and larger Mistral options. It emphasizes reasoning capabilities, code generation, and multilingual supp...
Mistral AI: Mistral Small 3.2
Mistral Small 3.2 is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Mistral AI, this model provides solid performance and is suitable for most use cases...
Mistral AI: Mistral Small Creative
Mistral Small Creative is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Mistral AI, this model is optimized for its specific use case category.
Mistral AI: Voxtral Mini
Voxtral Mini is an audio model for speech transcription and spoken-audio understanding. Developed by Mistral AI as the compact member of the Voxtral family.
Mistral AI: Voxtral Small
Voxtral Small is an audio model for speech transcription and spoken-audio understanding. Developed by Mistral AI.
Mistral Large
Mistral Large is Mistral AI's flagship offering. The model excels at reasoning, code generation, JSON handling, and chat applications. It is a proprietary model with support for dozens of languages in...
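The JSON handling called out above can be exercised through a JSON-mode chat request, sketched below. The `response_format` field follows Mistral's JSON-mode convention; confirm the exact model alias (here assumed to be `mistral-large-latest`) against current Mistral documentation.

```python
# Sketch of a chat request to Mistral Large asking for strict JSON.
# The alias "mistral-large-latest" is an assumption to verify.

def build_json_chat_request(user_prompt: str, api_key: str) -> dict:
    """Return headers and JSON body for a chat completion that
    forces the model to reply with a valid JSON object."""
    return {
        "url": "https://api.mistral.ai/v1/chat/completions",
        "headers": {"Authorization": f"Bearer {api_key}",
                    "Content-Type": "application/json"},
        "json": {
            "model": "mistral-large-latest",
            "messages": [{"role": "user", "content": user_prompt}],
            "response_format": {"type": "json_object"},  # JSON mode
        },
    }
```

Prompts should still describe the desired JSON shape; JSON mode guarantees well-formed output, not a particular schema.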
Mistral Medium Model Documentation
A closed-source, medium-sized model from Mistral AI that excels at reasoning, code, JSON, chat, and more. This model performs comparably to other companies' flagship models and represents Mistral's mi...
Mistral Nemo
A 12-billion parameter model featuring a 128k token context window, developed by Mistral in partnership with NVIDIA. The model supports multiple languages including English, French, German, Spanish, I...
Mistral Small Creative
Mistral Small Creative is an experimental small model oriented toward creative writing tasks.
Mistral: Devstral 2 2512
**Model ID:** `mistralai/devstral-2512`
Mistral: Devstral 2 2512 (Free)
Devstral 2 is a state-of-the-art open-source model by Mistral AI specializing in agentic coding. It is a 123B-parameter dense transformer model supporting a 256K context window.
Mistral: Mixtral 8x7B Instruct
Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture of Experts model with 8 experts totaling 47 billion parameters. It has been fine-tuned by Mistral AI specifically for chat and instructi...
Mistral: Pixtral 12B
The first multi-modal, text+image-to-text model from Mistral AI. Its weights were launched via torrent, making it openly available for research and commercial use.
Nous Research: Hermes 3 70B Instruct
Hermes 3 is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coheren...
Nous: Hermes 2 Mistral 7B DPO
This represents the primary 7B variant in the Hermes lineup, utilizing Direct Preference Optimization refinement. It's derived from Teknium/OpenHermes-2.5-Mistral-7B and demonstrates "improvement acro...
Nous: Hermes 2 Mixtral 8x7B DPO
Nous Hermes 2 Mixtral 8x7B DPO is the flagship Nous Research model trained over the Mixtral 8x7B MoE LLM. The model was trained on over 1,000,000 entries of primarily GPT-4 generated data, as well as ...
Nous: Hermes 2 Vision 7B (Alpha)
Nous: Hermes 2 Vision 7B is an alpha-stage vision-language model that extends the capabilities of OpenHermes-2.5 by incorporating visual perception abilities. The model was developed using a specializ...
Nous: Hermes 3 405B Instruct
Hermes 3 is a generalist language model with significant improvements over its predecessor Hermes 2. It is a full-parameter finetune of Llama-3.1 405B, making it one of the largest openly available in...
Nous: Hermes 3.1 Llama 3.1 405B
Nous Hermes 3.1 is an advanced iterative refinement of the Hermes 3 model family, built on top of Llama-3.1 405B. This model represents the cutting edge of open-source instruction-tuned models with en...
NVIDIA Llama 3.1 Nemotron 70B Instruct
This model combines the Llama 3.1 70B architecture with Reinforcement Learning from Human Feedback (RLHF) to excel in automatic alignment benchmarks. It is designed for generating precise and useful r...
Ollama: Embeddinggemma:300m
Embeddinggemma:300m is a compact 300M-parameter embedding model optimized for generating high-quality text embeddings for search and retrieval.
Ollama: Functiongemma:270m
Functiongemma:270m is a compact 270M-parameter Gemma-family model served via Ollama, tuned for function calling and structured tool use.
Ollama: Gemma3:1b
Gemma3:1b is the 1-billion-parameter member of the Gemma 3 family served via Ollama; this size is text-only and suited to lightweight text generation tasks.
Ollama: Llama 3.1 8B Instruct
Llama 3.1 8B Instruct is Meta's state-of-the-art instruction-tuned language model with 8 billion parameters. It's a compact yet powerful model designed for general-purpose conversational AI, reasoning...
Ollama: Mistral 7B Instruct
Mistral 7B Instruct is Mistral AI's powerful 7-billion parameter instruction-tuned language model, renowned for exceptional efficiency and speed. Despite having only 7B parameters, it achieves perform...
Ollama: Qwen2.5 7B Instruct
Qwen2.5 7B Instruct is Alibaba's latest-generation instruction-tuned language model with 7.6 billion parameters, representing a significant upgrade to the Qwen family. Built on 18 trillion tokens of d...
OpenAI GPT-4 32K Model Documentation
**Source**: OpenRouter (https://langmart.ai/model-docs)
OpenAI GPT-4 Vision Model Specifications
**Last Updated:** December 24, 2025
OpenAI o1-preview
OpenAI o1-preview is a reasoning-focused model designed to "spend more time thinking before responding." It employs chain-of-thought reasoning with self-fact-checking capabilities, making it particula...
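Requests to o1-preview differ from earlier GPT-era calls: hidden reasoning tokens are billed as output, so the budget is set with `max_completion_tokens` rather than `max_tokens`, and the preview release did not accept system messages or sampling parameters such as `temperature`. A request shaped under those constraints:

```python
# Sketch of a Chat Completions request body for o1-preview.
# Per OpenAI's o1 guidance: use max_completion_tokens (reasoning
# tokens count against it), and omit system messages and temperature,
# which the preview release rejected.

def build_o1_request(question: str) -> dict:
    return {
        "model": "o1-preview",
        "messages": [{"role": "user", "content": question}],
        # Budget must cover hidden reasoning tokens plus the visible
        # answer, so it is set generously.
        "max_completion_tokens": 4096,
    }

req = build_o1_request("Prove that the sum of two odd numbers is even.")
```

Setting the budget too low can yield a response whose tokens were spent entirely on reasoning, leaving an empty visible answer.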
OpenAI: Babbage 002
Babbage 002 is a lightweight legacy text-completion model for simple generation and classification tasks via the Completions API; it does not support chat formatting, vision, or tool use.
OpenAI: Chatgpt 4O Latest
A continuously updated alias that tracks the GPT-4o version currently used in ChatGPT. Supports text and image inputs.
OpenAI: Chatgpt Image Latest
An alias tracking the image-generation model currently used in ChatGPT for creating images from text prompts.
OpenAI: Codex Mini Latest
A compact coding-focused model tracking the latest Codex mini release, optimized for fast code generation and software-engineering tasks.
OpenAI: Computer Use Preview
Computer Use Preview is an agentic model that perceives screenshots and emits mouse and keyboard actions, enabling automated operation of graphical interfaces.
OpenAI: DALL-E 2
DALL-E 2 is an image generation model that creates and edits images from natural-language descriptions. Developed by OpenAI.
OpenAI: DALL-E 3
DALL-E 3 is an image generation model that creates images from natural-language descriptions, with stronger prompt adherence than DALL-E 2. Developed by OpenAI.
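An image generation call for DALL-E 3 can be sketched as below. The endpoint, model id, and size option follow OpenAI's images API; check current documentation for the full set of supported values.

```python
# Sketch of a DALL-E 3 request to OpenAI's image generation endpoint.

def build_dalle3_request(prompt: str, api_key: str) -> dict:
    """Return headers and JSON body for a single image generation;
    the caller performs the HTTP POST and receives image URLs or
    base64 data in the response."""
    return {
        "url": "https://api.openai.com/v1/images/generations",
        "headers": {"Authorization": f"Bearer {api_key}",
                    "Content-Type": "application/json"},
        "json": {
            "model": "dall-e-3",
            "prompt": prompt,
            "n": 1,                 # DALL-E 3 generates one image per call
            "size": "1024x1024",
        },
    }
```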
OpenAI: Davinci 002
Davinci 002 is a legacy text-completion model retained for older applications using the Completions API; it does not support chat formatting, vision, or tool use.
OpenAI: Gpt 4O Audio Preview 2024 12 17
Dated snapshot (2024-12-17) of the GPT-4o audio preview, which accepts and generates both text and audio in the Chat Completions API.
OpenAI: Gpt 4O Audio Preview 2025 06 03
Dated snapshot (2025-06-03) of the GPT-4o audio preview.
OpenAI: Gpt 4O Mini Audio Preview
A smaller, lower-cost audio preview model that accepts and generates text and audio.
OpenAI: Gpt 4O Mini Audio Preview 2024 12 17
Dated snapshot (2024-12-17) of the GPT-4o mini audio preview.
OpenAI: Gpt 4O Mini Realtime Preview
A low-latency speech-to-speech model for the Realtime API, designed for live voice conversations at lower cost.
OpenAI: Gpt 4O Mini Realtime Preview 2024 12 17
Dated snapshot (2024-12-17) of the GPT-4o mini realtime preview.
OpenAI: Gpt 4O Mini Search Preview
A GPT-4o mini variant with built-in web search, returning answers grounded in up-to-date web results.
OpenAI: Gpt 4O Mini Search Preview 2025 03 11
Dated snapshot (2025-03-11) of the GPT-4o mini search preview.
OpenAI: Gpt 4O Mini Transcribe
A speech-to-text model that transcribes audio input; a lower-cost alternative to gpt-4o-transcribe.
OpenAI: Gpt 4O Mini Transcribe 2025 03 20
Dated snapshot (2025-03-20) of gpt-4o-mini-transcribe.
OpenAI: Gpt 4O Mini Transcribe 2025 12 15
Dated snapshot (2025-12-15) of gpt-4o-mini-transcribe.
OpenAI: Gpt 4O Realtime Preview
A speech-to-speech model for the Realtime API, supporting low-latency voice conversations with text and audio input and output.
OpenAI: Gpt 4O Realtime Preview 2024 10 01
Dated snapshot (2024-10-01) of the GPT-4o realtime preview.
OpenAI: Gpt 4O Realtime Preview 2024 12 17
Dated snapshot (2024-12-17) of the GPT-4o realtime preview.
OpenAI: Gpt 4O Realtime Preview 2025 06 03
Dated snapshot (2025-06-03) of the GPT-4o realtime preview.
OpenAI: Gpt 4O Search Preview
A GPT-4o variant with built-in web search, returning answers grounded in up-to-date web results.
OpenAI: Gpt 4O Search Preview 2025 03 11
Dated snapshot (2025-03-11) of the GPT-4o search preview.
OpenAI: Gpt 4O Transcribe
A speech-to-text model based on GPT-4o that transcribes audio into text.
OpenAI: Gpt 4O Transcribe Diarize
A transcription variant that also performs speaker diarization, labeling which speaker said each segment.
OpenAI: Gpt 5
OpenAI's flagship GPT-5 model, accepting text and image inputs with strong reasoning for complex problem-solving.
OpenAI: Gpt 5 2025 08 07
Dated snapshot (2025-08-07) of gpt-5.
OpenAI: Gpt 5 Chat Latest
An alias tracking the GPT-5 version currently used in ChatGPT.
OpenAI: Gpt 5 Codex
A GPT-5 variant optimized for agentic coding workflows in Codex.
OpenAI: Gpt 5 Mini
A smaller, faster, lower-cost member of the GPT-5 family.
OpenAI: Gpt 5 Mini 2025 08 07
Dated snapshot (2025-08-07) of gpt-5-mini.
OpenAI: Gpt 5 Nano
The smallest, fastest GPT-5 tier, intended for high-volume, latency-sensitive workloads.
OpenAI: Gpt 5 Nano 2025 08 07
Dated snapshot (2025-08-07) of gpt-5-nano.
OpenAI: Gpt 5 Pro
A higher-compute GPT-5 tier that spends more reasoning effort on harder problems.
OpenAI: Gpt 5 Pro 2025 10 06
Dated snapshot (2025-10-06) of gpt-5-pro.
OpenAI: Gpt 5 Search Api
A GPT-5 variant with built-in web search exposed through the API.
OpenAI: Gpt 5 Search Api 2025 10 14
Dated snapshot (2025-10-14) of gpt-5-search-api.
OpenAI: Gpt 5.1
Successor to GPT-5 with improved instruction following and reasoning; accepts text and image inputs.
OpenAI: Gpt 5.1 2025 11 13
Dated snapshot (2025-11-13) of gpt-5.1.
OpenAI: Gpt 5.1 Chat Latest
An alias tracking the GPT-5.1 version currently used in ChatGPT.
OpenAI: Gpt 5.1 Codex
A GPT-5.1 variant optimized for agentic coding workflows in Codex.
OpenAI: Gpt 5.1 Codex Max
A higher-capability Codex variant of GPT-5.1 for long-running, complex coding tasks.
OpenAI: Gpt 5.1 Codex Mini
A smaller, faster Codex variant of GPT-5.1.
OpenAI: Gpt 5.2
Successor to GPT-5.1 with further capability improvements; accepts text and image inputs.
OpenAI: Gpt 5.2 2025 12 11
Dated snapshot (2025-12-11) of gpt-5.2.
OpenAI: Gpt 5.2 Chat Latest
An alias tracking the GPT-5.2 version currently used in ChatGPT.
OpenAI: Gpt 5.2 Pro
A higher-compute GPT-5.2 tier that spends more reasoning effort on harder problems.
OpenAI: Gpt 5.2 Pro 2025 12 11
Dated snapshot (2025-12-11) of gpt-5.2-pro.
OpenAI: Gpt Audio
An audio-capable model that accepts and generates speech alongside text in the Chat Completions API.
OpenAI: Gpt Audio 2025 08 28
Dated snapshot (2025-08-28) of gpt-audio.
OpenAI: Gpt Audio Mini
A smaller, lower-cost audio model that accepts and generates speech alongside text.
OpenAI: Gpt Audio Mini 2025 10 06
Dated snapshot (2025-10-06) of gpt-audio-mini.
OpenAI: Gpt Audio Mini 2025 12 15
Dated snapshot (2025-12-15) of gpt-audio-mini.
OpenAI: GPT Image 1
OpenAI's first-generation image generation model integrated with GPT capabilities. Enables text-to-image generation with natural language understanding. This model supports multimodal capabilities inc...
OpenAI: GPT Image 1 Mini
A lightweight version of OpenAI's GPT Image 1, optimized for faster generation and lower cost while maintaining good quality. This model supports multimodal capabilities including vision and image und...
OpenAI: GPT Image 1.5
OpenAI's enhanced image generation model with improved quality, better prompt understanding, and more detailed outputs compared to GPT Image 1.
OpenAI: Gpt Realtime
A speech-to-speech model for the Realtime API, supporting low-latency voice conversations with audio input and output.
OpenAI: Gpt Realtime 2025 08 28
Dated snapshot (2025-08-28) of gpt-realtime.
OpenAI: Gpt Realtime Mini
A smaller, lower-cost realtime speech-to-speech model.
OpenAI: Gpt Realtime Mini 2025 10 06
Dated snapshot (2025-10-06) of gpt-realtime-mini.
OpenAI: Gpt Realtime Mini 2025 12 15
Dated snapshot (2025-12-15) of gpt-realtime-mini.
OpenAI: GPT-3.5 Turbo
A fast, affordable text-only chat model suitable for most general applications.
OpenAI: GPT-3.5 Turbo (January 2024)
January 2024 update with improved performance. Text-only; it does not support vision input.
OpenAI: GPT-3.5 Turbo (November 2023)
November 2023 version with an extended 16K context window. Text-only.
OpenAI: GPT-3.5 Turbo 16K
Version of GPT-3.5 Turbo with an extended 16K-token context window. Text-only.
OpenAI: GPT-3.5 Turbo Instruct
Instruction-following completion variant served through the legacy Completions API rather than the chat interface.
OpenAI: GPT-3.5 Turbo Instruct (September 2023)
September 2023 release of the instruct variant, designed for single-turn completion-style prompts.
OpenAI: GPT-4
OpenAI's flagship model of its generation, with advanced reasoning and problem-solving capabilities. The base GPT-4 models are text-only; vision input arrived in later GPT-4 Turbo releases.
OpenAI: GPT-4 (June 13, 2023)
Specific version of GPT-4 from June 2023 with enhanced instruction following and function-calling support.
OpenAI: GPT-4 Turbo
A faster, lower-cost variant of GPT-4 with a 128K-token context window.
OpenAI: GPT-4 Turbo (April 2024)
April 2024 version of GPT-4 Turbo with an updated knowledge cutoff and vision input support.
OpenAI: GPT-4 Turbo Preview
Preview of GPT-4 Turbo with early access to new features. Text-only.
OpenAI: GPT-4 Turbo Preview (January 2024)
January 2024 preview with an expanded context window and improved capabilities. Text-only.
OpenAI: GPT-4 Turbo Preview (November 2023)
GPT-4 Turbo preview with a 128K context window for processing large documents. Text-only.
OpenAI: GPT-4.1
Successor to GPT-4o with improved coding, instruction following, and long-context performance; supports text and image inputs.
OpenAI: GPT-4.1 (April 2025)
April 2025 release of GPT-4.1 with a current knowledge cutoff; supports text and image inputs.
OpenAI: GPT-4.1 Mini
A compact, lower-cost version of GPT-4.1 that retains image input support.
OpenAI: GPT-4.1 Mini (April 2025)
April 2025 release of GPT-4.1 mini.
OpenAI: GPT-4.1 Nano
The smallest, fastest GPT-4.1 tier for cost-sensitive, high-volume workloads.
OpenAI: GPT-4.1 Nano (April 2025)
April 2025 release of GPT-4.1 nano, tuned for ultra-fast inference.
OpenAI: GPT-4o (August 2024)
August 2024 update with improved performance. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solv...
OpenAI: GPT-4o (May 2024)
Initial release of GPT-4o optimized model. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving...
OpenAI: GPT-4o (November 2024)
Latest GPT-4o with updated knowledge and improved capabilities. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for co...
OpenAI: GPT-4o Audio Preview
Preview of GPT-4o with audio processing capabilities. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex prob...
OpenAI: GPT-4o Audio Preview (October 2024)
October release of audio-enabled GPT-4o preview. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-s...
OpenAI: GPT-4o Mini
Compact multimodal model for efficient applications. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex probl...
OpenAI: GPT-4o Mini (July 2024)
Initial release of the mini multimodal variant. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-so...
OpenAI: GPT-4o Mini TTS
GPT-4o Mini TTS is an audio model for speech synthesis. Developed by OpenAI, this model is optimized for converting text into natural-sounding speech.
OpenAI: GPT-5.2 Chat (AKA Instant)
GPT-5.2 Chat is the fast, lightweight member of the 5.2 family, optimized for low-latency chat while retaining strong general intelligence. The model uses adaptive reasoning to selectively engage deep...
OpenAI: O1
OpenAI model: o1. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.
OpenAI: O1 (2024-12-17)
OpenAI model: o1-2024-12-17. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.
OpenAI: O1 Mini
OpenAI model: o1-mini. This smaller, faster reasoning model is text-only and does not accept image inputs. It features advanced reasoning capabilities for complex problem-solving tasks.
OpenAI: O1 Mini (2024-09-12)
OpenAI model: o1-mini-2024-09-12. This smaller, faster reasoning model is text-only and does not accept image inputs. It features advanced reasoning capabilities for complex problem-solving tasks.
OpenAI: O1 Pro
OpenAI model: o1-pro. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.
OpenAI: O1 Pro (2025-03-19)
OpenAI model: o1-pro-2025-03-19. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.
OpenAI: O3
OpenAI model: o3. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.
OpenAI: O3 (2025-04-16)
OpenAI model: o3-2025-04-16. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.
OpenAI: O3 Deep Research
O3 Deep Research is an agentic research model that browses and synthesizes information from many sources to produce detailed, cited reports. Developed by OpenAI, this model is optimized for in-depth research tasks.
OpenAI: O3 Mini
OpenAI model: o3-mini. This smaller, faster reasoning model is text-only and does not accept image inputs. It features advanced reasoning capabilities for complex problem-solving tasks.
OpenAI: O3 Mini (2025-01-31)
OpenAI model: o3-mini-2025-01-31. This smaller, faster reasoning model is text-only and does not accept image inputs. It features advanced reasoning capabilities for complex problem-solving tasks.
OpenAI: O3 Pro
O3 Pro is a version of o3 that uses more compute to think longer and provide more reliable answers to difficult problems. Developed by OpenAI, this model is optimized for high-stakes reasoning tasks.
OpenAI: O4 Mini
OpenAI model: o4-mini. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.
OpenAI: O4 Mini (2025-04-16)
OpenAI model: o4-mini-2025-04-16. This model supports multimodal capabilities including vision and image understanding. It features advanced reasoning capabilities for complex problem-solving tasks.
OpenAI: O4 Mini Deep Research
OpenAI model: o4-mini-deep-research. A faster, more affordable deep research model that browses and synthesizes information to produce cited reports.
OpenAI: O4 Mini Deep Research (2025-06-26)
OpenAI model: o4-mini-deep-research-2025-06-26. A faster, more affordable deep research model that browses and synthesizes information to produce cited reports.
OpenAI: Omni Moderation
Omni Moderation is a content moderation model for safety and policy compliance checking. Developed by OpenAI, this model is optimized for its specific use case category.
OpenAI: Omni Moderation 2024 09 26
OpenAI model: omni-moderation-2024-09-26. A multimodal moderation model that classifies text and images against OpenAI's safety categories, returning per-category scores and flags.
OpenAI: Omni Moderation Latest
OpenAI model: omni-moderation-latest. An alias that always points to the newest omni-moderation snapshot; it classifies text and images against OpenAI's safety categories.
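The moderation endpoint returns a per-category breakdown. A minimal sketch using the OpenAI Python SDK (the network call assumes the `openai` package and an OPENAI_API_KEY environment variable; the helper that summarizes a result is plain Python):

```python
# Sketch: classify text with omni-moderation-latest via the OpenAI Python SDK.

def flagged_categories(categories: dict) -> list:
    """Return the names of the categories the moderation model flagged."""
    return sorted(name for name, flagged in categories.items() if flagged)

def moderate(text: str):
    from openai import OpenAI  # deferred import so the helper above stays dependency-free
    client = OpenAI()
    result = client.moderations.create(
        model="omni-moderation-latest", input=text
    ).results[0]
    return result.flagged, flagged_categories(result.categories.model_dump())

# Usage (makes a network call):
#   flagged, cats = moderate("some user-generated text")
```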
OpenAI: Sora 2
OpenAI model: sora-2. A video generation model that creates short videos with synchronized audio from text prompts.
OpenAI: Sora 2 Pro
OpenAI model: sora-2-pro. A higher-quality tier of Sora 2 for video generation, trading speed and cost for improved fidelity.
OpenAI: Text Embedding 3 Large
Large embedding model for advanced semantic tasks. It converts text into high-dimensional vectors for search, clustering, classification, and retrieval-augmented generation.
OpenAI: Text Embedding 3 Small
Efficient text embedding model with high-quality representations. It converts text into compact vectors for search, clustering, classification, and retrieval-augmented generation at low cost.
OpenAI: Text Embedding Ada 002
Legacy embedding model still widely used. It converts text into vectors for search, clustering, and classification; new projects should prefer the text-embedding-3 family.
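For reference, a minimal sketch of using these embedding models for semantic similarity via the OpenAI Python SDK (the API call assumes an OPENAI_API_KEY environment variable; the cosine helper is plain Python):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def embed(texts, model="text-embedding-3-small"):
    """Embed a list of strings; requires `pip install openai` and an API key."""
    from openai import OpenAI
    client = OpenAI()
    response = client.embeddings.create(model=model, input=texts)
    return [item.embedding for item in response.data]

# Usage (makes a network call):
#   a, b = embed(["low-cost fast model", "budget-friendly quick model"])
#   print(cosine_similarity(a, b))
```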
OpenAI: TTS
TTS is an audio model for speech synthesis. Developed by OpenAI, this model converts text into natural-sounding spoken audio across a set of built-in voices.
OpenAI: TTS HD
TTS HD is an audio model for speech synthesis. Developed by OpenAI, this higher-definition variant trades some latency for clearer, more natural speech.
OpenAI: Whisper
Whisper is an automatic speech recognition model for transcribing and translating spoken audio. Developed by OpenAI, it is robust to accents, background noise, and technical language.
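A round-trip sketch with these audio models via the OpenAI Python SDK: synthesize speech with tts-1, then transcribe it back with whisper-1 (both calls assume the `openai` package and an OPENAI_API_KEY environment variable; the payload builder is pure):

```python
def tts_payload(text, voice="alloy", model="tts-1"):
    """Build keyword arguments for the speech synthesis call."""
    return {"model": model, "voice": voice, "input": text}

def synthesize(text, out_path="speech.mp3"):
    from openai import OpenAI  # requires `pip install openai`
    client = OpenAI()
    with client.audio.speech.with_streaming_response.create(**tts_payload(text)) as resp:
        resp.stream_to_file(out_path)  # write MP3 audio to disk

def transcribe(path):
    from openai import OpenAI
    client = OpenAI()
    with open(path, "rb") as f:
        return client.audio.transcriptions.create(model="whisper-1", file=f).text

# Usage (makes network calls):
#   synthesize("Hello from the model catalog.")
#   print(transcribe("speech.mp3"))
```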
Openai-compatible: Fake Gpt 4 Vision
Fake Gpt 4 Vision with vision capabilities for processing images and visual content. This model supports multimodal capabilities including vision and image understanding. It features advanced reasonin...
OpenChat 3.5 7B (Free)
OpenChat is a library of open-source language models fine-tuned with C-RLFT, a strategy inspired by offline reinforcement learning. The model is trained on mixed-quality data without preference labels...
LangMart: AI21 Jamba Large 1.7
AI21's Jamba Large 1.7 is a state-of-the-art hybrid SSM-Transformer model with a 256K context window. Designed for enterprise applications requiring long-context understanding.
LangMart: AI21 Jamba Mini 1.7
AI21's Jamba Mini 1.7 is a compact hybrid SSM-Transformer model with a 256K context window. Optimized for cost-effective long-context applications.
LangMart: Aion Labs/aion 1.0
Aion Labs/aion 1.0 is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image understandin...
LangMart: Aion Labs/aion 1.0 Mini
Aion Labs/aion 1.0 Mini is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image underst...
LangMart: Aion Labs/aion Rp Llama 3.1 8b
Aion Labs/aion Rp Llama 3.1 8b is a capable language model from LangMart for general-purpose text generation and analysis tasks.
LangMart: AlfredPros: CodeLLaMa 7B Instruct Solidity
A finetuned 7 billion parameters Code LLaMA - Instruct model to generate Solidity smart contract using 4-bit QLoRA finetuning provided by PEFT library.
LangMart: Alibaba/tongyi Deepresearch 30b A3b:free
Free tier version of Alibaba/tongyi Deepresearch 30b A3b. This model supports multimodal capabilities including vision and image understanding.
LangMart: AllenAI: Olmo 2 32B Instruct
OLMo-2 32B Instruct is a supervised instruction-finetuned variant of the OLMo-2 32B March 2025 base model. It excels in complex reasoning and instruction-following tasks across diverse benchmarks such...
LangMart: AllenAI: Olmo 3 7B Instruct
Olmo 3 7B Instruct is a supervised instruction-fine-tuned variant of the Olmo 3 7B base model, optimized for instruction-following, question-answering, and natural conversational dialogue. By leveragi...
LangMart: AllenAI: Olmo 3 7B Think
Olmo 3 7B Think is a research-oriented language model in the Olmo family designed for advanced reasoning and instruction-driven tasks. It excels at multi-step problem solving, logical inference, and m...
LangMart: Allenai/olmo 3 32b Think:free
Free tier version of Allenai/olmo 3 32b Think, a text-only open-weight reasoning model.
LangMart: Allenai/olmo 3.1 32b Think:free
Free tier version of Allenai/olmo 3.1 32b Think, a text-only open-weight reasoning model.
LangMart: Amazon: Nova 2 Lite
Nova 2 Lite is a fast, cost-effective reasoning model for everyday workloads that can process text, images, and videos to generate text.
LangMart: Amazon: Nova Lite 1.0
Amazon Nova Lite 1.0 is a very low-cost multimodal model from Amazon that focused on fast processing of image, video, and text inputs to generate text output. Amazon Nova Lite can handle real-time cus...
LangMart: Amazon: Nova Micro 1.0
Amazon Nova Micro 1.0 is a text-only model that delivers the lowest latency responses in the Amazon Nova family of models at a very low cost. With a context length of 128K tokens and optimized for spe...
LangMart: Amazon: Nova Premier 1.0
Amazon Nova Premier is the most capable of Amazon’s multimodal models for complex reasoning tasks and for use as the best teacher for distilling custom models.
LangMart: Amazon: Nova Pro 1.0
Amazon Nova Pro 1.0 is a capable multimodal model from Amazon focused on providing a combination of accuracy, speed, and cost for a wide range of tasks. As of December 2024, it achieves state-of-the-a...
LangMart: Anthropic Claude 3.5 Sonnet
Anthropic's Claude 3.5 Sonnet accessed via LangMart. A balanced model combining strong intelligence with fast response times, ideal for most use cases.
LangMart: Anthropic Claude 3.7 Sonnet
Anthropic's Claude 3.7 Sonnet accessed via LangMart. An enhanced version with improved reasoning and capabilities over Claude 3.5 Sonnet. This model supports multimodal capabilities including vision a...
LangMart: Anthropic Claude Haiku 4.5
Anthropic's fastest and most efficient Claude model accessed via LangMart. Designed for high-volume, low-latency applications requiring quick responses. This model supports multimodal capabilities inc...
LangMart: Anthropic: Claude 3 Haiku
Claude 3 Haiku is Anthropic's fastest and most compact model for near-instant responsiveness. Quick and accurate targeted performance.
LangMart: Anthropic: Claude 3 Opus
Claude 3 Opus is Anthropic's most powerful model for highly complex tasks. It boasts top-level performance, intelligence, fluency, and understanding.
LangMart: Anthropic: Claude Opus 4
Claude Opus 4 is benchmarked as the world’s best coding model, at time of release, bringing sustained performance on complex, long-running tasks and agent workflows. It sets new benchmarks in software...
LangMart: Anthropic: Claude Sonnet 4
Claude Sonnet 4 significantly enhances the capabilities of its predecessor, Sonnet 3.7, excelling in both coding and reasoning tasks with improved precision and controllability. Achieving state-of-the...
LangMart: Anthropic/claude 3.5 Haiku
Anthropic/claude 3.5 Haiku is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image unde...
LangMart: Anthropic/claude 3.5 Haiku 20241022
Anthropic/claude 3.5 Haiku 20241022 is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and i...
LangMart: Anthropic/claude 3.7 Sonnet:thinking
Anthropic/claude 3.7 Sonnet:thinking with extended thinking capabilities. This model supports multimodal capabilities including vision and image understanding.
LangMart: Anthropic/claude Opus 4.1
Anthropic/claude Opus 4.1 is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image under...
LangMart: Anthropic/claude Opus 4.5
Anthropic/claude Opus 4.5 is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image under...
LangMart: Anthropic/claude Sonnet 4.5
Anthropic/claude Sonnet 4.5 is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image und...
LangMart: Arcee AI: Coder Large
Coder‑Large is a 32 B‑parameter offspring of Qwen 2.5‑Instruct that has been further trained on permissively‑licensed GitHub, CodeSearchNet and synthetic bug‑fix corpora. It supports a 32k context win...
LangMart: Arcee AI: Maestro Reasoning
Maestro Reasoning is Arcee's flagship analysis model: a 32 B‑parameter derivative of Qwen 2.5‑32 B tuned with DPO and chain‑of‑thought RL for step‑by‑step logic. Compared to the earlier 7 B preview, t...
LangMart: Arcee AI: Spotlight
Spotlight is a 7‑billion‑parameter vision‑language model derived from Qwen 2.5‑VL and fine‑tuned by Arcee AI for tight image‑text grounding tasks. It offers a 32 k‑token context window, enabling rich ...
LangMart: Arcee AI: Trinity Mini
Trinity Mini is a 26B-parameter (3B active) sparse mixture-of-experts language model featuring 128 experts with 8 active per token. Engineered for efficient reasoning over long contexts (131k) with ro...
LangMart: Arcee AI: Virtuoso Large
Virtuoso‑Large is Arcee's top‑tier general‑purpose LLM at 72 B parameters, tuned to tackle cross‑domain reasoning, creative writing and enterprise QA. Unlike many 70 B peers, it retains the 128 k cont...
LangMart: Arcee Ai/trinity Mini:free
Free tier version of Arcee Ai/trinity Mini, a sparse mixture-of-experts language model.
LangMart: ArliAI: QwQ 32B RpR v1
QwQ-32B-ArliAI-RpR-v1 is a 32B parameter model fine-tuned from Qwen/QwQ-32B using a curated creative writing and roleplay dataset originally developed for the RPMax series. It is designed to maintain ...
LangMart: Auto Router
Your prompt will be processed by a meta-model and routed to one of dozens of models, optimizing for the best possible output.
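Because routers like this typically expose an OpenAI-compatible interface, they can be called with a standard client. A hedged sketch: the base URL and the `langmart/auto` model slug below are illustrative assumptions, not documented values, so check the provider's docs for the real ones.

```python
def build_request(prompt, model="langmart/auto"):
    """Build an OpenAI-style chat completion payload for the router.
    The "langmart/auto" slug is a hypothetical example."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def route(prompt):
    from openai import OpenAI  # requires `pip install openai` and an API key
    client = OpenAI(base_url="https://langmart.ai/api/v1")  # hypothetical endpoint
    resp = client.chat.completions.create(**build_request(prompt))
    # The response's `model` field reports which underlying model answered.
    return resp.model, resp.choices[0].message.content
```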
LangMart: Baidu/ernie 4.5 21b A3b
Baidu/ernie 4.5 21b A3b is a capable language model from LangMart for general-purpose text generation and analysis tasks.
LangMart: Baidu/ernie 4.5 21b A3b Thinking
Baidu/ernie 4.5 21b A3b Thinking with extended thinking capabilities for multi-step reasoning tasks.
LangMart: Baidu/ernie 4.5 300b A47b
Baidu/ernie 4.5 300b A47b is a capable language model from LangMart for general-purpose text generation and analysis tasks.
LangMart: Baidu/ernie 4.5 Vl 28b A3b
Baidu/ernie 4.5 Vl 28b A3b is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image unde...
LangMart: Baidu/ernie 4.5 Vl 424b A47b
Baidu/ernie 4.5 Vl 424b A47b is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image un...
LangMart: Body Builder (beta)
Transform your natural language requests into structured LangMart API request objects. Describe what you want to accomplish with AI models, and Body Builder will construct the appropriate API calls. E...
LangMart: Bytedance Seed/seed 1.6
Bytedance Seed/seed 1.6 is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image underst...
LangMart: Bytedance Seed/seed 1.6 Flash
Bytedance Seed/seed 1.6 Flash with optimized speed for rapid response generation. This model supports multimodal capabilities including vision and image understanding.
LangMart: Bytedance/ui Tars 1.5 7b
Bytedance/ui Tars 1.5 7b is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image unders...
LangMart: Cogito V2 Preview Llama 109B
An instruction-tuned, hybrid-reasoning Mixture-of-Experts model built on Llama-4-Scout-17B-16E. Cogito v2 can answer directly or engage an extended “thinking” phase, with alignment guided by Iterated ...
LangMart: Cognitivecomputations/dolphin Mistral 24b Venice Edition:free
Free tier version of Cognitivecomputations/dolphin Mistral 24b Venice Edition.
LangMart: Cohere: Command A
Command A is an open-weights 111B parameter model with a 256k context window focused on delivering great performance across agentic, multilingual, and coding use cases.
LangMart: Cohere: Command R (08-2024)
command-r-08-2024 is an update of the [Command R](/models/cohere/command-r) with improved performance for multilingual retrieval-augmented generation (RAG) and tool use. More broadly, it is better at ...
LangMart: Cohere: Command R+ (08-2024)
command-r-plus-08-2024 is an update of the [Command R+](/models/cohere/command-r-plus) with roughly 50% higher throughput and 25% lower latencies as compared to the previous Command R+ version, while ...
LangMart: Cohere: Command R7B (12-2024)
Command R7B (12-2024) is a small, fast update of the Command R+ model, delivered in December 2024. It excels at RAG, tool use, agents, and similar tasks requiring complex reasoning and multiple steps....
LangMart: Deep Cogito: Cogito V2 Preview Llama 405B
Cogito v2 405B is a dense hybrid reasoning model that combines direct answering capabilities with advanced self-reflection. It represents a significant step toward frontier intelligence with dense arc...
LangMart: Deep Cogito: Cogito V2 Preview Llama 70B
Cogito v2 70B is a dense hybrid reasoning model that combines direct answering capabilities with advanced self-reflection. Built with iterative policy improvement, it delivers strong performance acros...
LangMart: Deepcogito/cogito V2.1 671b
Deepcogito/cogito V2.1 671b is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image und...
LangMart: DeepSeek: DeepSeek Prover V2
DeepSeek Prover V2 is a 671B parameter model, speculated to be geared towards logic and mathematics. Likely an upgrade from [DeepSeek-Prover-V1.5](https://huggingface.co/deepseek-ai/DeepSeek-Prover-V1...
LangMart: DeepSeek: DeepSeek R1 0528 Qwen3 8B
DeepSeek-R1-0528 is a lightly upgraded release of DeepSeek R1 that taps more compute and smarter post-training tricks, pushing its reasoning and inference to the brink of flagship models like O3 and G...
LangMart: DeepSeek: DeepSeek V3
DeepSeek-V3 is the latest model from the DeepSeek team, building upon the instruction following and coding abilities of the previous versions. Pre-trained on nearly 15 trillion tokens, the reported ev...
LangMart: DeepSeek: DeepSeek V3 0324
DeepSeek V3, a 685B-parameter, mixture-of-experts model, is the latest iteration of the flagship chat model family from the DeepSeek team.
LangMart: DeepSeek: R1
DeepSeek R1 is here: Performance on par with [OpenAI o1](/openai/o1), but open-sourced and with fully open reasoning tokens. It's 671B parameters in size, with 37B active in an inference pass.
LangMart: DeepSeek: R1 0528
May 28th update to the [original DeepSeek R1](/deepseek/deepseek-r1). Performance on par with [OpenAI o1](/openai/o1), but open-sourced and with fully open reasoning tokens. It's 671B parameters in size, with 37B active in an inference pass.
LangMart: DeepSeek: R1 Distill Llama 70B
DeepSeek R1 Distill Llama 70B is a distilled large language model based on [Llama-3.3-70B-Instruct](/meta-llama/llama-3.3-70b-instruct), using outputs from [DeepSeek R1](/deepseek/deepseek-r1). The mo...
LangMart: DeepSeek: R1 Distill Qwen 14B
DeepSeek R1 Distill Qwen 14B is a distilled large language model based on [Qwen 2.5 14B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B), using outputs from [DeepSeek R1](/deepseek/de...
LangMart: DeepSeek: R1 Distill Qwen 32B
DeepSeek R1 Distill Qwen 32B is a distilled large language model based on [Qwen 2.5 32B](https://huggingface.co/Qwen/Qwen2.5-32B), using outputs from [DeepSeek R1](/deepseek/deepseek-r1). It outperfor...
LangMart: Deepseek/deepseek Chat V3.1
Deepseek/deepseek Chat V3.1 is a capable language model from LangMart for general-purpose text generation and analysis tasks.
LangMart: Deepseek/deepseek R1 0528:free
Free tier version of Deepseek/deepseek R1 0528.
LangMart: Deepseek/deepseek V3.1 Terminus
Deepseek/deepseek V3.1 Terminus is a capable language model from LangMart for general-purpose text generation and analysis tasks.
LangMart: Deepseek/deepseek V3.1 Terminus:exacto
Deepseek/deepseek V3.1 Terminus:exacto is a capable language model from LangMart for general-purpose text generation and analysis tasks.
LangMart: Deepseek/deepseek V3.2
Deepseek/deepseek V3.2 is a capable language model from LangMart for general-purpose text generation and analysis tasks.
LangMart: Deepseek/deepseek V3.2 Exp
Deepseek/deepseek V3.2 Exp is a capable language model from LangMart for general-purpose text generation and analysis tasks.
LangMart: Deepseek/deepseek V3.2 Speciale
Deepseek/deepseek V3.2 Speciale is a capable language model from LangMart for general-purpose text generation and analysis tasks.
LangMart: EleutherAI: Llemma 7b
EleutherAI: Llemma 7b is a language model for mathematics, initialized from Code Llama 7B and further trained on the Proof-Pile-2 dataset. Developed by EleutherAI, this model is optimized for mathematical reasoning and formal theorem proving.
LangMart: EssentialAI: Rnj 1 Instruct
Rnj-1 is an 8B-parameter, dense, open-weight model family developed by Essential AI and trained from scratch with a focus on programming, math, and scientific reasoning. The model demonstrates strong ...
LangMart: Goliath 120B
A large LLM created by combining two fine-tuned Llama 70B models into one 120B model. Combines Xwin and Euryale.
LangMart: Google: Gemini 2.0 Flash
Google: Gemini 2.0 Flash is a fast multimodal model that accepts text, image, audio, and video inputs. Developed by Google, this model is optimized for low-latency, high-volume tasks.
LangMart: Google: Gemini 2.5 Flash
Google: Gemini 2.5 Flash is a fast multimodal model with optional reasoning ("thinking") capabilities. Developed by Google, this model is optimized for a balance of speed, cost, and quality.
LangMart: Google: Gemini 2.5 Flash Image (Nano Banana)
Google: Gemini 2.5 Flash Image (Nano Banana) is an image generation and editing model for creating visual content from descriptions. Developed by Google, this model is optimized for fast, conversational image workflows.
LangMart: Google: Gemini 2.5 Flash Image Preview (Nano Banana)
Google: Gemini 2.5 Flash Image Preview (Nano Banana) is the preview release of an image generation and editing model for creating visual content from descriptions. Developed by Google, this model is optimized for fast, conversational image workflows.
LangMart: Google: Gemini 2.5 Flash Lite
Google: Gemini 2.5 Flash Lite is the smallest, most cost-efficient model in the Gemini 2.5 family. Developed by Google, this model is optimized for latency-sensitive, high-volume tasks.
LangMart: Google: Gemini 2.5 Flash Lite Preview 09-2025
Google: Gemini 2.5 Flash Lite Preview 09-2025 is a preview update of Gemini 2.5 Flash Lite. Developed by Google, this model is optimized for latency-sensitive, high-volume tasks.
LangMart: Google: Gemini 2.5 Flash Preview 09-2025
Google: Gemini 2.5 Flash Preview 09-2025 is a preview update of Gemini 2.5 Flash. Developed by Google, this model is optimized for a balance of speed, cost, and quality.
LangMart: Google: Gemini 2.5 Pro
Google: Gemini 2.5 Pro is Google's most capable Gemini 2.5 model, a multimodal reasoning model for complex tasks in code, math, and STEM, with a long context window.
LangMart: Google: Gemini 3 Flash Preview
Google: Gemini 3 Flash Preview is a preview of the fast, cost-efficient member of the Gemini 3 family. Developed by Google, this model is optimized for low-latency multimodal tasks.
LangMart: Google: Gemini 3 Pro Preview
Google: Gemini 3 Pro Preview is a preview of Google's flagship Gemini 3 multimodal reasoning model for complex reasoning and agentic tasks.
LangMart: Google: Gemma 2 27B
Google: Gemma 2 27B is an open-weights conversational model designed for multi-turn dialogue and interactive tasks. Developed by Google, this model is optimized for general-purpose chat.
LangMart: Google: Gemma 2 9B
Google: Gemma 2 9B is an open-weights conversational model designed for multi-turn dialogue and interactive tasks. Developed by Google, this model is optimized for efficient general-purpose chat.
LangMart: Google: Gemma 3 12B
Google: Gemma 3 12B is an open-weights multimodal model that handles text and image inputs. Developed by Google, this model is optimized for general-purpose chat and vision tasks.
LangMart: Google: Gemma 3 27B
Google: Gemma 3 27B is an open-weights multimodal model that handles text and image inputs. Developed by Google, this model is optimized for general-purpose chat and vision tasks.
LangMart: Google: Gemma 3 4B
Google: Gemma 3 4B is an open-weights multimodal model that handles text and image inputs. Developed by Google, this model is optimized for lightweight chat and vision tasks.
LangMart: Google: Gemma 3n 4B
Google: Gemma 3n 4B is an open-weights conversational model designed for efficient on-device, multi-turn dialogue tasks. Developed by Google, this model is optimized for low-resource environments.
LangMart: Google: Nano Banana Pro (Gemini 3 Pro Image Preview)
Google: Nano Banana Pro (Gemini 3 Pro Image Preview) is an image generation and editing model for creating visual content from descriptions. Developed by Google, this model is optimized for high-fidelity image workflows.
LangMart: Google/gemini 2.0 Flash Exp:free
Google/gemini 2.0 Flash Exp:free with optimized speed for rapid response generation. This model supports multimodal capabilities including vision and image understanding.
LangMart: Google/gemini 2.0 Flash Lite 001
Lightweight variant of Google/gemini 2.0 Flash Lite 001 optimized for reduced latency and cost. This model supports multimodal capabilities including vision and image understanding.
LangMart: Google/gemini 2.5 Pro Preview
Preview version of Google/gemini 2.5 Pro Preview providing early access to experimental features. This model supports multimodal capabilities including vision and image understanding.
LangMart: Google/gemini 2.5 Pro Preview 05 06
Preview version of Google/gemini 2.5 Pro Preview 05 06 providing early access to experimental features. This model supports multimodal capabilities including vision and image understanding.
LangMart: Google/gemma 3 12b It:free
Free tier version of Google/gemma 3 12b It. This model supports multimodal capabilities including vision and image understanding.
LangMart: Google/gemma 3 27b It:free
Free tier version of Google/gemma 3 27b It. This model supports multimodal capabilities including vision and image understanding.
LangMart: Google/gemma 3 4b It:free
Free tier version of Google/gemma 3 4b It. This model supports multimodal capabilities including vision and image understanding.
LangMart: Google/gemma 3n E2b It:free
Free tier version of Google/gemma 3n E2b It. This model supports multimodal capabilities including vision and image understanding.
LangMart: Google/gemma 3n E4b It:free
Free tier version of Google/gemma 3n E4b It. This model supports multimodal capabilities including vision and image understanding.
LangMart: Ibm Granite/granite 4.0 H Micro
Ibm Granite/granite 4.0 H Micro is a capable language model from LangMart for general-purpose text generation and analysis tasks.
LangMart: Inception: Mercury
Mercury is the first diffusion large language model (dLLM). Applying a breakthrough discrete diffusion approach, the model runs 5-10x faster than even speed optimized models like GPT-4.1 Nano and Clau...
LangMart: Inception: Mercury Coder
Mercury Coder is the first diffusion large language model (dLLM). Applying a breakthrough discrete diffusion approach, the model runs 5-10x faster than even speed optimized models like Claude 3.5 Haik...
LangMart: Inflection: Inflection 3 Pi
Inflection 3 Pi powers Inflection's [Pi](https://pi.ai) chatbot, including backstory, emotional intelligence, productivity, and safety. It has access to recent news, and excels in scenarios like custo...
LangMart: Inflection: Inflection 3 Productivity
Inflection 3 Productivity is optimized for following instructions. It is better for tasks requiring JSON output or precise adherence to provided guidelines. It has access to recent news.
LangMart: Kwaipilot/kat Coder Pro:free
Free tier version of Kwaipilot/kat Coder Pro, optimized for code generation and programming tasks.
LangMart: Liquid/lfm 2.2 6b
Liquid/lfm 2.2 6b is a capable language model from LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image understanding...
LangMart: LiquidAI/LFM2-8B-A1B
LiquidAI's LFM2-8B-A1B is a sparse mixture-of-experts language model with roughly 8B total parameters and about 1B active per token (as the name suggests), designed for fast, efficient on-device inference.
LangMart: Llama Guard 3 8B
Llama Guard 3 is a Llama-3.1-8B pretrained model, fine-tuned for content safety classification. Similar to previous versions, it can be used to classify content in both LLM inputs (prompt classificati...
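Llama Guard replies with a verdict line ("safe" or "unsafe"), followed on unsafe outputs by the violated hazard category codes (S1, S2, ...). A small parser for that reply format, under the assumption the model is reached through a chat endpoint:

```python
def parse_guard_output(text: str):
    """Parse Llama Guard 3's reply into (is_safe, [category_codes]).
    Expected format: "safe", or "unsafe\nS1,S10" for policy violations."""
    lines = [line.strip() for line in text.strip().splitlines() if line.strip()]
    if not lines or lines[0].lower() == "safe":
        return True, []
    categories = lines[1].split(",") if len(lines) > 1 else []
    return False, [c.strip() for c in categories]

# Examples:
#   parse_guard_output("safe")        -> (True, [])
#   parse_guard_output("unsafe\nS1")  -> (False, ["S1"])
```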
LangMart: Magnum v4 72B
This is a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus.
LangMart: Mancer: Weaver (alpha)
An attempt to recreate Claude-style verbosity, but don't expect the same level of coherence or memory. Meant for use in roleplay/narrative situations.
LangMart: Meituan: LongCat Flash Chat
LongCat-Flash-Chat is a large-scale Mixture-of-Experts (MoE) model with 560B total parameters, of which 18.6B–31.3B (≈27B on average) are dynamically activated per input. It introduces a shortcut-conn...
LangMart: Meta Llama/llama 3.1 405b
Meta Llama/llama 3.1 405b is a capable language model from Meta for general-purpose text generation and analysis tasks.
LangMart: Meta Llama/llama 3.1 405b Instruct
Meta Llama/llama 3.1 405b Instruct is a capable language model from Meta for general-purpose text generation and analysis tasks.
LangMart: Meta Llama/llama 3.1 405b Instruct:free
Free tier version of Meta Llama/llama 3.1 405b Instruct.
LangMart: Meta Llama/llama 3.1 70b Instruct
Meta Llama/llama 3.1 70b Instruct is a capable language model from Meta for general-purpose text generation and analysis tasks.
LangMart: Meta Llama/llama 3.1 8b Instruct
Meta Llama/llama 3.1 8b Instruct is a capable language model from Meta for general-purpose text generation and analysis tasks.
LangMart: Meta Llama/llama 3.2 11b Vision Instruct
Meta Llama/llama 3.2 11b Vision Instruct is a multimodal model from Meta with vision capabilities for processing images and visual content alongside text.
LangMart: Meta Llama/llama 3.2 1b Instruct
Meta Llama/llama 3.2 1b Instruct is a capable language model from Meta for general-purpose text generation and analysis tasks.
LangMart: Meta Llama/llama 3.2 3b Instruct
Meta Llama/llama 3.2 3b Instruct is a capable language model from Meta for general-purpose text generation and analysis tasks.
LangMart: Meta Llama/llama 3.2 3b Instruct:free
Free tier version of Meta Llama/llama 3.2 3b Instruct.
LangMart: Meta Llama/llama 3.2 90b Vision Instruct
Meta Llama/llama 3.2 90b Vision Instruct is a multimodal model from Meta with vision capabilities for processing images and visual content alongside text.
LangMart: Meta Llama/llama 3.3 70b Instruct
Meta Llama/llama 3.3 70b Instruct is a capable language model from Meta for general-purpose text generation and analysis tasks.
LangMart: Meta Llama/llama 3.3 70b Instruct:free
Free tier version of Meta Llama/llama 3.3 70b Instruct.
LangMart: Meta: Llama 3 70B Instruct
Meta's Llama 3 family launched with a variety of sizes & flavors. This 70B instruct-tuned version was optimized for high-quality dialogue use cases.
LangMart: Meta: Llama 3 8B Instruct
Meta's Llama 3 family launched with a variety of sizes & flavors. This 8B instruct-tuned version was optimized for high-quality dialogue use cases.
LangMart: Meta: Llama 4 Maverick
Llama 4 Maverick 17B Instruct (128E) is a high-capacity multimodal language model from Meta, built on a mixture-of-experts (MoE) architecture with 128 experts and 17 billion active parameters per forw...
LangMart: Meta: Llama 4 Scout
Llama 4 Scout 17B Instruct (16E) is a mixture-of-experts (MoE) language model developed by Meta, activating 17 billion parameters out of a total of 109B. It supports native multimodal input (text and ...
LangMart: Meta: Llama Guard 4 12B
Llama Guard 4 is a Llama 4 Scout-derived multimodal pretrained model, fine-tuned for content safety classification. Similar to previous versions, it can be used to classify content in both LLM inputs ...
LangMart: Meta: LlamaGuard 2 8B
This safeguard model has 8B parameters and is based on the Llama 3 family. Just like its predecessor, [LlamaGuard 1](https://huggingface.co/meta-llama/LlamaGuard-7b), it can do both prompt and response...
LangMart: Microsoft: Phi 4
[Microsoft Research](/microsoft) Phi-4 is designed to perform well in complex reasoning tasks and can operate efficiently in situations with limited memory or where quick responses are needed.
LangMart: Microsoft: Phi 4 Multimodal Instruct
Phi-4 Multimodal Instruct is a versatile 5.6B parameter foundation model that combines advanced reasoning and instruction-following capabilities across both text and visual inputs, providing accurate ...
LangMart: Microsoft: Phi 4 Reasoning Plus
Phi-4-reasoning-plus is an enhanced 14B parameter model from Microsoft, fine-tuned from Phi-4 with additional reinforcement learning to boost accuracy on math, science, and code reasoning tasks. It us...
LangMart: Microsoft: Phi-3 Medium 128K Instruct
Phi-3 128K Medium is a powerful 14-billion parameter model designed for advanced language understanding, reasoning, and instruction following. Optimized through supervised fine-tuning and preference a...
LangMart: Microsoft: Phi-3 Mini 128K Instruct
Phi-3 Mini is a powerful 3.8B parameter model designed for advanced language understanding, reasoning, and instruction following. Optimized through supervised fine-tuning and preference adjustments, i...
LangMart: Microsoft/phi 3.5 Mini 128k Instruct
Microsoft/phi 3.5 Mini 128k Instruct is a capable language model from Microsoft for general-purpose text generation and analysis tasks.
LangMart: MiniMax: MiniMax M1
MiniMax-M1 is a large-scale, open-weight reasoning model designed for extended context and high-efficiency inference. It leverages a hybrid Mixture-of-Experts (MoE) architecture paired with a custom "...
LangMart: MiniMax: MiniMax M2
MiniMax-M2 is a compact, high-efficiency large language model optimized for end-to-end coding and agentic workflows. With 10 billion activated parameters (230 billion total), it delivers near-frontier...
LangMart: MiniMax: MiniMax-01
MiniMax-01 combines MiniMax-Text-01 for text generation and MiniMax-VL-01 for image understanding. It has 456 billion parameters, with 45.9 billion parameters activated per inference, and can han...
LangMart: Minimax/minimax M2.1
Minimax/minimax M2.1 is a capable language model from MiniMax for general-purpose text generation and analysis tasks.
LangMart: Mistral Large
This is Mistral AI's flagship model, Mistral Large 2 (version `mistral-large-2407`). It's a proprietary weights-available model and excels at reasoning, code, JSON, chat, and more. Read the launch ann...
LangMart: Mistral Large 2407
This is Mistral AI's flagship model, Mistral Large 2 (version mistral-large-2407). It's a proprietary weights-available model and excels at reasoning, code, JSON, chat, and more. Read the launch annou...
LangMart: Mistral Large 2411
Mistral Large 2 2411 is an update of [Mistral Large 2](/mistralai/mistral-large) released together with [Pixtral Large 2411](/mistralai/pixtral-large-2411).
LangMart: Mistral Tiny
Note: This model is being deprecated. Recommended replacement is the newer [Ministral 8B](/mistral/ministral-8b)
LangMart: Mistral: Codestral 2508
Mistral's cutting-edge language model for coding released end of July 2025. Codestral specializes in low-latency, high-frequency tasks such as fill-in-the-middle (FIM), code correction and test genera...
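A fill-in-the-middle request for Codestral might be sketched as below, assuming a Mistral-style FIM endpoint that takes the code before the cursor as `prompt` and the code after it as `suffix`. Field names and the model ID are assumptions; verify against the provider's API reference.

```python
# Hedged sketch of a fill-in-the-middle (FIM) completion request for
# Codestral. "prompt" holds the code before the hole, "suffix" the code
# after it; the model fills in what goes between. Names are assumptions.
fim_request = {
    "model": "mistralai/codestral-2508",
    "prompt": "def fibonacci(n):\n    ",   # code before the cursor
    "suffix": "\n    return a",            # code after the cursor
    "max_tokens": 64,                      # keep the infill short
}
```

FIM requests like this power low-latency editor completions, where only the region between prefix and suffix needs to be generated.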
LangMart: Mistral: Devstral 2 2512
Devstral 2 is a state-of-the-art open-source model by Mistral AI specializing in agentic coding. It is a 123B-parameter dense transformer model supporting a 256K context window.
LangMart: Mistral: Devstral Medium
Devstral Medium is a high-performance code generation and agentic reasoning model developed jointly by Mistral AI and All Hands AI. Positioned as a step up from Devstral Small, it achieves 61.6% on SW...
LangMart: Mistral: Devstral Small 1.1
Devstral Small 1.1 is a 24B parameter open-weight language model for software engineering agents, developed by Mistral AI in collaboration with All Hands AI. Finetuned from Mistral Small 3.1 and relea...
LangMart: Mistral: Devstral Small 2505
Devstral-Small-2505 is a 24B parameter agentic LLM fine-tuned from Mistral-Small-3.1, jointly developed by Mistral AI and All Hands AI for advanced software engineering tasks. It is optimized for code...
LangMart: Mistral: Ministral 3 14B 2512
The largest model in the Ministral 3 family, Ministral 3 14B offers frontier capabilities and performance comparable to its larger Mistral Small 3.2 24B counterpart. A powerful and efficient language ...
LangMart: Mistral: Ministral 3 3B 2512
The smallest model in the Ministral 3 family, Ministral 3 3B is a powerful, efficient tiny language model with vision capabilities.
LangMart: Mistral: Ministral 3 8B 2512
A balanced model in the Ministral 3 family, Ministral 3 8B is a powerful, efficient tiny language model with vision capabilities.
LangMart: Mistral: Ministral 3B
Ministral 3B is a 3B parameter model optimized for on-device and edge computing. It excels in knowledge, commonsense reasoning, and function-calling, outperforming larger models like Mistral 7B on mos...
LangMart: Mistral: Ministral 8B
Ministral 8B is an 8B parameter model featuring a unique interleaved sliding-window attention pattern for faster, memory-efficient inference. Designed for edge use cases, it supports up to 128k contex...
LangMart: Mistral: Mistral 7B Instruct
A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length.
LangMart: Mistral: Mistral Large 3 2512
Mistral Large 3 2512 is Mistral’s most capable model to date, featuring a sparse mixture-of-experts architecture with 41B active parameters (675B total), and released under the Apache 2.0 license.
LangMart: Mistral: Mistral Medium 3
Mistral Medium 3 is a high-performance enterprise-grade language model designed to deliver frontier-level capabilities at significantly reduced operational cost. It balances state-of-the-art reasoning...
LangMart: Mistral: Mistral Nemo
A 12B parameter model with a 128k token context length built by Mistral in collaboration with NVIDIA.
LangMart: Mistral: Mistral Small 3
Mistral Small 3 is a 24B-parameter language model optimized for low-latency performance across common AI tasks. Released under the Apache 2.0 license, it features both pre-trained and instruction-tune...
LangMart: Mistral: Mistral Small Creative
Mistral Small Creative is an experimental small model designed for creative writing, narrative generation, roleplay and character-driven dialogue, general-purpose instruction following, and conversati...
LangMart: Mistral: Mixtral 8x22B Instruct
Mistral's official instruct fine-tuned version of [Mixtral 8x22B](/models/mistralai/mixtral-8x22b). It uses 39B active parameters out of 141B, offering unparalleled cost efficiency for its size. Its s...
LangMart: Mistral: Mixtral 8x7B Instruct
Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture of Experts, by Mistral AI, for chat and instruction use. Incorporates 8 experts (feed-forward networks) for a total of 47 billion parame...
LangMart: Mistral: Pixtral 12B
The first multi-modal, text+image-to-text model from Mistral AI. Its weights were launched via torrent: https://x.com/mistralai/status/1833758285167722836.
LangMart: Mistral: Pixtral Large 2411
Pixtral Large is a 124B parameter, open-weight, multimodal model built on top of [Mistral Large 2](/mistralai/mistral-large-2411). The model is able to understand documents, charts and natural images....
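A multimodal request to Pixtral Large can be sketched as a single user message mixing text and image parts, following the common OpenAI-style `image_url` content-part convention. The model ID, field names, and URL are assumptions for illustration.

```python
# Minimal sketch: one user message combining a text part and an image part
# for a vision-capable model like Pixtral Large. The payload shape follows
# the widely used chat-completions convention; treat names as assumptions.
vision_payload = {
    "model": "mistralai/pixtral-large-2411",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Summarize the chart in this image."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }
    ],
}
```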
LangMart: Mistral: Saba
Mistral Saba is a 24B-parameter language model specifically designed for the Middle East and South Asia, delivering accurate and contextually relevant responses while maintaining efficient performance...
LangMart: Mistral: Voxtral Small 24B 2507
Voxtral Small is an enhancement of Mistral Small 3, incorporating state-of-the-art audio input capabilities while retaining best-in-class text performance. It excels at speech transcription, translati...
LangMart: Mistralai/devstral 2512:free
Free tier version of Mistralai/devstral 2512.
LangMart: Mistralai/mistral 7b Instruct V0.1
Mistralai/mistral 7b Instruct V0.1 is a capable language model from Mistral AI for general-purpose text generation and analysis tasks.
LangMart: Mistralai/mistral 7b Instruct V0.2
Mistralai/mistral 7b Instruct V0.2 is a capable language model from Mistral AI for general-purpose text generation and analysis tasks.
LangMart: Mistralai/mistral 7b Instruct V0.3
Mistralai/mistral 7b Instruct V0.3 is a capable language model from Mistral AI for general-purpose text generation and analysis tasks.
LangMart: Mistralai/mistral 7b Instruct:free
Free tier version of Mistralai/mistral 7b Instruct.
LangMart: Mistralai/mistral Medium 3.1
Mistralai/mistral Medium 3.1 is a capable language model from Mistral AI for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image understanding.
LangMart: Mistralai/mistral Small 3.1 24b Instruct
Mistralai/mistral Small 3.1 24b Instruct is a capable language model from Mistral AI for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image understanding.
LangMart: Mistralai/mistral Small 3.1 24b Instruct:free
Free tier version of Mistralai/mistral Small 3.1 24b Instruct. This model supports multimodal capabilities including vision and image understanding.
LangMart: Mistralai/mistral Small 3.2 24b Instruct
Mistralai/mistral Small 3.2 24b Instruct is a capable language model from Mistral AI for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image understanding.
LangMart: MoonshotAI: Kimi Dev 72B
Kimi-Dev-72B is an open-source large language model fine-tuned for software engineering and issue resolution tasks. Based on Qwen2.5-72B, it is optimized using large-scale reinforcement learning that ...
LangMart: MoonshotAI: Kimi K2 0711
Kimi K2 Instruct is a large-scale Mixture-of-Experts (MoE) language model developed by Moonshot AI, featuring 1 trillion total parameters with 32 billion active per forward pass. It is optimized for a...
LangMart: MoonshotAI: Kimi K2 0905
Kimi K2 0905 is the September update of [Kimi K2 0711](moonshotai/kimi-k2). It is a large-scale Mixture-of-Experts (MoE) language model developed by Moonshot AI, featuring 1 trillion total parameters ...
LangMart: MoonshotAI: Kimi K2 Thinking
Kimi K2 Thinking is Moonshot AI’s most advanced open reasoning model to date, extending the K2 series into agentic, long-horizon reasoning. Built on the trillion-parameter Mixture-of-Experts (MoE) arc...
LangMart: Moonshotai/kimi K2 0905:exacto
Moonshotai/kimi K2 0905:exacto is a capable language model from Moonshot AI for general-purpose text generation and analysis tasks.
LangMart: Moonshotai/kimi K2:free
Free tier version of Moonshotai/kimi K2.
LangMart: Morph: Morph V3 Fast
Morph's fastest apply model for code edits. ~10,500 tokens/sec with 96% accuracy for rapid code transformations.
LangMart: Morph: Morph V3 Large
Morph's high-accuracy apply model for complex code edits. ~4,500 tokens/sec with 98% accuracy for precise code transformations.
LangMart: MythoMax 13B
One of the highest performing and most popular fine-tunes of Llama 2 13B, with rich descriptions and roleplay. #merge
LangMart: Neversleep/llama 3.1 Lumimaid 8b
Neversleep/llama 3.1 Lumimaid 8b is a capable language model from NeverSleep for general-purpose text generation and analysis tasks.
LangMart: Nex Agi/deepseek V3.1 Nex N1:free
Free tier version of Nex Agi/deepseek V3.1 Nex N1.
LangMart: Noromaid 20B
A collab between IkariDev and Undi. This merge is suitable for RP, ERP, and general knowledge.
LangMart: Nous: DeepHermes 3 Mistral 24B Preview
DeepHermes 3 (Mistral 24B Preview) is an instruction-tuned language model by Nous Research based on Mistral-Small-24B, designed for chat, function calling, and advanced multi-turn reasoning. It introd...
LangMart: Nous: Hermes 4 405B
Hermes 4 is a large-scale reasoning model built on Meta-Llama-3.1-405B and released by Nous Research. It introduces a hybrid reasoning mode, where the model can choose to deliberate internally with
LangMart: Nous: Hermes 4 70B
Hermes 4 70B is a hybrid reasoning model from Nous Research, built on Meta-Llama-3.1-70B. It introduces the same hybrid mode as the larger 405B release, allowing the model to either respond directly o...
LangMart: NousResearch: Hermes 2 Pro - Llama-3 8B
Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mod...
LangMart: Nousresearch/hermes 3 Llama 3.1 405b
Nousresearch/hermes 3 Llama 3.1 405b is a capable language model from Nous Research for general-purpose text generation and analysis tasks.
LangMart: Nousresearch/hermes 3 Llama 3.1 405b:free
Free tier version of Nousresearch/hermes 3 Llama 3.1 405b.
LangMart: Nousresearch/hermes 3 Llama 3.1 70b
Nousresearch/hermes 3 Llama 3.1 70b is a capable language model from Nous Research for general-purpose text generation and analysis tasks.
LangMart: NVIDIA: Nemotron 3 Nano 30B A3B
NVIDIA Nemotron 3 Nano 30B A3B is a small MoE language model designed for high compute efficiency and accuracy, enabling developers to build specialized agentic AI systems.
LangMart: NVIDIA: Nemotron Nano 12B 2 VL
NVIDIA Nemotron Nano 2 VL is a 12-billion-parameter open multimodal reasoning model designed for video understanding and document intelligence. It introduces a hybrid Transformer-Mamba architecture, c...
LangMart: NVIDIA: Nemotron Nano 9B V2
NVIDIA-Nemotron-Nano-9B-v2 is a large language model (LLM) trained from scratch by NVIDIA, and designed as a unified model for both reasoning and non-reasoning tasks. It responds to user queries and t...
LangMart: Nvidia/llama 3.1 Nemotron 70b Instruct
Nvidia/llama 3.1 Nemotron 70b Instruct is a capable language model from NVIDIA for general-purpose text generation and analysis tasks.
LangMart: Nvidia/llama 3.1 Nemotron Ultra 253b V1
Nvidia/llama 3.1 Nemotron Ultra 253b V1 is a capable language model from NVIDIA for general-purpose text generation and analysis tasks.
LangMart: Nvidia/llama 3.3 Nemotron Super 49b V1.5
Nvidia/llama 3.3 Nemotron Super 49b V1.5 is a capable language model from NVIDIA for general-purpose text generation and analysis tasks.
LangMart: Nvidia/nemotron 3 Nano 30b A3b:free
Free tier version of Nvidia/nemotron 3 Nano 30b A3b.
LangMart: Nvidia/nemotron Nano 12b V2 Vl:free
Free tier version of Nvidia/nemotron Nano 12b V2 Vl. This model supports multimodal capabilities including vision and image understanding.
LangMart: Nvidia/nemotron Nano 9b V2:free
Free tier version of Nvidia/nemotron Nano 9b V2.
LangMart: OpenAI: ChatGPT-4o
ChatGPT-4o is a conversational multimodal model from OpenAI that tracks the GPT-4o version currently deployed in ChatGPT.
LangMart: OpenAI: Codex Mini
codex-mini-latest is a fine-tuned version of o4-mini specifically for use in Codex CLI. For direct use in the API, we recommend starting with gpt-4.1.
LangMart: OpenAI: GPT-3.5 Turbo
GPT-3.5 Turbo is a conversational AI model from OpenAI designed for multi-turn dialogue, optimized for fast, cost-effective chat.
LangMart: OpenAI: GPT-3.5 Turbo 16k
GPT-3.5 Turbo 16k is a conversational AI model from OpenAI with an extended 16k-token context window.
LangMart: OpenAI: GPT-3.5 Turbo Instruct
GPT-3.5 Turbo Instruct is an instruction-following model from OpenAI served through the completions API rather than the chat API.
LangMart: OpenAI: GPT-4
GPT-4 is a conversational AI model from OpenAI designed for multi-turn dialogue and complex reasoning tasks.
LangMart: OpenAI: GPT-4 (older v0314)
GPT-4-0314 is the first version of GPT-4 released, with a context length of 8,192 tokens, and was supported until June 14. Training data: up to Sep 2021.
LangMart: OpenAI: GPT-4 Turbo
GPT-4 Turbo is a conversational AI model from OpenAI with a larger context window and lower cost than GPT-4, plus support for image input.
LangMart: OpenAI: GPT-4 Turbo (older v1106)
GPT-4 Turbo (older v1106) is the dated November 2023 snapshot of GPT-4 Turbo, developed by OpenAI.
LangMart: OpenAI: GPT-4 Turbo Preview
GPT-4 Turbo Preview is a preview version of GPT-4 Turbo from OpenAI for multi-turn dialogue and interactive tasks.
LangMart: OpenAI: GPT-4.1
GPT-4.1 is a language model from OpenAI optimized for coding, instruction following, and long-context tasks.
LangMart: OpenAI: GPT-4.1 Mini
GPT-4.1 Mini is a smaller, faster, lower-cost member of the GPT-4.1 family from OpenAI.
LangMart: OpenAI: GPT-4.1 Nano
GPT-4.1 Nano is the smallest and fastest member of the GPT-4.1 family from OpenAI, optimized for low-latency tasks.
LangMart: OpenAI: GPT-4o
GPT-4o ("omni") is a multimodal conversational model from OpenAI that accepts text and image input with fast, cost-efficient performance.
LangMart: OpenAI: GPT-4o (2024-05-13)
GPT-4o (2024-05-13) is the dated May 2024 snapshot of GPT-4o, developed by OpenAI.
LangMart: OpenAI: GPT-4o (2024-08-06)
GPT-4o (2024-08-06) is the dated August 2024 snapshot of GPT-4o, developed by OpenAI.
LangMart: OpenAI: GPT-4o (2024-11-20)
GPT-4o (2024-11-20) is the dated November 2024 snapshot of GPT-4o, developed by OpenAI.
LangMart: OpenAI: GPT-4o Audio
GPT-4o Audio is a GPT-4o variant from OpenAI that supports audio input and output in conversation.
LangMart: OpenAI: GPT-4o Search Preview
GPT-4o Search Preview is a GPT-4o variant from OpenAI specialized for web search in chat completions.
LangMart: OpenAI: GPT-4o-mini
GPT-4o-mini is a small, fast, cost-efficient multimodal model from OpenAI for dialogue and everyday tasks.
LangMart: OpenAI: GPT-4o-mini (2024-07-18)
GPT-4o-mini (2024-07-18) is the dated July 2024 snapshot of GPT-4o-mini, developed by OpenAI.
LangMart: OpenAI: GPT-4o-mini Search Preview
GPT-4o-mini Search Preview is a GPT-4o-mini variant from OpenAI specialized for web search in chat completions.
LangMart: OpenAI: GPT-5
GPT-5 is OpenAI's flagship model for coding, reasoning, and agentic tasks across domains.
LangMart: OpenAI: GPT-5 Chat
GPT-5 Chat is designed for advanced, natural, multimodal, and context-aware conversations for enterprise applications.
LangMart: OpenAI: GPT-5 Codex
GPT-5 Codex is a version of GPT-5 from OpenAI optimized for agentic coding tasks in Codex.
LangMart: OpenAI: GPT-5 Image
[GPT-5](https://langmart.ai/model-docs) Image combines OpenAI's GPT-5 model with state-of-the-art image generation capabilities. It offers major improvements in reasoning, code quality, and user exper...
LangMart: OpenAI: GPT-5 Image Mini
GPT-5 Image Mini combines OpenAI's advanced language capabilities, powered by [GPT-5 Mini](https://langmart.ai/model-docs), with GPT Image 1 Mini for efficient image generation. This natively multimod...
LangMart: OpenAI: GPT-5 Mini
GPT-5 Mini is a smaller, faster, lower-cost member of the GPT-5 family from OpenAI.
LangMart: OpenAI: GPT-5 Nano
GPT-5 Nano is the smallest and fastest member of the GPT-5 family from OpenAI, optimized for low-latency tasks.
LangMart: OpenAI: GPT-5 Pro
GPT-5 Pro is a GPT-5 variant from OpenAI that uses extended reasoning to produce more reliable answers on hard problems.
LangMart: OpenAI: GPT-5.1
GPT-5.1 is an updated version of GPT-5 from OpenAI with improved instruction following and conversational quality.
LangMart: OpenAI: GPT-5.1-Codex
GPT-5.1-Codex is a GPT-5.1 variant from OpenAI optimized for agentic coding tasks.
LangMart: OpenAI: GPT-5.1-Codex-Max
GPT-5.1-Codex-Max is the most capable model in OpenAI's GPT-5.1 Codex line, built for long-running agentic coding tasks.
LangMart: OpenAI: GPT-5.1-Codex-Mini
GPT-5.1-Codex-Mini is a smaller, more cost-efficient member of OpenAI's GPT-5.1 Codex line.
LangMart: OpenAI: GPT-5.2
GPT-5.2 is a conversational AI model from OpenAI designed for multi-turn dialogue and interactive tasks.
LangMart: OpenAI: GPT-5.2 Pro
GPT-5.2 Pro is a GPT-5.2 variant from OpenAI that uses extended reasoning for more reliable answers on hard problems.
LangMart: OpenAI: gpt-oss-120b
gpt-oss-120b is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI designed for high-reasoning, agentic, and general-purpose production use cases. It activates 5.1B par...
LangMart: OpenAI: gpt-oss-20b
gpt-oss-20b is an open-weight 21B parameter model released by OpenAI under the Apache 2.0 license. It uses a Mixture-of-Experts (MoE) architecture with 3.6B active parameters per forward pass, optimiz...
LangMart: OpenAI: gpt-oss-safeguard-20b
gpt-oss-safeguard-20b is a safety reasoning model from OpenAI built upon gpt-oss-20b. This open-weight, 21B-parameter Mixture-of-Experts (MoE) model offers lower latency for safety tasks like content ...
LangMart: OpenAI: o1
o1 is a reasoning model from OpenAI trained to think before answering, excelling at complex tasks in science, coding, and math.
LangMart: OpenAI: o1-pro
o1-pro is an o1 variant from OpenAI that uses more compute to think harder and provide more reliable answers.
LangMart: OpenAI: o3
o3 is a powerful reasoning model from OpenAI with strong performance on coding, math, science, and visual tasks.
LangMart: OpenAI: o3 Deep Research
o3 Deep Research is an o3 variant from OpenAI optimized for in-depth, multi-step web research.
LangMart: OpenAI: o3 Mini
o3 Mini is a small, cost-efficient reasoning model from OpenAI, strong at math, coding, and science.
LangMart: OpenAI: o3 Mini High
OpenAI o3-mini-high is the same model as [o3-mini](/openai/o3-mini) with reasoning_effort set to high.
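Since o3-mini-high is simply o3-mini with `reasoning_effort` set to high, the same behavior can be requested explicitly. The sketch below follows the chat-completions convention; the model ID is illustrative.

```python
# Sketch: build an o3-mini request with an explicit reasoning effort.
# Setting effort to "high" reproduces what the o3-mini-high alias does.
def o3_mini_request(question: str, effort: str = "high") -> dict:
    """Build a chat request for o3-mini with an explicit reasoning_effort."""
    if effort not in {"low", "medium", "high"}:
        raise ValueError(f"unsupported reasoning_effort: {effort}")
    return {
        "model": "openai/o3-mini",
        "messages": [{"role": "user", "content": question}],
        "reasoning_effort": effort,
    }
```

Higher effort trades latency and cost for more internal deliberation before the answer.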
LangMart: OpenAI: o3 Pro
o3 Pro is an o3 variant from OpenAI that uses more compute to think harder and provide more reliable answers.
LangMart: OpenAI: o4 Mini
o4 Mini is a compact reasoning model from OpenAI with strong performance on math, coding, and visual tasks.
LangMart: OpenAI: o4 Mini Deep Research
o4 Mini Deep Research is an o4-mini variant from OpenAI optimized for in-depth, multi-step web research at lower cost.
LangMart: OpenAI: o4 Mini High
OpenAI o4-mini-high is the same model as [o4-mini](/openai/o4-mini) with reasoning_effort set to high.
LangMart: Openai/gpt 3.5 Turbo 0613
Openai/gpt 3.5 Turbo 0613 is the dated June 2023 snapshot of GPT-3.5 Turbo from OpenAI for general-purpose text generation.
LangMart: Openai/gpt 4o:extended
Openai/gpt 4o:extended is a GPT-4o variant with an extended maximum output length. This model supports multimodal capabilities including vision and image understanding.
LangMart: Openai/gpt 5.1 Chat
Openai/gpt 5.1 Chat is a capable language model from OpenAI for general-purpose dialogue. This model supports multimodal capabilities including vision and image understanding.
LangMart: Openai/gpt 5.2 Chat
Openai/gpt 5.2 Chat is a capable language model from OpenAI for general-purpose dialogue. This model supports multimodal capabilities including vision and image understanding.
LangMart: Openai/gpt Oss 120b:exacto
Openai/gpt Oss 120b:exacto is a capable open-weight language model from OpenAI for general-purpose text generation and analysis tasks.
LangMart: Openai/gpt Oss 120b:free
Free tier version of Openai/gpt Oss 120b.
LangMart: Openai/gpt Oss 20b:free
Free tier version of Openai/gpt Oss 20b.
LangMart: OpenGVLab: InternVL3 78B
The InternVL3 series is an advanced multimodal large language model (MLLM). Compared to InternVL 2.5, InternVL3 demonstrates stronger multimodal perception and reasoning capabilities.
LangMart: Perplexity: Sonar
Sonar is lightweight, affordable, fast, and simple to use — now featuring citations and the ability to customize sources. It is designed for companies seeking to integrate lightweight question-and-ans...
LangMart: Perplexity: Sonar Deep Research
Sonar Deep Research is a research-focused model designed for multi-step retrieval, synthesis, and reasoning across complex topics. It autonomously searches, reads, and evaluates sources, refining its ...
LangMart: Perplexity: Sonar Pro
Note: Sonar Pro pricing includes Perplexity search pricing. See [details here](https://docs.perplexity.ai/guides/pricing#detailed-pricing-breakdown-for-sonar-reasoning-pro-and-sonar-pro)
LangMart: Perplexity: Sonar Pro Search
Exclusively available on the LangMart API, Sonar Pro's new Pro Search mode is Perplexity's most advanced agentic search system. It is designed for deeper reasoning and analysis. Pricing is based on to...
LangMart: Perplexity: Sonar Reasoning
Sonar Reasoning is a reasoning model provided by Perplexity based on [DeepSeek R1](/deepseek/deepseek-r1).
LangMart: Perplexity: Sonar Reasoning Pro
Note: Sonar Pro pricing includes Perplexity search pricing. See [details here](https://docs.perplexity.ai/guides/pricing#detailed-pricing-breakdown-for-sonar-reasoning-pro-and-sonar-pro)
LangMart: Prime Intellect: INTELLECT-3
INTELLECT-3 is a 106B-parameter Mixture-of-Experts model (12B active) post-trained from GLM-4.5-Air-Base using supervised fine-tuning (SFT) followed by large-scale reinforcement learning (RL). It offe...
LangMart: Qwen: Qwen Plus 0728
Qwen Plus 0728, based on the Qwen3 foundation model, is a 1 million context hybrid reasoning model with a balanced performance, speed, and cost combination.
LangMart: Qwen: Qwen VL Max
Qwen VL Max is a visual understanding model with 7500 tokens context length. It excels in delivering optimal performance for a broader spectrum of complex tasks.
LangMart: Qwen: Qwen VL Plus
Qwen's Enhanced Large Visual Language Model. Significantly upgraded for detailed recognition capabilities and text recognition abilities, supporting ultra-high pixel resolutions up to millions of pixe...
LangMart: Qwen: Qwen-Max
Qwen-Max, based on Qwen2.5, provides the best inference performance among [Qwen models](/qwen), especially for complex multi-step tasks. It's a large-scale MoE model that has been pretrained on over 2...
LangMart: Qwen: Qwen-Plus
Qwen-Plus, based on the Qwen2.5 foundation model, is a 131K context model with a balanced performance, speed, and cost combination.
LangMart: Qwen: Qwen-Turbo
Qwen-Turbo, based on Qwen2.5, is a 1M context model that provides fast speed and low cost, suitable for simple tasks.
LangMart: Qwen: Qwen3 14B
Qwen3-14B is a dense 14.8B parameter causal language model from the Qwen3 series, designed for both complex reasoning and efficient dialogue. It supports seamless switching between a "thinking" mode f...
LangMart: Qwen: Qwen3 235B A22B
Qwen3-235B-A22B is a 235B parameter mixture-of-experts (MoE) model developed by Qwen, activating 22B parameters per forward pass. It supports seamless switching between a "thinking" mode for complex r...
LangMart: Qwen: Qwen3 235B A22B Instruct 2507
Qwen3-235B-A22B-Instruct-2507 is a multilingual, instruction-tuned mixture-of-experts language model based on the Qwen3-235B architecture, with 22B active parameters per forward pass. It is optimized ...
LangMart: Qwen: Qwen3 235B A22B Thinking 2507
Qwen3-235B-A22B-Thinking-2507 is a high-performance, open-weight Mixture-of-Experts (MoE) language model optimized for complex reasoning tasks. It activates 22B of its 235B parameters per forward pass...
LangMart: Qwen: Qwen3 30B A3B
Qwen3, the latest generation in the Qwen large language model series, features both dense and mixture-of-experts (MoE) architectures to excel in reasoning, multilingual support, and advanced agent tas...
LangMart: Qwen: Qwen3 30B A3B Instruct 2507
Qwen3-30B-A3B-Instruct-2507 is a 30.5B-parameter mixture-of-experts language model from Qwen, with 3.3B active parameters per inference. It operates in non-thinking mode and is designed for high-quali...
LangMart: Qwen: Qwen3 30B A3B Thinking 2507
Qwen3-30B-A3B-Thinking-2507 is a 30B parameter Mixture-of-Experts reasoning model optimized for complex tasks requiring extended multi-step thinking. The model is designed specifically for “thinking m...
LangMart: Qwen: Qwen3 32B
Qwen3-32B is a dense 32.8B parameter causal language model from the Qwen3 series, optimized for both complex reasoning and efficient dialogue. It supports seamless switching between a "thinking" mode ...
LangMart: Qwen: Qwen3 8B
Qwen3-8B is a dense 8.2B parameter causal language model from the Qwen3 series, designed for both reasoning-heavy tasks and efficient dialogue. It supports seamless switching between "thinking" mode f...
LangMart: Qwen: Qwen3 Coder 30B A3B Instruct
Qwen3-Coder-30B-A3B-Instruct is a 30.5B parameter Mixture-of-Experts (MoE) model with 128 experts (8 active per forward pass), designed for advanced code generation, repository-scale understanding, an...
LangMart: Qwen: Qwen3 Coder 480B A35B
Qwen3-Coder-480B-A35B-Instruct is a Mixture-of-Experts (MoE) code generation model developed by the Qwen team. It is optimized for agentic coding tasks such as function calling, tool use, and long-con...
LangMart: Qwen: Qwen3 Coder Flash
Qwen3 Coder Flash is Alibaba's fast and cost efficient version of their proprietary Qwen3 Coder Plus. It is a powerful coding agent model specializing in autonomous programming via tool calling and en...
LangMart: Qwen: Qwen3 Coder Plus
Qwen3 Coder Plus is Alibaba's proprietary version of the Open Source Qwen3 Coder 480B A35B. It is a powerful coding agent model specializing in autonomous programming via tool calling and environment ...
LangMart: Qwen: Qwen3 Max
Qwen3-Max is an updated release built on the Qwen3 series, offering major improvements in reasoning, instruction following, multilingual support, and long-tail knowledge coverage compared to the Janua...
LangMart: Qwen: Qwen3 Next 80B A3B Instruct
Qwen3-Next-80B-A3B-Instruct is an instruction-tuned chat model in the Qwen3-Next series optimized for fast, stable responses without “thinking” traces. It targets complex tasks across reasoning, code ...
LangMart: Qwen: Qwen3 Next 80B A3B Thinking
Qwen3-Next-80B-A3B-Thinking is a reasoning-first chat model in the Qwen3-Next line that outputs structured “thinking” traces by default. It’s designed for hard multi-step problems: math proofs, code s...
LangMart: Qwen: Qwen3 VL 235B A22B Instruct
Qwen3-VL-235B-A22B Instruct is an open-weight multimodal model that unifies strong text generation with visual understanding across images and video. The Instruct model targets general vision-language...
LangMart: Qwen: Qwen3 VL 235B A22B Thinking
Qwen3-VL-235B-A22B Thinking is a multimodal model that unifies strong text generation with visual understanding across images and video. The Thinking model is optimized for multimodal reasoning in STE...
LangMart: Qwen: Qwen3 VL 30B A3B Instruct
Qwen3-VL-30B-A3B-Instruct is a multimodal model that unifies strong text generation with visual understanding for images and videos. Its Instruct variant optimizes instruction-following for general mu...
LangMart: Qwen: Qwen3 VL 30B A3B Thinking
Qwen3-VL-30B-A3B-Thinking is a multimodal model that unifies strong text generation with visual understanding for images and videos. Its Thinking variant enhances reasoning in STEM, math, and complex ...
LangMart: Qwen: Qwen3 VL 32B Instruct
Qwen3-VL-32B-Instruct is a large-scale multimodal vision-language model designed for high-precision understanding and reasoning across text, images, and video. With 32 billion parameters, it combines ...
LangMart: Qwen: Qwen3 VL 8B Instruct
Qwen3-VL-8B-Instruct is a multimodal vision-language model from the Qwen3-VL series, built for high-fidelity understanding and reasoning across text, images, and video. It features improved multimodal...
LangMart: Qwen: Qwen3 VL 8B Thinking
Qwen3-VL-8B-Thinking is the reasoning-optimized variant of the Qwen3-VL-8B multimodal model, designed for advanced visual and textual reasoning across complex scenes, documents, and temporal sequences...
LangMart: Qwen: QwQ 32B
QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ, which is capable of thinking and reasoning, can achieve significantly enhanced performance in d...
LangMart: Qwen/qwen 2.5 72b Instruct
Qwen/qwen 2.5 72b Instruct is a capable text-only language model from the Qwen 2.5 series, offered through LangMart for general-purpose text generation and analysis tasks.
LangMart: Qwen/qwen 2.5 7b Instruct
Qwen/qwen 2.5 7b Instruct is a capable text-only language model from the Qwen 2.5 series, offered through LangMart for general-purpose text generation and analysis tasks.
LangMart: Qwen/qwen 2.5 Coder 32b Instruct
Qwen/qwen 2.5 Coder 32b Instruct is a text-only language model from the Qwen 2.5 series, offered through LangMart and optimized for code generation and programming tasks.
LangMart: Qwen/qwen 2.5 Vl 7b Instruct
Qwen/qwen 2.5 Vl 7b Instruct is a capable language model from the Qwen 2.5 series, offered through LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image understanding.
LangMart: Qwen/qwen 2.5 Vl 7b Instruct:free
Free tier version of Qwen/qwen 2.5 Vl 7b Instruct. This model supports multimodal capabilities including vision and image understanding.
LangMart: Qwen/qwen Plus 2025 07 28:thinking
Qwen/qwen Plus 2025 07 28:thinking with extended thinking capabilities. This model supports multimodal capabilities including vision and image understanding.
LangMart: Qwen/qwen2.5 Coder 7b Instruct
Qwen/qwen2.5 Coder 7b Instruct is a text-only language model from the Qwen 2.5 series, offered through LangMart and optimized for code generation and programming tasks.
LangMart: Qwen/qwen2.5 Vl 32b Instruct
Qwen/qwen2.5 Vl 32b Instruct is a capable language model from the Qwen 2.5 series, offered through LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image understanding.
LangMart: Qwen/qwen2.5 Vl 72b Instruct
Qwen/qwen2.5 Vl 72b Instruct is a capable language model from the Qwen 2.5 series, offered through LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image understanding.
LangMart: Qwen/qwen3 4b:free
Free tier version of Qwen/qwen3 4b, a compact text-only language model from the Qwen3 series.
LangMart: Qwen/qwen3 Coder:exacto
Qwen/qwen3 Coder:exacto is a text-only language model from the Qwen3 series, offered through LangMart and optimized for code generation and programming tasks.
LangMart: Qwen/qwen3 Coder:free
Free tier version of Qwen/qwen3 Coder, a text-only model optimized for code generation and programming tasks.
LangMart: Relace: Relace Apply 3
Relace Apply 3 is a specialized code-patching LLM that merges AI-suggested edits straight into your source files. It can apply updates from GPT-4o, Claude, and others into your files at 10,000 tokens/...
LangMart: Relace: Relace Search
The relace-search model uses 4-12 `view_file` and `grep` tools in parallel to explore a codebase and return relevant files to the user request.
LangMart: ReMM SLERP 13B
A recreation trial of the original MythoMax-L2-13B but with updated models. #merge
LangMart: Sao10K: Llama 3 8B Lunaris
Lunaris 8B is a versatile generalist and roleplaying model based on Llama 3. It's a strategic merge of multiple models, designed to balance creativity with improved logic and general knowledge.
LangMart: Sao10k: Llama 3 Euryale 70B v2.1
Euryale 70B v2.1 is a model focused on creative roleplay from [Sao10k](https://ko-fi.com/sao10k).
LangMart: Sao10k/l3.1 70b Hanami X1
Sao10k/l3.1 70b Hanami X1 is a text-only creative writing and roleplay model based on Llama 3.1 70B, offered through LangMart.
LangMart: Sao10k/l3.1 Euryale 70b
Sao10k/l3.1 Euryale 70b is a text-only creative roleplay model based on Llama 3.1 70B, offered through LangMart.
LangMart: Sao10k/l3.3 Euryale 70b
Sao10k/l3.3 Euryale 70b is a text-only creative roleplay model based on Llama 3.3 70B, offered through LangMart.
LangMart: SorcererLM 8x22B
SorcererLM is an advanced RP and storytelling model, built as a Low-rank 16-bit LoRA fine-tuned on [WizardLM-2 8x22B](/microsoft/wizardlm-2-8x22b).
LangMart: StepFun: Step3
Step3 is a cutting-edge multimodal reasoning model—built on a Mixture-of-Experts architecture with 321B total parameters and 38B active. It is designed end-to-end to minimize decoding costs while deli...
LangMart: Switchpoint Router
Switchpoint AI's router instantly analyzes your request and directs it to the optimal AI from an ever-evolving library.
LangMart: Tencent: Hunyuan A13B Instruct
Hunyuan-A13B is a 13B active parameter Mixture-of-Experts (MoE) language model developed by Tencent, with a total parameter count of 80B and support for reasoning via Chain-of-Thought. It offers compe...
LangMart: TheDrummer: Rocinante 12B
Rocinante 12B is designed for engaging storytelling and rich prose.
LangMart: TheDrummer: Skyfall 36B V2
Skyfall 36B v2 is an enhanced iteration of Mistral Small 2501, specifically fine-tuned for improved creativity, nuanced writing, role-playing, and coherent storytelling.
LangMart: TheDrummer: UnslopNemo 12B
UnslopNemo v4.1 is the latest addition from the creator of Rocinante, designed for adventure writing and role-play scenarios.
LangMart: Thedrummer/cydonia 24b V4.1
Thedrummer/cydonia 24b V4.1 is a text-only creative writing and roleplay model from TheDrummer, based on Mistral Small and offered through LangMart.
LangMart: Thudm/glm 4.1v 9b Thinking
Thudm/glm 4.1v 9b Thinking with extended thinking capabilities. This model supports multimodal capabilities including vision and image understanding.
LangMart: TNG: DeepSeek R1T Chimera
DeepSeek-R1T-Chimera is created by merging DeepSeek-R1 and DeepSeek-V3 (0324), combining the reasoning capabilities of R1 with the token efficiency improvements of V3. It is based on a DeepSeek-MoE Tr...
LangMart: TNG: DeepSeek R1T2 Chimera
DeepSeek-TNG-R1T2-Chimera is the second-generation Chimera model from TNG Tech. It is a 671B-parameter mixture-of-experts text-generation model assembled from DeepSeek-AI’s R1-0528, R1, and V3-0324 c...
LangMart: TNG: R1T Chimera
TNG-R1T-Chimera is an experimental LLM with a penchant for creative storytelling and character interaction. It is a derivative of the original TNG/DeepSeek-R1T-Chimera released in April 2025 and is availa...
LangMart: Tngtech/deepseek R1t Chimera:free
Free tier version of Tngtech/deepseek R1t Chimera, a text-only reasoning-focused merge of DeepSeek models.
LangMart: Tngtech/deepseek R1t2 Chimera:free
Free tier version of Tngtech/deepseek R1t2 Chimera, a text-only reasoning-focused merge of DeepSeek models.
LangMart: Tngtech/tng R1t Chimera:free
Free tier version of Tngtech/tng R1t Chimera, a text-only model with a penchant for creative storytelling and character interaction.
LangMart: Tongyi DeepResearch 30B A3B
Tongyi DeepResearch is an agentic large language model developed by Tongyi Lab, with 30 billion total parameters activating only 3 billion per token. It's optimized for long-horizon, deep information-...
LangMart: WizardLM-2 8x22B
WizardLM-2 8x22B is Microsoft AI's most advanced Wizard model. It demonstrates highly competitive performance compared to leading proprietary models, and it consistently outperforms all existing state...
LangMart: X Ai/grok 4.1 Fast
X Ai/grok 4.1 Fast is a capable language model developed by xAI and offered through LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image understanding.
LangMart: xAI: Grok 3
Grok 3 is the latest model from xAI. It's their flagship model that excels at enterprise use cases like data extraction, coding, and text summarization. Possesses deep domain knowledge in finance, hea...
LangMart: xAI: Grok 3 Beta
Grok 3 is the latest model from xAI. It's their flagship model that excels at enterprise use cases like data extraction, coding, and text summarization. Possesses deep domain knowledge in finance, hea...
LangMart: xAI: Grok 3 Mini
A lightweight model that thinks before responding. Fast, smart, and great for logic-based tasks that do not require deep domain knowledge. The raw thinking traces are accessible.
LangMart: xAI: Grok 3 Mini Beta
Grok 3 Mini is a lightweight, smaller thinking model. Unlike traditional models that generate answers immediately, Grok 3 Mini thinks before responding. It’s ideal for reasoning-heavy tasks that don’t...
LangMart: xAI: Grok 4
Grok 4 is xAI's latest reasoning model with a 256k context window. It supports parallel tool calling, structured outputs, and both image and text inputs. Note that reasoning is not exposed, reasoning ...
LangMart: xAI: Grok 4 Fast
Grok 4 Fast is xAI's latest multimodal model with SOTA cost-efficiency and a 2M token context window. It comes in two flavors: non-reasoning and reasoning. Read more about the model on xAI's [news pos...
LangMart: xAI: Grok Code Fast 1
Grok Code Fast 1 is a speedy and economical reasoning model that excels at agentic coding. With reasoning traces visible in the response, developers can steer Grok Code for high-quality work flows.
LangMart: Xiaomi/mimo V2 Flash:free
Xiaomi/mimo V2 Flash:free is the free tier of Xiaomi's MiMo-V2-Flash, a text-only Mixture-of-Experts language model optimized for rapid response generation.
LangMart: Z Ai/glm 4.5
Z Ai/glm 4.5 is a capable text-only language model from Z.AI, offered through LangMart for general-purpose text generation and analysis tasks.
LangMart: Z Ai/glm 4.5 Air
Z Ai/glm 4.5 Air is a lightweight text-only language model from Z.AI, offered through LangMart for general-purpose text generation and analysis tasks.
LangMart: Z Ai/glm 4.5 Air:free
Free tier version of Z Ai/glm 4.5 Air, a lightweight text-only language model from Z.AI.
LangMart: Z Ai/glm 4.5v
Z Ai/glm 4.5v is a capable language model from Z.AI, offered through LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image understanding.
LangMart: Z Ai/glm 4.6
Z Ai/glm 4.6 is a capable text-only language model from Z.AI, offered through LangMart for general-purpose text generation and analysis tasks.
LangMart: Z Ai/glm 4.6:exacto
Z Ai/glm 4.6:exacto is a capable text-only language model from Z.AI, offered through LangMart for general-purpose text generation and analysis tasks.
LangMart: Z Ai/glm 4.6v
Z Ai/glm 4.6v is a capable language model from Z.AI, offered through LangMart for general-purpose text generation and analysis tasks. This model supports multimodal capabilities including vision and image understanding.
LangMart: Z Ai/glm 4.7
Z Ai/glm 4.7 is a capable text-only language model from Z.AI, offered through LangMart for general-purpose text generation and analysis tasks.
LangMart: Z.AI: GLM 4 32B
GLM 4 32B is a cost-effective foundation language model.
Perplexity PPLX 7B Online
**Model ID**: `perplexity/pplx-7b-online`
Perplexity Sonar Reasoning
**Model ID**: `perplexity/sonar-reasoning`
Perplexity: Sonar Pro
Perplexity Sonar Pro is an advanced search-augmented language model designed for in-depth, multi-step queries with added extensibility. It offers approximately double the citations per search compared to standard Sonar.
Qwen 1.5 14B Chat
**Model ID**: `qwen/qwen-1.5-14b-chat`
Qwen 2.5 72B Instruct
**Model ID**: `qwen/qwen-2.5-72b-instruct`
Qwen 2.5 7B Instruct
**Model ID**: `qwen/qwen-2.5-7b-instruct`
Qwen: Qwen3 235B A22B
Qwen3-235B-A22B is a 235 billion parameter mixture-of-experts (MoE) model developed by Qwen (Alibaba Cloud), activating 22 billion parameters per forward pass. This architecture allows the model to de...
Qwen: Qwen3 VL 8B Instruct
**Model ID:** `qwen/qwen3-vl-8b-instruct`
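Since this is a vision-language model, image inputs are typically sent as OpenAI-style content parts. A minimal sketch, assuming the provider accepts base64 data URLs (the image bytes below are a stand-in, not a real image):

```python
import base64
import json

# Stand-in bytes; in practice, read a real image file here.
fake_png = base64.b64encode(b"\x89PNG...").decode()

# Chat payload mixing a text part and an image part in one user message.
payload = {
    "model": "qwen/qwen3-vl-8b-instruct",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{fake_png}"},
                },
            ],
        }
    ],
}

body = json.dumps(payload)
```

Providers may also accept plain HTTPS image URLs in the `image_url` field; consult the API reference for size and format limits.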
Qwen2.5 Coder 32B Instruct
Qwen2.5 Coder 32B Instruct is a code-focused large language model representing the latest iteration in the Qwen coding series. It replaces the earlier CodeQwen1.5 with significantly enhanced capabilit...
Qwen2.5 VL 72B Instruct
Qwen2.5 VL 72B Instruct is a state-of-the-art multimodal model that excels at visual understanding and reasoning tasks. The model demonstrates exceptional capabilities in:
Reka Flash
**Model ID**: `reka/reka-flash`
Stable Diffusion 3.5 Large
Stable Diffusion 3.5 Large is the most powerful model in the Stable Diffusion family, featuring superior quality and prompt adherence. It is a Multimodal Diffusion Transformer (MMDiT) text-to-image ge...
Together AI: Arcee AI Agent
Arcee AI Agent is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.
Together AI: Arcee AI Caller Agent
Arcee AI Caller Agent is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.
Together AI: Arcee AI Spotlight
Arcee AI Spotlight is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.
Together AI: Arcee AI Virtuoso
Arcee AI Virtuoso is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.
Together AI: BGE Base EN v1.5
BGE Base EN v1.5 is an English text embedding model from BAAI, served by Together AI for semantic search and retrieval tasks.
Together AI: BGE Large EN v1.5
BGE Large EN v1.5 is an English text embedding model from BAAI, served by Together AI for semantic search and retrieval tasks.
Together AI: Cogito v1 Preview 32B
Cogito v1 Preview 32B is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.
Together AI: Cogito v1 Preview 70B
Cogito v1 Preview 70B is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.
Together AI: Cogito v1 Preview 8B
Cogito v1 Preview 8B is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.
Together AI: DeepSeek R1
DeepSeek R1 is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.
Together AI: DeepSeek V3
DeepSeek V3 is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.
Together AI: DeepSeek V3 0324
DeepSeek V3 0324 is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.
Together AI: FLUX.1 Dev
FLUX.1 Dev is a text-to-image generation model from Black Forest Labs, served by Together AI for creating images from text prompts.
Together AI: FLUX.1 Pro
FLUX.1 Pro is a text-to-image generation model from Black Forest Labs, served by Together AI for creating high-quality images from text prompts.
Together AI: FLUX.1.1 Pro
FLUX.1.1 Pro is a text-to-image generation model from Black Forest Labs, served by Together AI, offering improved quality and speed over FLUX.1 Pro.
Together AI: FLUX.1.1 Pro Ultra
FLUX.1.1 Pro Ultra is a text-to-image generation model from Black Forest Labs, served by Together AI, supporting higher-resolution image generation.
Together AI: GLM-4 9B
GLM-4 9B is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.
Together AI: GLM-Z1 9B
GLM-Z1 9B is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.
Together AI: GTE ModernBERT Base
GTE ModernBERT Base is a text embedding model built on ModernBERT, served by Together AI for semantic search and retrieval tasks.
Together AI: Kimi K2 Instruct
Kimi K2 Instruct is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.
Together AI: Llama 3.1 405B
Llama 3.1 405B is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.
Together AI: Llama 3.1 70B
Llama 3.1 70B is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.
Together AI: Llama 3.1 8B
Llama 3.1 8B is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.
Together AI: Llama 3.3 70B
Llama 3.3 70B is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.
Together AI: Llama 4 Maverick
Llama 4 Maverick is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.
Together AI: Llama 4 Scout
Llama 4 Scout is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.
Together AI: Llama Guard 2 8B
Llama Guard 2 8B is a content safety classification model from Meta, served by Together AI for moderating LLM inputs and outputs.
Together AI: Llama Guard 3 11B Vision Turbo
Llama Guard 3 11B Vision Turbo is a multimodal content safety classification model from Meta, served by Together AI for moderating text and image inputs and outputs.
Together AI: Llama Guard 3 8B
Llama Guard 3 8B is a content safety classification model from Meta, served by Together AI for moderating LLM inputs and outputs.
Together AI: Llama Guard 4 12B
Llama Guard 4 12B is a multimodal content safety classification model from Meta, served by Together AI for moderating LLM inputs and outputs.
Together AI: M2-BERT 80M 32K Retrieval
M2-BERT 80M 32K Retrieval is a long-context (32K) retrieval embedding model from the Monarch Mixer (M2) BERT family, served by Together AI for semantic search tasks.
Together AI: Mixtral 8x22B
Mixtral 8x22B is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.
Together AI: Multilingual e5 Large Instruct
Multilingual e5 Large Instruct is a multilingual text embedding model, served by Together AI for semantic search and retrieval across many languages.
Together AI: Mxbai Rerank Large V2
Mxbai Rerank Large V2 is a reranking model from Mixedbread AI, served by Together AI for reordering search results by relevance to a query.
Together AI: Qwen 2.5 72B
Qwen 2.5 72B is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.
Together AI: Qwen 2.5 7B
Qwen 2.5 7B is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.
Together AI: Qwen 2.5 Coder 32B
Qwen 2.5 Coder 32B is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.
Together AI: Qwen QwQ 32B
Qwen QwQ 32B is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.
Together AI: Qwen3 235B A22B
Qwen3 235B A22B is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.
Together AI: Qwen3 30B A3B
Qwen3 30B A3B is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.
Together AI: Qwen3 32B
Qwen3 32B is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.
Together AI: Qwen3 8B
Qwen3 8B is a conversational AI model designed for multi-turn dialogue and interactive tasks. Developed by Together AI, this model offers reliable performance for diverse applications.
Together AI: Salesforce Llama Rank V1 8B
Salesforce Llama Rank V1 8B is a reranking model from Salesforce, served by Together AI for reordering retrieved documents by relevance.
Together AI: Stable Diffusion 3.5 Large Turbo
Stable Diffusion 3.5 Large Turbo is a fast text-to-image generation model from Stability AI, served by Together AI; it is a distilled variant of Stable Diffusion 3.5 Large.
Together AI: Stable Diffusion XL
Stable Diffusion XL is a text-to-image generation model from Stability AI, served by Together AI for creating images from text prompts.
Together AI: VirtueGuard Text Lite
VirtueGuard Text Lite is a lightweight content safety model from Virtue AI, served by Together AI for moderating text inputs and outputs.
Together AI: Whisper Large v3
Whisper Large v3 is an automatic speech recognition (ASR) model from OpenAI, served by Together AI for transcribing and translating audio.
Toppy M 7B
Toppy M 7B is a wild 7B parameter model that merges several models using the new `task_arithmetic` merge method from [mergekit](https://github.com/cg123/mergekit). This model combines multiple fine-tuned models into a single checkpoint.
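For readers unfamiliar with mergekit, a `task_arithmetic` merge is declared in a YAML config along these lines; the model names and weights here are placeholders, not Toppy's actual recipe:

```yaml
# Hypothetical task_arithmetic merge config (not Toppy's real recipe).
merge_method: task_arithmetic
base_model: mistralai/Mistral-7B-v0.1    # task vectors are taken relative to this base
models:
  - model: example-org/roleplay-finetune-7b   # placeholder fine-tune
    parameters:
      weight: 0.4
  - model: example-org/instruct-finetune-7b   # placeholder fine-tune
    parameters:
      weight: 0.3
dtype: float16
```

Task arithmetic subtracts the base model's weights from each fine-tune to obtain a "task vector", scales each vector by its weight, and adds the sum back onto the base.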
SOLAR-10.7B-Instruct-v1.0 Model Documentation
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("upstage/SOLAR-10.7B-Instruct-v1.0")