-
1
Claude Max
Anthropic
Unleash limitless potential with superior AI collaboration power.
Anthropic's Max plan offers high-powered usage for those who depend heavily on Claude for daily work and large-scale tasks. It provides up to 20 times more usage than the standard Pro plan, making it ideal for individuals or teams that require consistent, intensive AI collaboration. Whether for long, ongoing conversations, data-heavy tasks, or quick, high-stakes decision-making, the Max plan keeps Claude available without disruption. It includes priority access to new features, support for automated tasks, and the ability to scale usage with fluctuating project demands, making it a flexible option for professionals who rely on AI-powered assistance.
-
2
Qwen3
Alibaba
Unleashing groundbreaking AI with unparalleled global language support.
Qwen3, the latest large language model from the Qwen family, introduces a new level of flexibility and power for developers and researchers. With models ranging from the high-performance Qwen3-235B-A22B to the smaller Qwen3-4B, Qwen3 is engineered to excel across a variety of tasks, including coding, math, and natural language processing. The unique hybrid thinking modes allow users to switch between deep reasoning for complex tasks and fast, efficient responses for simpler ones. Additionally, Qwen3 supports 119 languages, making it ideal for global applications. The model has been trained on an unprecedented 36 trillion tokens and leverages cutting-edge reinforcement learning techniques to continually improve its capabilities. Available on multiple platforms, including Hugging Face and ModelScope, Qwen3 is an essential tool for those seeking advanced AI-powered solutions for their projects.
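The mode switch described above can be driven per request. A minimal sketch, assuming an OpenAI-compatible chat endpoint and Qwen3's documented soft-switch convention of appending `/think` or `/no_think` to the user turn (check the Qwen3 model card for the exact mechanism):

```python
# Sketch: toggling Qwen3's hybrid thinking modes per request.
# The /think and /no_think soft switches are taken from Qwen3's docs;
# the surrounding request shape is an illustrative assumption.

def build_messages(prompt: str, deep_reasoning: bool) -> list[dict]:
    """Return a chat payload that requests (or suppresses) thinking mode."""
    switch = "/think" if deep_reasoning else "/no_think"
    return [{"role": "user", "content": f"{prompt} {switch}"}]

# Complex task: ask for step-by-step reasoning.
hard = build_messages("Prove that sqrt(2) is irrational.", deep_reasoning=True)
# Simple task: fast, direct answer.
easy = build_messages("What is the capital of France?", deep_reasoning=False)
```

In practice the caller decides per task whether the extra latency of deep reasoning is worth it; simple lookups stay on the fast path.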
-
3
Mistral Medium 3
Mistral AI
Revolutionary AI: Unmatched performance, unbeatable affordability, seamless deployment.
Mistral Medium 3 is a breakthrough in AI technology, offering a balance of cutting-edge performance and significantly reduced costs. The model introduces a new era of enterprise AI, simplifying deployments while still delivering exceptional performance. Its ability to achieve high-level results at a fraction of the cost of its competitors makes it a game-changer in industries that rely on complex AI tasks. Mistral Medium 3 is particularly strong in professional use cases like coding, where it competes closely with larger models that are typically more expensive and slower. The model supports hybrid and on-premises deployments, giving enterprise users full control over customization and integration into their systems. Businesses can leverage Mistral Medium 3 for both large-scale deployments and fine-tuned, domain-specific training, improving efficiency in industries such as healthcare, financial services, and energy. The addition of continuous learning and the ability to integrate with enterprise knowledge bases makes it a flexible, future-proof solution. Beta customers are already using Mistral Medium 3 to enrich customer service, personalize business processes, and analyze complex datasets, demonstrating its real-world value. Available through cloud platforms including Amazon SageMaker, IBM watsonx, and Google Cloud Vertex AI, Mistral Medium 3 is ready to be deployed for custom use cases across a range of industries.
-
4
Qwen3-Coder
Qwen
Revolutionizing code generation with advanced AI-driven capabilities.
Qwen3-Coder is a family of coding models available in several sizes, headlined by a 480B-parameter Mixture-of-Experts variant with 35B active parameters that natively handles 256K-token contexts, extensible to 1 million tokens. It delivers performance comparable to Claude Sonnet 4, having been pre-trained on a staggering 7.5 trillion tokens, 70% of which is code, and it employs synthetic data refined with Qwen2.5-Coder to bolster both coding proficiency and overall effectiveness. Post-training adds large-scale, execution-guided reinforcement learning, generating a wide array of test cases across 20,000 parallel environments; this lets the model excel at multi-turn software-engineering benchmarks such as SWE-Bench Verified without requiring test-time scaling. Beyond the model itself, the open-source Qwen Code CLI, adapted from Gemini CLI, equips users to deploy Qwen3-Coder in dynamic workflows through customized prompts and function-calling protocols, with seamless integration via Node.js, OpenAI SDKs, and environment variables. This ecosystem helps developers advance their coding projects efficiently while providing tools that adapt to varied programming needs. Qwen3-Coder stands out as a powerful resource for developers seeking to improve their software development processes.
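The function-calling protocol mentioned above follows the familiar OpenAI tools schema when Qwen3-Coder is reached through an OpenAI-compatible SDK. A minimal sketch of a request body; the model name, environment variable, and `run_tests` tool are illustrative assumptions, not part of the official Qwen Code setup:

```python
# Sketch: a function-calling request payload for Qwen3-Coder via an
# OpenAI-compatible endpoint. Model name, env-variable name, and the
# run_tests tool are hypothetical; consult the Qwen Code docs.
import os

def build_request(prompt: str) -> dict:
    tool = {
        "type": "function",
        "function": {
            "name": "run_tests",  # hypothetical tool the agent may call
            "description": "Run the project's unit tests and return results.",
            "parameters": {
                "type": "object",
                "properties": {"path": {"type": "string"}},
                "required": ["path"],
            },
        },
    }
    return {
        "model": os.environ.get("QWEN_MODEL", "qwen3-coder-plus"),
        "messages": [{"role": "user", "content": prompt}],
        "tools": [tool],
    }

req = build_request("Fix the failing test in tests/test_io.py")
```

The model can then respond with a structured tool call instead of free text, which the CLI or agent loop executes before continuing the conversation.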
-
5
GPT-5 mini
OpenAI
Streamlined AI for fast, precise, and cost-effective tasks.
GPT-5 mini is a faster, more affordable variant of OpenAI’s advanced GPT-5 language model, tailored for well-defined tasks that still benefit from strong reasoning ability. It accepts both text and image inputs and generates high-quality text output, supported by a large 400,000-token context window and a maximum of 128,000 output tokens, enabling complex multi-step reasoning and detailed responses. The model excels at rapid response times, making it ideal for use cases where speed and efficiency are critical, such as chatbots, customer service, or real-time analytics. GPT-5 mini’s pricing significantly reduces costs, with input tokens priced at $0.25 per million and output tokens at $2 per million, a more economical option than the flagship GPT-5. While it supports advanced features like streaming, function calling, structured output generation, and fine-tuning, it does not currently support audio input or image generation. GPT-5 mini integrates with multiple API endpoints, including chat completions, responses, embeddings, and batch processing, providing versatility for a wide array of applications. Rate limits are tier-based, scaling from 500 requests per minute up to 30,000 per minute for higher tiers, accommodating small to large-scale deployments. The model also supports snapshots to lock in performance and behavior, ensuring consistency across applications. GPT-5 mini suits developers and businesses seeking a cost-effective solution with high reasoning power and fast throughput, balancing cutting-edge AI capability with efficiency for applications that demand speed, precision, and scalability.
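The listed prices make per-request cost easy to budget. A minimal sketch using the figures quoted above ($0.25 per million input tokens, $2 per million output tokens):

```python
# Sketch: estimating GPT-5 mini spend from the listed API prices.
# Prices are the ones quoted above; actual billing may differ.

INPUT_PER_M = 0.25   # USD per 1M input tokens
OUTPUT_PER_M = 2.00  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at list prices."""
    return (input_tokens / 1_000_000) * INPUT_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PER_M

# e.g. a 10k-token prompt with a 1k-token answer:
cost = estimate_cost(10_000, 1_000)  # 0.0025 + 0.002 = 0.0045 USD
```

At these rates even chat workloads with long prompts stay in the fractions-of-a-cent range per request.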
-
6
GPT-5 nano
OpenAI
Lightning-fast, budget-friendly AI for text and images!
GPT-5 nano is OpenAI’s fastest and most cost-efficient version of the GPT-5 model, engineered to handle high-speed text and image input processing for tasks such as summarization, classification, and content generation. It features an extensive 400,000-token context window and can output up to 128,000 tokens, allowing for complex, multi-step language understanding despite its focus on speed. With ultra-low pricing—$0.05 per million input tokens and $0.40 per million output tokens—GPT-5 nano makes advanced AI accessible to budget-conscious users and developers working at scale. The model supports a variety of advanced API features, including streaming output, function calling for interactive applications, structured outputs for precise control, and fine-tuning for customization. While it lacks support for audio input and web search, GPT-5 nano supports image input, code interpretation, and file search, broadening its utility. Developers benefit from tiered rate limits that scale from 500 to 30,000 requests per minute and up to 180 million tokens per minute, supporting everything from small projects to enterprise workloads. The model also offers snapshots to lock performance and behavior, ensuring consistent results over time. GPT-5 nano strikes a practical balance between speed, cost, and capability, making it ideal for fast, efficient AI implementations where rapid turnaround and budget are critical. It fits well for applications requiring real-time summarization, classification, chatbots, or lightweight natural language processing tasks. Overall, GPT-5 nano expands the accessibility of OpenAI’s powerful AI technology to a broader user base.
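For the summarization use case above, a streaming request keeps perceived latency low. A minimal sketch of a request body in the common OpenAI-style chat-completions schema; treat the exact model name and field set as assumptions:

```python
# Sketch: a streaming summarization request for GPT-5 nano.
# Field names follow the widely used OpenAI-style chat schema;
# the model name is taken from the description above.

def summarization_request(text: str) -> dict:
    return {
        "model": "gpt-5-nano",
        "messages": [
            {"role": "system",
             "content": "Summarize the user's text in one sentence."},
            {"role": "user", "content": text},
        ],
        "stream": True,     # tokens arrive incrementally as generated
        "max_tokens": 256,  # keep the summary short and cheap
    }

req = summarization_request("Long article text ...")
```

Capping `max_tokens` matters at scale: with output priced eight times higher than input, short outputs dominate the savings.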
-
7
NVIDIA Cosmos
NVIDIA
Empowering developers with cutting-edge tools for AI innovation.
NVIDIA Cosmos is an innovative platform designed specifically for developers, featuring state-of-the-art generative World Foundation Models (WFMs), sophisticated video tokenizers, robust safety measures, and an efficient data processing and curation system that enhances the development of physical AI technologies. This platform equips developers engaged in fields like autonomous vehicles, robotics, and video analytics AI agents with the tools needed to generate highly realistic, physics-informed synthetic video data, drawing from a vast dataset that includes 20 million hours of both real and simulated footage. As a result, it allows for the quick simulation of future scenarios, the training of world models, and the customization of particular behaviors. The architecture of the platform consists of three main types of WFMs: Cosmos Predict, capable of generating up to 30 seconds of continuous video from diverse input modalities; Cosmos Transfer, which adapts simulations to function effectively across varying environments and lighting conditions, enhancing domain augmentation; and Cosmos Reason, a vision-language model that applies structured reasoning to interpret spatial-temporal data for effective planning and decision-making. Through these advanced capabilities, NVIDIA Cosmos not only accelerates the innovation cycle in physical AI applications but also promotes significant advancements across a wide range of industries, ultimately contributing to the evolution of intelligent technologies.
-
8
NVIDIA Isaac GR00T
NVIDIA
Revolutionizing humanoid robotics with advanced, adaptive technology solutions.
NVIDIA has developed Isaac GR00T (Generalist Robot 00 Technology) as a pioneering research initiative for building adaptable humanoid-robot foundation models and the data pipelines that support them. Its offerings include the Isaac GR00T-N models, supplemented by synthetic motion templates, GR00T-Mimic for refining demonstrations, and GR00T-Dreams, which produces new synthetic trajectories to advance humanoid robotics swiftly. A notable recent milestone is the release of the open-source Isaac GR00T N1 foundation model, which uses a dual cognitive architecture: a quick-acting “System 1” model and a language-capable, analytical “System 2” model for reasoning. The upgraded GR00T N1.5 incorporates substantial enhancements, including better vision-language grounding, superior execution of language directives, heightened adaptability through few-shot learning, and compatibility with various robot embodiments. By leveraging Isaac Sim, Isaac Lab, and Omniverse, the GR00T platform empowers developers to train, simulate, post-train, and deploy flexible humanoid agents using both real and synthetic data. This holistic strategy accelerates robotics research and paves the way for groundbreaking humanoid applications, promising to reshape the landscape of the industry.
-
9
Qwen3-Max
Alibaba
Unleash limitless potential with advanced multi-modal reasoning capabilities.
Qwen3-Max is Alibaba's state-of-the-art large language model, with over a trillion parameters aimed at agentic tasks, coding, reasoning, and long-context handling. As a progression of the Qwen3 series, it builds on improved architecture, training techniques, and inference methods; it offers both thinking and non-thinking modes, introduces a distinctive “thinking budget” mechanism, and can switch modes according to task complexity. Able to process extremely long inputs spanning hundreds of thousands of tokens, it also supports tool invocation and posts remarkable results across benchmarks covering coding, multi-step reasoning, and agent evaluations such as Tau2-Bench. Although the initial release focuses on instruction following in non-thinking mode, Alibaba plans to roll out reasoning features that will enable autonomous agent functionality in the near future. With robust multilingual support and pre-training on trillions of tokens, Qwen3-Max is available through OpenAI-compatible API interfaces, guaranteeing broad applicability across a range of applications. This positions Qwen3-Max as a significant competitor among advanced language models and a pivotal tool for developers and researchers alike.
-
10
GLM-4.6
Zhipu AI
Empower your projects with enhanced reasoning and coding capabilities.
GLM-4.6 builds on the groundwork established by its predecessor, offering improved reasoning, coding, and agent functionality: better inferential precision, better tool use during reasoning, and smoother integration into agent architectures. In extensive benchmark assessments of reasoning, coding, and agent performance, GLM-4.6 outperforms GLM-4.5 and holds its own against competitive models such as DeepSeek-V3.2-Exp and Claude Sonnet 4, though it still trails Claude Sonnet 4.5 in coding proficiency. In practical testing on the comprehensive “CC-Bench” suite, which spans front-end development, tool creation, data analysis, and algorithmic challenges, GLM-4.6 beats GLM-4.5 and reaches near-parity with Claude Sonnet 4, winning around 48.6% of direct matchups while using roughly 15% fewer tokens. This newest iteration is available via the Z.ai API, allowing developers to use it either as an LLM backend or as the core of an agent within the platform's API ecosystem. The enhancements in GLM-4.6 promise to elevate productivity across diverse application areas, making it a compelling choice for developers eager to adopt the latest advancements in AI technology.
-
11
DeepSeek-V3.2-Exp
DeepSeek
Experience lightning-fast efficiency with cutting-edge AI technology!
We are excited to present DeepSeek-V3.2-Exp, our latest experimental model, which evolves from V3.1-Terminus and incorporates DeepSeek Sparse Attention (DSA), a technology designed to significantly improve both training and inference speed on longer contexts. DSA enables fine-grained sparse attention while preserving output quality, yielding better performance on long-context tasks alongside reduced computational cost. Benchmark evaluations show that V3.2-Exp performs on par with V3.1-Terminus while benefiting from these efficiency gains. The model is fully functional across our app, web, and API platforms. In addition, to promote wider accessibility, we have reduced DeepSeek API pricing by more than 50%, effective immediately. During this transition phase, users can still access V3.1-Terminus through a temporary API endpoint until October 15, 2025. We invite feedback on DSA via our dedicated feedback portal and encourage community engagement. To further support this initiative, DeepSeek-V3.2-Exp is available as open source, with model weights and key technologies—including essential GPU kernels in TileLang and CUDA—published on Hugging Face, and we are eager to see how the community leverages this advancement. As we unveil this new chapter, we anticipate fruitful interactions and innovative applications arising from the collective contributions of our user base.
-
12
Hunyuan-Vision-1.5
Tencent
Revolutionizing vision-language tasks with deep multimodal reasoning.
HunyuanVision, a cutting-edge vision-language model developed by Tencent's Hunyuan team, uses a mamba-transformer hybrid architecture that significantly enhances performance while keeping inference efficient across multimodal reasoning tasks. The most recent version, Hunyuan-Vision-1.5, emphasizes "thinking on images": it understands the interactions between visual and textual elements and performs operations such as cropping, zooming, pointing, box drawing, and annotating images to improve comprehension. The model covers a wide range of vision tasks, including image and video recognition, optical character recognition (OCR), and diagram analysis, while also supporting visual reasoning and 3D spatial understanding, all within a unified multilingual framework. Tencent plans to open-source HunyuanVision, providing checkpoints, a detailed technical report, and inference support to encourage community involvement and experimentation. This initiative seeks to empower researchers and developers to apply the model across diverse applications and to foster collaboration that drives innovation in the field. By making these resources available, HunyuanVision aspires to create a vibrant ecosystem for further advances in multimodal AI.
-
13
DeepSeek-V3.2
DeepSeek
Revolutionize reasoning with advanced, efficient, next-gen AI.
DeepSeek-V3.2 represents one of the most advanced open-source LLMs available, delivering exceptional reasoning accuracy, long-context performance, and agent-oriented design. The model introduces DeepSeek Sparse Attention (DSA), a breakthrough attention mechanism that maintains high-quality output while significantly lowering compute requirements—particularly valuable for long-input workloads. DeepSeek-V3.2 was trained with a large-scale reinforcement learning framework capable of scaling post-training compute to the level required to rival frontier proprietary systems. Its Speciale variant surpasses GPT-5 on reasoning benchmarks and achieves performance comparable to Gemini-3.0-Pro, including gold-medal scores in the IMO and IOI 2025 competitions. The model also features a fully redesigned agentic training pipeline that synthesizes tool-use tasks and multi-step reasoning data at scale. A new chat template architecture introduces explicit thinking blocks, robust tool-interaction formatting, and a specialized developer role designed exclusively for search-powered agents. To support developers, the repository includes encoding utilities that translate OpenAI-style prompts into DeepSeek-formatted input strings and parse model output safely. DeepSeek-V3.2 supports inference using safetensors and fp8/bf16 precision, with recommendations for ideal sampling settings when deployed locally. The model is released under the MIT license, ensuring maximal openness for commercial and research applications. Together, these innovations make DeepSeek-V3.2 a powerful choice for building next-generation reasoning applications, agentic systems, research assistants, and AI infrastructures.
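The repository's encoding utilities translate OpenAI-style message lists into DeepSeek's prompt format. A minimal sketch of what such a utility does; the token markers below are hypothetical placeholders, not DeepSeek's real chat template, so use the shipped utilities for the actual format:

```python
# Sketch: a prompt encoder in the spirit of the repository's utilities.
# The <|role|> markers are HYPOTHETICAL placeholders for illustration;
# DeepSeek's actual chat template differs.

def encode_messages(messages: list[dict]) -> str:
    """Flatten OpenAI-style messages into a single prompt string."""
    parts = []
    for m in messages:
        parts.append(f"<|{m['role']}|>{m['content']}")
    parts.append("<|assistant|>")  # cue the model to respond
    return "".join(parts)

prompt = encode_messages([
    {"role": "system", "content": "You are a careful reasoner."},
    {"role": "user", "content": "Is 1009 prime?"},
])
```

The complementary parsing utilities run in the other direction, splitting the model's raw output into thinking blocks, tool interactions, and the final answer.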
-
14
DeepSeek-V3.2-Speciale
DeepSeek
Pure reasoning power for the hardest analytical tasks.
DeepSeek-V3.2-Speciale represents the pinnacle of DeepSeek’s open-source reasoning models, engineered to deliver elite performance on complex analytical tasks. It introduces DeepSeek Sparse Attention (DSA), a highly efficient long-context attention design that reduces the computational burden while maintaining deep comprehension and logical consistency. The model is trained with an expanded reinforcement learning framework capable of leveraging massive post-training compute, enabling performance not only comparable to GPT-5 but demonstrably surpassing it in internal tests. Its reasoning capabilities have been validated through gold-winning solutions across major global competitions, including IMO 2025 and IOI 2025, with official submissions released for transparency and peer assessment. DeepSeek-V3.2-Speciale is intentionally designed without tool-calling features, focusing every parameter on pure reasoning, multi-step logic, and structured problem solving. It introduces a reworked chat template featuring explicit thought-delimited sections and a structured message format optimized for agentic-style reasoning workflows. The repository includes Python-based utilities for encoding and parsing messages, illustrating how to format prompts correctly for the model. Supporting multiple tensor types (BF16, FP32, FP8_E4M3), it is built for both research experimentation and high-performance local deployment. Users are encouraged to use temperature = 1.0 and top_p = 0.95 for best results when running the model locally. With its open MIT license and transparent development process, DeepSeek-V3.2-Speciale stands as a breakthrough option for anyone requiring industry-leading reasoning capacity in an open LLM.
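The recommended local sampling settings (temperature = 1.0, top_p = 0.95) slot directly into a request body. A minimal sketch; the model name and surrounding schema are illustrative assumptions, while the two sampling values come from the description above:

```python
# Sketch: applying the recommended local sampling settings for
# DeepSeek-V3.2-Speciale. Only temperature and top_p come from the
# model's guidance; the request shape is an assumption.

RECOMMENDED = {"temperature": 1.0, "top_p": 0.95}

def reasoning_request(problem: str) -> dict:
    return {
        "model": "deepseek-v3.2-speciale",  # illustrative name
        "messages": [{"role": "user", "content": problem}],
        **RECOMMENDED,
    }

req = reasoning_request("Find all integer solutions of x^2 - 2y^2 = 1.")
```

Pinning these values matters for a pure-reasoning model: lower temperatures can truncate the exploratory chains of thought it was trained to produce.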
-
15
Lux
OpenAGI Foundation
Revolutionizing AI: Empowering agents to operate like humans.
Lux marks a major leap in AI capability by giving models the ability to operate real software environments—moving a cursor, pressing buttons, filling forms, navigating dashboards, and performing full computer workflows autonomously. It combines three powerful execution modes: Tasker for strict step-by-step reliability, Actor for rapid-response actions, and Thinker for extended reasoning across complex tasks that may take minutes or hours. These modes allow Lux to support a diverse set of use cases such as Amazon marketplace data extraction, automated QA test execution in developer environments, and instant retrieval of insider trading information from Nasdaq. Developers can begin building production-grade agents in under 20 minutes using Lux’s SDKs, frameworks, and ready-made UX templates. Unlike traditional AI models that only generate outputs, Lux operates inside real interfaces, enabling automation for businesses that rely on human-facing applications. The system understands both simple instructions and vague requests, planning its actions and executing long chains of behavior with high stability. This capability unlocks new possibilities for software automation, from enterprise workflows to gaming, analytics, and back-office operations. Lux represents a broader paradigm shift in AI—from information generation to direct action—making machines capable of using computers as humans do. By democratizing a skill previously limited to the world’s largest AI labs, Lux empowers developers everywhere to build advanced computer-use agents. With Lux, AI becomes not just a tool for insights, but a workforce capable of performing digital tasks at scale.
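The three execution modes suggest a simple routing decision per task. A minimal sketch of one plausible heuristic; the routing policy below is an illustrative assumption, not Lux's actual dispatcher:

```python
# Sketch: routing a task to one of Lux's execution modes
# (Tasker, Actor, Thinker). The heuristic is hypothetical.

def pick_mode(needs_strict_steps: bool, expected_seconds: float) -> str:
    if needs_strict_steps:
        return "tasker"   # strict step-by-step reliability
    if expected_seconds < 5:
        return "actor"    # rapid-response actions
    return "thinker"      # extended reasoning over long tasks

assert pick_mode(True, 60) == "tasker"      # QA test execution
assert pick_mode(False, 1) == "actor"       # instant data retrieval
assert pick_mode(False, 3600) == "thinker"  # hours-long workflow
```

The split mirrors the use cases above: repeatable QA runs want Tasker's determinism, quick lookups want Actor's latency, and open-ended marketplace extraction wants Thinker's patience.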
-
16
Devstral 2
Mistral AI
Revolutionizing software engineering with intelligent, context-aware code solutions.
Devstral 2 is an innovative, open-source AI model tailored for software engineering, transcending simple code suggestions to fully understand and manipulate entire codebases; this advanced functionality enables it to execute tasks such as multi-file edits, bug fixes, refactoring, managing dependencies, and generating code that is aware of its context. The suite includes a powerful 123-billion-parameter model alongside a streamlined 24-billion-parameter variant called “Devstral Small 2,” offering flexibility for teams; the larger model excels in handling intricate coding tasks that necessitate a deep contextual understanding, whereas the smaller model is optimized for use on less robust hardware. With a remarkable context window capable of processing up to 256K tokens, Devstral 2 is adept at analyzing extensive repositories, tracking project histories, and maintaining a comprehensive understanding of large files, which is especially advantageous for addressing the challenges of real-world software projects. Additionally, the command-line interface (CLI) further enhances the model’s functionality by monitoring project metadata, Git statuses, and directory structures, thereby enriching the AI’s context and making “vibe-coding” even more impactful. This powerful blend of features solidifies Devstral 2's role as a revolutionary tool within the software development ecosystem, offering unprecedented support for engineers. As the landscape of software engineering continues to evolve, tools like Devstral 2 promise to redefine the way developers approach coding tasks.
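Even a 256K-token window must be budgeted when a repository is large. A minimal sketch of the kind of context packing a CLI might do before a request; the greedy smallest-first policy and the 4-characters-per-token ratio are rough illustrative assumptions:

```python
# Sketch: packing repository files into a 256K-token context budget.
# The chars-per-token ratio and selection policy are crude assumptions
# for illustration, not Devstral's actual CLI behavior.

CONTEXT_TOKENS = 256_000
CHARS_PER_TOKEN = 4  # rough heuristic

def pack_files(files: dict[str, str],
               budget_tokens: int = CONTEXT_TOKENS) -> list[str]:
    """Greedily select files (smallest first) that fit the budget."""
    chosen, used = [], 0
    for name, text in sorted(files.items(), key=lambda kv: len(kv[1])):
        tokens = len(text) // CHARS_PER_TOKEN + 1
        if used + tokens <= budget_tokens:
            chosen.append(name)
            used += tokens
    return chosen

repo = {"main.py": "x" * 400, "util.py": "y" * 100,
        "big.bin": "z" * 2_000_000}
picked = pack_files(repo, budget_tokens=1_000)  # big.bin is excluded
```

Real tooling would rank by relevance rather than size, but the budget arithmetic is the same.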
-
17
Devstral Small 2
Mistral AI
Empower coding efficiency with a compact, powerful AI.
Devstral Small 2 is a condensed, 24-billion-parameter variant of Mistral AI's coding-focused models, made available under the permissive Apache 2.0 license to support both local use and API access. Alongside its larger sibling, Devstral 2, it offers "agentic coding" capabilities tailored for low-compute environments, featuring a substantial 256K-token context window that enables it to understand and alter entire codebases with ease. Scoring close to 68.0% on the widely used SWE-Bench Verified code-generation benchmark, Devstral Small 2 holds its own against open-weight models many times its size. Its compact, efficient design allows it to run on a single GPU or even in CPU-only setups, making it an excellent option for developers, small teams, or hobbyists without access to data-center hardware. Despite being smaller, Devstral Small 2 retains critical functionality found in its larger counterpart, such as reasoning across multiple files and managing dependencies, ensuring users still get substantial coding support. This combination of efficiency and high performance positions it as an indispensable asset for the coding community, and its approachable design means both novice and experienced programmers can leverage its capabilities without significant barriers.
-
18
DeepCoder
Agentica Project
Unleash coding potential with advanced open-source reasoning model.
DeepCoder, a fully open-source initiative for code reasoning and generation, was created through a collaboration between the Agentica Project and Together AI. Built on DeepSeek-R1-Distilled-Qwen-14B, it was fine-tuned using distributed reinforcement learning, achieving 60.6% accuracy on LiveCodeBench, an 8% improvement over its base model. This performance positions it competitively alongside proprietary models such as o3-mini (2025-01-31, low) and o1, all while operating with a streamlined 14 billion parameters. Training was intensive, lasting 2.5 weeks on a fleet of 32 H100 GPUs and using a meticulously curated dataset of around 24,000 coding challenges drawn from reliable sources such as TACO-Verified, PrimeIntellect SYNTHETIC-1, and LiveCodeBench submissions. Each coding challenge was required to include a valid solution paired with at least five unit tests to ensure robustness during the reinforcement learning phase. Additionally, DeepCoder employs innovative methods like iterative context lengthening and overlong filtering to handle long-range contextual dependencies, allowing it to tackle complex coding tasks with proficiency. This approach enhances DeepCoder's accuracy and reliability in code generation and positions it as a significant player among code generation models, one developers can rely on for diverse programming challenges.
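The dataset rule above (a valid solution plus at least five unit tests per problem) amounts to a simple quality gate. A minimal sketch; the field names are illustrative assumptions about the dataset schema:

```python
# Sketch: the quality gate described above — keep only problems with
# a valid solution and at least five unit tests. Field names are
# hypothetical.

MIN_TESTS = 5

def passes_gate(problem: dict) -> bool:
    return bool(problem.get("solution")) \
        and len(problem.get("tests", [])) >= MIN_TESTS

dataset = [
    {"id": 1, "solution": "def f(): ...", "tests": ["t"] * 6},
    {"id": 2, "solution": "",             "tests": ["t"] * 9},  # no solution
    {"id": 3, "solution": "def g(): ...", "tests": ["t"] * 3},  # too few tests
]
kept = [p["id"] for p in dataset if passes_gate(p)]  # → [1]
```

A gate like this matters for RL training: without enough tests per problem, a policy can earn reward for solutions that merely overfit the visible checks.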
-
19
DeepSWE
Agentica Project
Revolutionizing coding with intelligent, adaptive, open-source solutions.
DeepSWE represents a groundbreaking advancement in open-source coding agents, harnessing the Qwen3-32B foundation model trained exclusively through reinforcement learning (RL) without the aid of supervised fine-tuning or proprietary model distillation. Developed using rLLM, which is Agentica's open-source RL framework tailored for language-driven agents, DeepSWE functions effectively within a simulated development environment provided by the R2E-Gym framework. This setup equips it with a range of tools, such as a file editor, search functions, shell execution, and submission capabilities, allowing the agent to adeptly navigate extensive codebases, modify multiple files, compile code, execute tests, and iteratively generate patches or fulfill intricate engineering tasks. In addition to mere code generation, DeepSWE exhibits sophisticated emergent behaviors; when confronted with bugs or feature requests, it engages in critical reasoning regarding edge cases, searches for existing tests in the codebase, proposes patches, creates additional tests to avert regressions, and adapts its cognitive strategies based on the specific challenges presented. This remarkable adaptability and efficiency position DeepSWE as a formidable asset in the software development landscape, empowering developers to tackle complex projects with greater ease and confidence. Its ability to learn from each interaction further enhances its performance, ensuring continuous improvement over time.
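The tool set described above (file editor, search, shell execution, submission) maps naturally onto a dispatch table in the agent loop. A minimal sketch with stub handlers; these are illustrative placeholders, not the R2E-Gym implementations:

```python
# Sketch: a minimal tool registry of the kind DeepSWE is trained
# against. Handlers are stubs for illustration only.

def edit_file(path: str, content: str) -> str:
    return f"edited {path}"

def search(query: str) -> str:
    return f"results for {query!r}"

TOOLS = {
    "edit_file": edit_file,
    "search": search,
    "shell": lambda cmd: f"ran {cmd}",
    "submit": lambda: "patch submitted",
}

def dispatch(name: str, *args) -> str:
    """Route an agent's tool call to the matching handler."""
    return TOOLS[name](*args)

out = dispatch("search", "existing regression tests")
```

In a real loop, each handler's output is fed back to the model as an observation, which is what lets the agent search for tests, patch files, and rerun the suite iteratively.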
-
20
DeepScaleR
Agentica Project
Unlock mathematical mastery with cutting-edge AI reasoning power!
DeepScaleR is an advanced language model featuring 1.5 billion parameters, developed from DeepSeek-R1-Distilled-Qwen-1.5B through a blend of distributed reinforcement learning and a novel technique that gradually increases its context window from 8,000 to 24,000 tokens throughout training. The model was trained on around 40,000 carefully curated mathematical problems taken from competition datasets such as AIME (1984–2023), AMC (pre-2023), Omni-MATH, and STILL. With an accuracy of 43.1% on the AIME 2024 exam, DeepScaleR improves on its base model by approximately 14.3 percentage points, surpassing even the significantly larger proprietary o1-preview model. Its strong performance on further mathematical benchmarks, including MATH-500, AMC 2023, Minerva Math, and OlympiadBench, shows that smaller, finely tuned models enhanced by reinforcement learning can match or exceed larger counterparts on complex reasoning challenges. This breakthrough highlights the potential of streamlined modeling techniques for advancing mathematical problem-solving and opens the door to more efficient models that tackle increasingly challenging problems.
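The gradual context-lengthening idea can be expressed as a staged schedule over training progress. A minimal sketch; the intermediate 16K stage matches how the 8K-to-24K schedule is usually described, but the exact stage boundaries below are assumptions:

```python
# Sketch: iterative context lengthening from 8K to 24K tokens.
# Stage boundaries (fractions of training progress) are illustrative.

STAGES = [(0.0, 8_000), (0.4, 16_000), (0.8, 24_000)]

def context_window(progress: float) -> int:
    """Return the training context length for progress in [0, 1]."""
    window = STAGES[0][1]
    for start, length in STAGES:
        if progress >= start:
            window = length
    return window

assert context_window(0.1) == 8_000   # early training: short contexts
assert context_window(0.5) == 16_000  # mid training
assert context_window(0.9) == 24_000  # final stage: full window
```

Starting short keeps early RL rollouts cheap; the window is widened only once the policy can use the extra tokens productively.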
-
21
GLM-4.6V
Zhipu AI
Empowering seamless vision-language interactions with advanced reasoning capabilities.
GLM-4.6V is a sophisticated, open-source multimodal vision-language model in the Z.ai (GLM-V) series, designed for tasks that involve reasoning, perception, and actionable outcomes. It comes in two configurations: a full-featured 106-billion-parameter version for cloud or high-performance setups, and a more efficient 9-billion-parameter “Flash” version optimized for local use or low-latency scenarios. With a native 128K-token context window, GLM-4.6V excels at managing large documents and varied multimodal inputs. A key highlight is its integrated Function Calling support, which lets the model directly accept visual media, including images, screenshots, and documents, without manual text conversion. This not only streamlines reasoning over visual content but also lets the model make tool calls, effectively bridging visual perception with practical action. This adaptability enables applications such as generating combined image-and-text content, enriching document understanding with summarization, and crafting responses that incorporate image annotations, significantly improving user engagement and output quality. Its architecture also invites exploration of novel uses across diverse fields, making it a valuable asset in the realm of AI.
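Passing visual media directly in a chat turn typically uses the widely adopted content-parts message schema. A minimal sketch; whether GLM-4.6V's API uses exactly these field names is an assumption, so check the Z.ai API docs:

```python
# Sketch: a multimodal chat message mixing an image and a question,
# in the common OpenAI-style content-parts schema. Field names are
# an assumption for GLM-4.6V specifically.

def vision_message(image_url: str, question: str) -> dict:
    return {
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": image_url}},
            {"type": "text", "text": question},
        ],
    }

msg = vision_message("https://example.com/chart.png",
                     "Summarize this chart.")
```

Because the image travels inside the message itself, the model can reason over it and emit a tool call in the same turn, which is the perception-to-action bridge described above.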
-
22
GLM-4.1V
Zhipu AI
Unleashing powerful multimodal reasoning for diverse applications.
GLM-4.1V is a cutting-edge vision-language model that delivers efficient multimodal reasoning over images, text, and documents. Its 9-billion-parameter variant, GLM-4.1V-9B-Thinking, builds on the GLM-4-9B foundation and is refined with a training method called Reinforcement Learning with Curriculum Sampling (RLCS). With a 64k-token context window and support for images up to 4K resolution at any aspect ratio, the model handles complex tasks such as optical character recognition, image captioning, chart and document parsing, video analysis, scene understanding, and GUI-agent workflows, including interpreting screenshots and identifying UI components. In benchmark evaluations at the 10B-parameter scale, GLM-4.1V-9B-Thinking achieved top performance on 23 of the 28 tasks assessed. These results mark a significant step forward in fusing visual and textual information, set a new reference point for multimodal models of this size, and point toward further innovation in the field.
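The core idea behind curriculum sampling, the "CS" in RLCS, can be illustrated with a small sketch: prefer training tasks whose current success rate sits near the frontier of the model's ability, rather than tasks it always or never solves. This is a generic reconstruction of the concept, not Zhipu AI's actual implementation.

```python
import random

# Illustrative sketch of curriculum sampling (the idea behind RLCS):
# weight each task by how close its measured success rate is to a target
# difficulty, then sample proportionally. Generic reconstruction only.

def curriculum_weights(success_rates: dict, target: float = 0.5) -> dict:
    """Tasks the model always solves (~1.0) or never solves (~0.0) get low
    weight; tasks at the frontier (near `target`) get high weight."""
    return {task: 1.0 - abs(rate - target) for task, rate in success_rates.items()}

def sample_task(success_rates: dict, rng: random.Random) -> str:
    weights = curriculum_weights(success_rates)
    tasks = list(weights)
    return rng.choices(tasks, weights=[weights[t] for t in tasks], k=1)[0]
```

As the model improves and success rates shift, the weights shift with them, so the curriculum adapts automatically over training.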
-
23
GLM-4.5V-Flash
Zhipu AI
Efficient, versatile vision-language model for real-world tasks.
GLM-4.5V-Flash is an open-source vision-language model that packs strong multimodal capability into a compact, deployable format. It accepts a variety of input types, including images, videos, documents, and graphical user interfaces, enabling tasks such as scene comprehension, chart and document analysis, screen reading, and image evaluation. Despite its smaller size, it retains the core features of larger visual language models: visual reasoning, video analysis, GUI task handling, and intricate document parsing. Within "GUI agent" frameworks, the model can analyze screenshots or desktop captures, recognize icons and UI elements, and drive automated desktop and web activity. While it does not match the largest models on raw performance, GLM-4.5V-Flash offers strong adaptability for real-world multimodal tasks where efficiency, lower resource demands, and broad modality support are vital, making it an appealing choice for developers who want multimodal capability without the overhead of larger systems.
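The "GUI agent" loop described above follows a simple pattern: the model inspects a screenshot, returns a structured action, and a dispatcher executes it. The sketch below stubs out the model call and assumes a small action vocabulary; a real integration would replace `stub_model` with an actual GLM-4.5V-Flash request.

```python
from dataclasses import dataclass

# Minimal sketch of a GUI-agent step of the kind GLM-4.5V-Flash can drive.
# The model call is stubbed; the action names ("click", "type", "done")
# are an assumed vocabulary, not the model's documented output schema.

@dataclass
class Action:
    kind: str          # e.g. "click", "type", "done" (assumed action set)
    target: str = ""   # UI element the model identified in the screenshot
    text: str = ""     # text to enter, for "type" actions

def stub_model(screenshot: bytes) -> Action:
    """Stand-in for a real GLM-4.5V-Flash call that parses the screenshot."""
    return Action(kind="click", target="Submit button")

def run_agent_step(screenshot: bytes, executors: dict) -> str:
    action = stub_model(screenshot)
    handler = executors.get(action.kind)
    if handler is None:
        return f"unsupported action: {action.kind}"
    return handler(action)

# Wiring a click executor that records what was clicked:
log = []
result = run_agent_step(
    b"\x89PNG...",  # placeholder screenshot bytes
    {"click": lambda a: log.append(a.target) or f"clicked {a.target}"},
)
```

Keeping the executor table separate from the model call is what lets the same loop serve both desktop and web automation, as the entry suggests.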
-
24
GLM-4.5V
Zhipu AI
Revolutionizing multimodal intelligence with unparalleled performance and versatility.
GLM-4.5V marks a significant advancement over its predecessor, GLM-4.5-Air, with a Mixture-of-Experts (MoE) architecture totaling 106 billion parameters, of which 12 billion are active per forward pass. It leads open-source vision-language models (VLMs) of similar scale across 42 public benchmarks spanning images, videos, documents, and GUI interactions. Its multimodal capabilities are comprehensive: image reasoning tasks such as scene understanding, spatial recognition, and multi-image analysis; video comprehension challenges such as segmentation and event recognition; parsing of intricate charts and lengthy documents; GUI-agent workflows through screen reading and desktop automation; and precise visual grounding that identifies objects and produces bounding boxes. A "Thinking Mode" switch further enhances the experience, letting users choose between quick responses and more deliberate reasoning depending on the situation. This flexibility underscores GLM-4.5V's adaptability to diverse requirements, and its ease of integration into existing applications positions it for broad adoption in both research and production settings.
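The "Thinking Mode" switch is typically exposed as a per-request flag. The sketch below shows the shape such a request might take in Zhipu's OpenAI-compatible chat format; the `thinking` field structure and the model identifier are assumptions drawn from the GLM series' published convention, so verify them against the current GLM-4.5V API reference.

```python
# Hedged sketch of toggling "Thinking Mode" per request. The request shape
# ({"thinking": {"type": "enabled"|"disabled"}}) and the model name are
# assumptions; check the official GLM-4.5V API docs before relying on them.

def build_request(prompt: str, deliberate: bool) -> dict:
    return {
        "model": "glm-4.5v",  # assumed identifier
        "messages": [{"role": "user", "content": prompt}],
        # Pay for slow, deliberate reasoning only when the task warrants it.
        "thinking": {"type": "enabled" if deliberate else "disabled"},
    }

quick = build_request("Caption this image.", deliberate=False)
careful = build_request("Compare the two charts in detail.", deliberate=True)
```

The practical payoff is latency control: captioning and other perception tasks can skip the reasoning phase, while multi-step analysis opts into it.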
-
25
GLM-4.7
Zhipu AI
Elevate your coding and reasoning with unmatched performance!
GLM-4.7 is an advanced AI model engineered to push the boundaries of coding, reasoning, and agent-based workflows, delivering clear performance gains across software engineering benchmarks, terminal automation, and multilingual coding tasks. It improves stability through interleaved, preserved, and turn-level thinking, enabling better long-horizon task execution, and is optimized for modern coding agents in real-world development environments. GLM-4.7 also strengthens creative and frontend output, generating cleaner user interfaces and more visually accurate slides. Its tool-using abilities have been significantly strengthened, allowing more reliable interaction with browsers, APIs, and automation systems, while advanced reasoning improvements lift performance on mathematical and logic-heavy tasks. The model supports flexible deployment, from cloud APIs to local inference, and is compatible with popular inference frameworks such as vLLM and SGLang, so developers can integrate it into existing workflows with minimal configuration changes. Priced at a fraction of comparable coding models, GLM-4.7 is designed to feel like a dependable coding partner rather than just a benchmark-optimized model.
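Because vLLM and SGLang both expose OpenAI-compatible endpoints, talking to a locally served GLM-4.7 usually amounts to pointing a standard chat request at a local base URL. The sketch below only constructs the request rather than sending it; the served model name, port, and temperature choice are assumptions for illustration.

```python
# Sketch of a request to a locally served GLM-4.7 behind an OpenAI-compatible
# endpoint (e.g. started with vLLM or SGLang). The request is constructed but
# not sent; "glm-4.7" and port 8000 are assumed defaults, not confirmed.

def chat_request(messages: list, base_url: str = "http://localhost:8000/v1") -> dict:
    return {
        "url": f"{base_url}/chat/completions",
        "json": {
            "model": "glm-4.7",   # assumed served model name
            "messages": messages,
            "temperature": 0.2,   # low temperature suits coding tasks
        },
    }

req = chat_request([{"role": "user", "content": "Refactor this function."}])
```

Since the payload matches the OpenAI chat schema, switching an existing coding agent from a cloud API to a local GLM-4.7 deployment is largely a matter of changing `base_url` and the model name, which is the minimal-configuration integration the entry claims.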