-
1
Qwen
Alibaba
Unlock creativity and productivity with versatile AI assistance!
Qwen is an advanced AI assistant and development platform powered by Alibaba Cloud’s cutting-edge Qwen model family, offering powerful multimodal reasoning and creativity tools for users at all skill levels. It provides a free and accessible interface through Qwen Chat, where anyone can generate images, analyze content, perform deep multi-step research, and build fully coded web pages simply by describing what they want. Using its VLo model, Qwen transforms ideas into detailed visuals and supports editing, style transfer, and complex multi-element image creation. Deep Research acts like an automated research partner, gathering information online, synthesizing insights, and generating structured reports in minutes. The Web Dev feature empowers users to create modern, ready-to-deploy websites with clean code using only natural language instructions. Qwen’s enhanced “Thinking” capabilities provide stronger logic, structured problem-solving, and real-time internet-aware analysis. Its Search tool retrieves precise results with contextual understanding, while multimodal intelligence enables Qwen to process images, audio, video, and text together for deeper comprehension. For developers, the Qwen API offers OpenAI-compatible endpoints, allowing seamless integration of Qwen’s reasoning, generation, and multimodal abilities into any application or product. This makes Qwen not only an AI assistant but also a versatile platform for builders and engineers. Across web, desktop, and mobile environments, Qwen delivers a unified, high-performance AI experience.
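Because the API mirrors the OpenAI chat-completions interface, an existing client can usually be repointed at Qwen with little more than a base-URL change. The sketch below builds such a request using only the standard library; the DashScope compatible-mode URL and the qwen-plus model name are assumptions to verify against Alibaba Cloud's current API documentation, and the network call is defined but not executed here:

```python
# Sketch: calling Qwen through its OpenAI-compatible chat endpoint.
# BASE_URL and the model name are assumptions; check Alibaba Cloud's
# API docs for the values that apply to your account and region.
import json
import urllib.request

BASE_URL = "https://dashscope.aliyuncs.com/compatible-mode/v1"  # assumed endpoint

def build_chat_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
    }

def send_chat_request(api_key: str, payload: dict) -> dict:
    """POST the payload to the /chat/completions route (not executed here)."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_chat_request("qwen-plus", "Write a haiku about autumn.")
print(payload["messages"][1]["content"])
```

Because the payload shape matches OpenAI's, the same request structure works unchanged with any OpenAI-compatible client library.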
-
2
Qwen-7B
Alibaba
Powerful AI model for unmatched adaptability and efficiency.
Qwen-7B is the 7-billion-parameter model in Alibaba Cloud's Qwen language model lineup, also referred to as Tongyi Qianwen. This advanced language model employs a Transformer architecture and has undergone pretraining on a vast array of data, including web content, literature, programming code, and more. In addition, we have launched Qwen-7B-Chat, an AI assistant that enhances the pretrained Qwen-7B model by integrating sophisticated alignment techniques. The Qwen-7B series includes several remarkable attributes:
Its training was conducted on a premium dataset encompassing over 2.2 trillion tokens collected from a custom assembly of high-quality texts and code across diverse fields, covering both general and specialized areas of knowledge. Moreover, the model excels in performance, outshining similarly sized competitors on various benchmark datasets that evaluate skills in natural language comprehension, mathematical reasoning, and programming challenges. This establishes Qwen-7B as a prominent contender in the AI language model landscape. In summary, its intricate training regimen and solid architecture contribute significantly to its outstanding adaptability and efficiency in a wide range of applications.
-
3
Codestral Mamba
Mistral AI
Unleash coding potential with innovative, efficient language generation!
In tribute to Cleopatra, whose dramatic story ended with the fateful encounter with a snake, we proudly present Codestral Mamba, a Mamba2 language model tailored for code generation and made available under an Apache 2.0 license. Codestral Mamba marks a pivotal step forward in our commitment to pioneering and refining innovative architectures. This model is available for free use, modification, and distribution, and we hope it will pave the way for new discoveries in architectural research. The Mamba models stand out due to their linear time inference capabilities, coupled with a theoretical ability to manage sequences of infinite length. This unique characteristic allows users to engage with the model seamlessly, delivering quick responses irrespective of the input size. Such remarkable efficiency is especially beneficial for boosting coding productivity; hence, we have integrated advanced coding and reasoning abilities into this model, ensuring it can compete with top-tier transformer-based models. As we push the boundaries of innovation, we are confident that Codestral Mamba will not only advance coding practices but also inspire new generations of developers. This exciting release underscores our dedication to fostering creativity and productivity within the tech community.
-
4
Mathstral
Mistral AI
Revolutionizing mathematical reasoning for innovative scientific breakthroughs!
This year marks the 2311th anniversary of Archimedes' birth, and in his honor, we are thrilled to unveil our first Mathstral model, a dedicated 7B architecture crafted specifically for mathematical reasoning and scientific inquiry. With a context window of 32k tokens, this model is made available under the Apache 2.0 license. Our goal in sharing Mathstral with the scientific community is to facilitate the tackling of complex mathematical problems that require sophisticated, multi-step logical reasoning. The introduction of Mathstral aligns with our broader initiative to bolster academic efforts, developed alongside Project Numina. Standing, like Newton, on the shoulders of giants, Mathstral builds upon the groundwork established by Mistral 7B, with a keen focus on STEM fields. It showcases exceptional reasoning abilities within its domain, achieving impressive results across numerous industry-standard benchmarks: it scores 56.6% on the MATH benchmark and 63.47% on the MMLU benchmark, highlighting the performance gains over its predecessor, Mistral 7B, and underscoring the strides made in mathematical modeling. In addition to advancing individual research, this initiative seeks to inspire greater innovation and foster collaboration within the mathematical community as a whole.
-
5
Qwen2.5
Alibaba
Revolutionizing AI with precision, creativity, and personalized solutions.
Qwen2.5 is an advanced multimodal AI system designed to provide highly accurate and context-aware responses across a wide range of applications. This iteration builds on previous models by integrating sophisticated natural language understanding with enhanced reasoning capabilities, creativity, and the ability to handle various forms of media. With its adeptness in analyzing and generating text, interpreting visual information, and managing complex datasets, Qwen2.5 delivers timely and precise solutions. Its architecture emphasizes flexibility, making it particularly effective in personalized assistance, thorough data analysis, creative content generation, and academic research, thus becoming an essential tool for both experts and everyday users. Additionally, the model is developed with a commitment to user engagement, prioritizing transparency, efficiency, and ethical AI practices, ultimately fostering a rewarding experience for those who utilize it. As technology continues to evolve, the ongoing refinement of Qwen2.5 ensures that it remains at the forefront of AI innovation.
-
6
LTXV
Lightricks
Empower your creativity with cutting-edge AI video tools.
LTXV offers an extensive selection of AI-driven creative tools designed to support content creators across various platforms. Among its features are sophisticated AI-powered video generation capabilities that allow users to intricately craft video sequences while retaining full control over the entire production workflow. By leveraging Lightricks' proprietary AI algorithms, LTXV delivers a superior, efficient, and user-friendly editing experience. The cutting-edge LTX Video model utilizes an innovative technique called multiscale rendering, which begins with quick, low-resolution passes that capture crucial motion and lighting, and then enhances those aspects with high-resolution precision. Unlike traditional upscalers, LTXV-13B assesses motion over time, performing complex calculations in advance to achieve rendering speeds up to 30 times faster while still upholding remarkable quality. This unique blend of rapidity and excellence positions LTXV as an invaluable resource for creators looking to enhance their content production. Additionally, the suite's versatile features cater to both novice and experienced users, making it accessible to a wide audience.
-
7
Kimi K2 Thinking
Moonshot AI
Unleash powerful reasoning for complex, autonomous workflows.
Kimi K2 Thinking is an advanced open-source reasoning model developed by Moonshot AI, specifically designed for complex, multi-step workflows where it adeptly merges chain-of-thought reasoning with the use of tools across various sequential tasks. It utilizes a state-of-the-art mixture-of-experts architecture, encompassing an impressive total of 1 trillion parameters, though only approximately 32 billion parameters are engaged during each inference, which boosts efficiency while retaining substantial capability. The model supports a context window of up to 256,000 tokens, enabling it to handle extraordinarily lengthy inputs and reasoning sequences without losing coherence. Furthermore, it incorporates native INT4 quantization, which dramatically reduces inference latency and memory usage while maintaining high performance. Tailored for agentic workflows, Kimi K2 Thinking can autonomously trigger external tools, managing sequential logic steps that typically involve around 200-300 tool calls in a single chain while ensuring consistent reasoning throughout the entire process. Its strong architecture positions it as an optimal solution for intricate reasoning challenges that demand both depth and efficiency, making it a valuable asset in various applications. Overall, Kimi K2 Thinking stands out for its ability to integrate complex reasoning and tool use seamlessly.
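The efficiency claim above rests on top-k expert routing: a router scores every expert, but only a handful actually run per token, so compute scales with the active parameters rather than the full trillion. A deliberately tiny illustration of that idea (not Kimi K2's actual routing code):

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    total = sum(es)
    return [e / total for e in es]

def moe_forward(x, experts, router_weights, top_k=2):
    """Score every expert with the router, but run only the top_k.

    Compute per token scales with top_k, not len(experts) -- the same
    reason a 1T-parameter MoE can activate only ~32B weights per step.
    """
    scores = [sum(wi * xi for wi, xi in zip(w, x)) for w in router_weights]
    chosen = sorted(range(len(experts)), key=lambda i: scores[i], reverse=True)[:top_k]
    gates = softmax([scores[i] for i in chosen])
    output = sum(g * experts[i](x) for g, i in zip(gates, chosen))
    return output, sorted(chosen)

# Four tiny "experts", each just a function of the input vector.
experts = [sum, max, lambda x: -sum(x), lambda x: 0.0]
router_weights = [[1, 0], [0, 1], [-1, 0], [0, -1]]
output, active = moe_forward([2.0, 1.0], experts, router_weights, top_k=2)
print(active, round(output, 3))
```

Here only experts 0 and 1 execute; the other two contribute no compute at all, which is the essence of sparse activation.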
-
8
CodeQwen
Alibaba
Empower your coding with seamless, intelligent generation capabilities.
CodeQwen acts as the programming counterpart of Qwen, a collection of large language models developed by the Qwen team at Alibaba Cloud. The model, built on a decoder-only Transformer architecture, has been rigorously pre-trained on an extensive dataset of code. It is known for its strong capabilities in code generation and has achieved remarkable results on various benchmarking assessments. CodeQwen can understand and generate long contexts of up to 64,000 tokens and supports 92 programming languages, excelling in tasks such as text-to-SQL queries and debugging. Interacting with CodeQwen is straightforward: a dialogue can be started with a few lines of code using the transformers library. The typical workflow creates the tokenizer and model with their standard loading methods, formats the conversation with the chat template specified by the tokenizer, and calls the generate function to produce a reply. In line with the Qwen team's established conventions, the chat models adopt the ChatML template. The model completes code snippets according to the prompts it receives, providing responses that require no additional formatting changes, thereby significantly enhancing the user experience. The smooth integration of these components highlights the adaptability and effectiveness of CodeQwen in addressing a wide range of programming challenges, making it an invaluable tool for developers.
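The ChatML layout mentioned above can be made concrete with a small sketch. In practice the tokenizer's apply_chat_template method produces this string; the hand-rolled version below is an approximation for illustration, so treat the exact special tokens as assumptions to check against the model card:

```python
IM_START, IM_END = "<|im_start|>", "<|im_end|>"

def apply_chatml(messages, add_generation_prompt=True):
    """Render a message list in the ChatML layout used by Qwen chat models.

    A hand-rolled approximation of transformers' apply_chat_template,
    useful for seeing the raw prompt string the model actually reads.
    """
    parts = [f"{IM_START}{m['role']}\n{m['content']}{IM_END}" for m in messages]
    if add_generation_prompt:
        parts.append(f"{IM_START}assistant\n")  # the model continues from here
    return "\n".join(parts)

prompt = apply_chatml([
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a quicksort in Python."},
])
print(prompt)
```

The trailing `<|im_start|>assistant` line is the generation prompt: it signals the model to produce the assistant's turn rather than continue the user's text.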
-
9
Qwen2
Alibaba
Unleashing advanced language models for limitless AI possibilities.
Qwen2 is a comprehensive array of advanced language models developed by the Qwen team at Alibaba Cloud. This collection includes various models that range from base to instruction-tuned versions, with parameters from 0.5 billion up to an impressive 72 billion, demonstrating both dense configurations and a Mixture-of-Experts architecture. The Qwen2 lineup is designed to surpass many earlier open-weight models, including its predecessor Qwen1.5, while also competing effectively against proprietary models across several benchmarks in domains such as language understanding, text generation, multilingual capabilities, programming, mathematics, and logical reasoning. Additionally, this cutting-edge series is set to significantly influence the artificial intelligence landscape, providing enhanced functionalities that cater to a wide array of applications. As such, the Qwen2 models not only represent a leap in technological advancement but also pave the way for future innovations in the field.
-
10
Qwen2-VL
Alibaba
Revolutionizing vision-language understanding for advanced global applications.
Qwen2-VL stands as the latest and most sophisticated version of vision-language models in the Qwen lineup, enhancing the groundwork laid by Qwen-VL. This upgraded model demonstrates exceptional abilities, including:
Delivering top-tier performance in understanding images of various resolutions and aspect ratios, with Qwen2-VL particularly shining in visual comprehension challenges such as MathVista, DocVQA, RealWorldQA, and MTVQA, among others.
Handling videos longer than 20 minutes, which allows for high-quality video question answering, engaging conversations, and innovative content generation.
Operating as an intelligent agent that can control devices such as smartphones and robots, Qwen2-VL employs its advanced reasoning abilities and decision-making capabilities to execute automated tasks triggered by visual elements and written instructions.
Offering multilingual capabilities to serve a worldwide audience, Qwen2-VL is now adept at interpreting text in several languages present in images, broadening its usability and accessibility for users from diverse linguistic backgrounds. Furthermore, this extensive functionality positions Qwen2-VL as an adaptable resource for a wide array of applications across various sectors.
-
11
Marco-o1
AIDC-AI
Revolutionizing AI with precision, adaptability, and seamless interaction.
Marco-o1 is a cutting-edge AI framework developed for advanced natural language comprehension and complex, open-ended problem-solving. It is carefully engineered to deliver precise and contextually relevant responses, blending deep linguistic knowledge with an optimized system that boosts speed and efficiency. This model excels in various environments, including interactive chat systems, content creation, technical support, and intricate decision-making tasks, adapting seamlessly to diverse user needs. With a strong emphasis on providing smooth, user-centric experiences, reliability, and compliance with ethical AI principles, Marco-o1 stands out as a premier tool for individuals and businesses seeking intelligent, adaptable, and scalable AI solutions. Furthermore, its incorporation of Monte Carlo Tree Search (MCTS) allows the exploration of multiple reasoning paths by leveraging confidence scores derived from the softmax-adjusted log probabilities of the top-k alternative tokens. This approach guides the model towards the most effective solutions while ensuring a high degree of accuracy. As a result, these features not only bolster the model's performance but also play a crucial role in enhancing user satisfaction and engagement, making it a valuable asset in the evolving landscape of AI technology.
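The confidence scoring described above can be sketched in a few lines: the chosen token's log-probability is softmaxed against its top-k alternatives, and the per-token values are averaged to rank candidate reasoning paths. A simplified illustration, not Marco-o1's actual implementation:

```python
import math

def token_confidence(selected_logprob, alt_logprobs):
    """Softmax the chosen token's log-prob against its top-k alternatives."""
    logprobs = [selected_logprob] + list(alt_logprobs)
    m = max(logprobs)
    exps = [math.exp(lp - m) for lp in logprobs]
    return exps[0] / sum(exps)

def path_confidence(steps):
    """Average per-token confidence over one candidate reasoning path."""
    return sum(token_confidence(sel, alts) for sel, alts in steps) / len(steps)

# Each step: (log-prob of the chosen token, log-probs of its top alternatives).
decisive_path = [(-0.1, [-3.0, -4.0]), (-0.2, [-2.5, -3.5])]   # clear winners
hesitant_path = [(-1.0, [-1.1, -1.2]), (-0.9, [-1.0, -1.1])]   # near ties
print(round(path_confidence(decisive_path), 3),
      round(path_confidence(hesitant_path), 3))
```

A search procedure like MCTS can then prefer expanding the path whose tokens the model was most confident about, which is the guidance mechanism the description refers to.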
-
12
Teuken 7B
OpenGPT-X
Empowering communication across Europe’s diverse linguistic landscape.
Teuken-7B is a cutting-edge multilingual language model designed to address the diverse linguistic landscape of Europe, emerging from the OpenGPT-X initiative. This model has been trained on a dataset where more than half comprises non-English content, effectively encompassing all 24 official languages of the European Union to ensure robust performance across these tongues. One of the standout features of Teuken-7B is its specially crafted multilingual tokenizer, which has been optimized for European languages, resulting in improved training efficiency and reduced inference costs compared to standard monolingual tokenizers. Users can choose between two distinct versions of the model: Teuken-7B-Base, which offers a foundational pre-trained experience, and Teuken-7B-Instruct, fine-tuned to enhance its responsiveness to user inquiries. Both variations are easily accessible on Hugging Face, promoting transparency and collaboration in the artificial intelligence sector while stimulating further advancements. The development of Teuken-7B not only showcases a commitment to fostering AI solutions but also underlines the importance of inclusivity and representation of Europe's rich cultural tapestry in technology. This initiative ultimately aims to bridge communication gaps and facilitate understanding among diverse populations across the continent.
-
13
Qwen2.5-Max
Alibaba
Revolutionary AI model unlocking new pathways for innovation.
Qwen2.5-Max is a cutting-edge Mixture-of-Experts (MoE) model developed by the Qwen team, trained on a vast dataset of over 20 trillion tokens and improved through techniques such as Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF). It outperforms models like DeepSeek V3 in various evaluations, excelling in benchmarks such as Arena-Hard, LiveBench, LiveCodeBench, and GPQA-Diamond, and also achieving impressive results in tests like MMLU-Pro. Users can access this model via an API on Alibaba Cloud, which facilitates easy integration into various applications, and they can also engage with it directly on Qwen Chat for a more interactive experience. Furthermore, Qwen2.5-Max's advanced features and high performance mark a remarkable step forward in the evolution of AI technology. It not only enhances productivity but also opens new avenues for innovation in the field.
-
14
Qwen2.5-VL
Alibaba
Next-level visual assistant transforming interaction with data.
Qwen2.5-VL represents a significant advancement in the Qwen vision-language model series, offering substantial enhancements over the earlier version, Qwen2-VL. This sophisticated model showcases remarkable skills in visual interpretation, capable of recognizing a wide variety of elements in images, including text, charts, and numerous graphical components. Acting as an interactive visual assistant, it possesses the ability to reason and adeptly utilize tools, making it ideal for applications that require interaction on both computers and mobile devices. Additionally, Qwen2.5-VL excels in analyzing lengthy videos, pinpointing relevant segments even in videos that exceed one hour in duration. It also specializes in precisely identifying objects in images, providing bounding boxes or point annotations, and generates well-organized JSON outputs detailing coordinates and attributes. The model is designed to output structured data for various document types, such as scanned invoices, forms, and tables, which proves especially beneficial for sectors like finance and commerce. Available in both base and instruct configurations across 3B, 7B, and 72B sizes, Qwen2.5-VL is accessible on platforms like Hugging Face and ModelScope, broadening its availability for developers and researchers. Furthermore, this model not only enhances the realm of vision-language processing but also establishes a new benchmark for future innovations in this area, paving the way for even more sophisticated applications.
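As an illustration of the structured grounding output described above, a detection response might look like the JSON below; the bbox_2d field name and pixel-coordinate layout follow Qwen's published examples, but treat the exact schema as an assumption to check against the model card:

```python
import json

# Hypothetical example of the structured detection output Qwen2.5-VL can
# emit for a scanned invoice: each entry pairs a label with a pixel-space
# bounding box given as [x1, y1, x2, y2].
raw = """
[
  {"label": "invoice_number", "bbox_2d": [412, 56, 590, 88]},
  {"label": "total_amount",   "bbox_2d": [430, 702, 598, 740]}
]
"""
detections = json.loads(raw)
for det in detections:
    x1, y1, x2, y2 = det["bbox_2d"]
    print(f'{det["label"]}: {x2 - x1} x {y2 - y1} px')
```

Because the output is plain JSON, downstream systems can validate and consume it directly, which is what makes this format useful for document-processing pipelines.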
-
15
Zyphra Zonos
Zyphra
Revolutionary text-to-speech models redefining audio quality standards!
Zyphra is excited to announce the beta launch of Zonos-v0.1, featuring two advanced and real-time text-to-speech models that incorporate high-fidelity voice cloning technology. This release includes a 1.6B transformer model and a 1.6B hybrid model, both distributed under the Apache 2.0 license. Considering the difficulties in measuring audio quality quantitatively, we assert that the quality of output generated by Zonos matches or exceeds that of leading proprietary TTS systems currently on the market. Moreover, we believe that providing access to such high-quality models will significantly enhance progress in TTS research. The model weights for Zonos are readily available on Hugging Face, along with sample inference code hosted in our GitHub repository. In addition, Zonos can be accessed through our model playground and API, which offers simple and competitive flat-rate pricing options for users. To showcase Zonos's performance, we have compiled a series of sample comparisons against existing proprietary models that illustrate its exceptional capabilities. This project underscores our dedication to promoting innovation within the text-to-speech technology sector, and we anticipate that it will inspire further advancements in the field.
-
16
SmolLM2
Hugging Face
Compact language models delivering high performance on any device.
SmolLM2 features a sophisticated range of compact language models designed for effective on-device operations. This assortment includes models with various parameter counts, such as a substantial 1.7 billion, alongside more efficient iterations at 360 million and 135 million parameters, which guarantees optimal functionality on devices with limited resources. The models are particularly adept at text generation and have been fine-tuned for scenarios that demand quick responses and low latency, ensuring they deliver exceptional results in diverse applications, including content creation, programming assistance, and understanding natural language. The adaptability of SmolLM2 makes it a prime choice for developers who wish to embed powerful AI functionalities into mobile devices, edge computing platforms, and other environments where resource availability is restricted. Its thoughtful design exemplifies a dedication to achieving a balance between high performance and user accessibility, thus broadening the reach of advanced AI technologies. Furthermore, the ongoing development of such models signals a promising future for AI integration in everyday technology.
-
17
Mistral Small 3.1
Mistral AI
Unleash advanced AI versatility with unmatched processing power.
Mistral Small 3.1 is an advanced, multimodal, and multilingual AI model that has been made available under the Apache 2.0 license. Building upon the previous Mistral Small 3, this updated version showcases improved text processing abilities and enhanced multimodal understanding, with the capacity to handle an extensive context window of up to 128,000 tokens. It outperforms comparable models like Gemma 3 and GPT-4o Mini, reaching remarkable inference rates of 150 tokens per second. Designed for versatility, Mistral Small 3.1 excels in various applications, including instruction adherence, conversational interaction, visual data interpretation, and executing functions, making it suitable for both commercial and individual AI uses. Its efficient architecture allows it to run smoothly on hardware configurations such as a single RTX 4090 or a Mac with 32GB of RAM, enabling on-device operations. Users have the option to download the model from Hugging Face and explore its features via Mistral AI's developer playground, while it is also embedded in services like Google Cloud Vertex AI and accessible on platforms like NVIDIA NIM. This extensive flexibility empowers developers to utilize its advanced capabilities across a wide range of environments and applications, thereby maximizing its potential impact in the AI landscape. Furthermore, Mistral Small 3.1's innovative design ensures that it remains adaptable to future technological advancements.
-
18
Qwen3
Alibaba
Unleashing groundbreaking AI with unparalleled global language support.
Qwen3, the latest large language model from the Qwen family, introduces a new level of flexibility and power for developers and researchers. With models ranging from the high-performance Qwen3-235B-A22B to the smaller Qwen3-4B, Qwen3 is engineered to excel across a variety of tasks, including coding, math, and natural language processing. The unique hybrid thinking modes allow users to switch between deep reasoning for complex tasks and fast, efficient responses for simpler ones. Additionally, Qwen3 supports 119 languages, making it ideal for global applications. The model has been trained on an unprecedented 36 trillion tokens and leverages cutting-edge reinforcement learning techniques to continually improve its capabilities. Available on multiple platforms, including Hugging Face and ModelScope, Qwen3 is an essential tool for those seeking advanced AI-powered solutions for their projects.
-
19
Devstral
Mistral AI
Unleash coding potential with the ultimate open-source LLM!
Devstral represents a joint initiative by Mistral AI and All Hands AI, creating an open-source large language model designed explicitly for software engineering. This innovative model exhibits exceptional skill in navigating complex codebases, efficiently managing edits across multiple files, and tackling real-world issues, achieving an impressive 46.8% score on the SWE-Bench Verified benchmark, which positions it ahead of all other open-source models. Built upon the foundation of Mistral-Small-3.1, Devstral features a vast context window that accommodates up to 128,000 tokens. It is light enough to run locally on a single Nvidia RTX 4090 GPU or a Mac with 32GB of RAM, and is compatible with several inference frameworks, including vLLM, Transformers, and Ollama. Released under the Apache 2.0 license, Devstral is readily available on various platforms, including Hugging Face, Ollama, Kaggle, Unsloth, and LM Studio, enabling developers to effortlessly incorporate its features into their applications. This model not only boosts efficiency for software engineers but also acts as a crucial tool for anyone engaged in coding tasks, thereby broadening its utility and appeal across the tech community. Furthermore, its open-source nature encourages continuous improvement and collaboration among developers worldwide.
-
20
ZenCtrl
Fotographer AI
Revolutionize creativity with instant, precise image regeneration!
ZenCtrl, developed by Fotographer AI, is a groundbreaking open-source toolkit designed for AI image generation, enabling the creation of high-quality visuals from a single input image without necessitating any prior training. This innovative tool facilitates accurate regeneration of objects and subjects from multiple viewpoints and backgrounds, providing real-time element regeneration that enhances both stability and flexibility during the creative process. Users can effortlessly regenerate subjects from various angles, swap backgrounds or outfits with just a click, and begin producing results immediately, bypassing the need for extensive training. Leveraging advanced image processing techniques, ZenCtrl ensures high precision while reducing the dependency on large training datasets. Its architecture comprises streamlined sub-models, each finely tuned for specific tasks, leading to a lightweight system that yields sharper and more controllable results. The latest version of ZenCtrl brings substantial enhancements to the generation of both subjects and backgrounds, guaranteeing that the final images are not only coherent but also visually captivating. This ongoing improvement demonstrates a dedication to equipping users with the most effective and efficient tools for their creative projects, ensuring that they can achieve their desired outcomes with ease. As the toolkit evolves, users can expect even more features and capabilities that will further streamline their creative workflows.
-
21
Qwen-Image
Alibaba
Transform your ideas into stunning visuals effortlessly.
Qwen-Image is a state-of-the-art multimodal diffusion transformer (MMDiT) foundation model that excels in generating images, rendering text, editing, and understanding visual content. This model is particularly noted for its ability to seamlessly integrate intricate text elements, utilizing both alphabetic and logographic scripts in images while ensuring precision in typography. It accommodates a diverse array of artistic expressions, ranging from photorealistic imagery to impressionism, anime, and minimalist aesthetics. Beyond mere creation, Qwen-Image boasts sophisticated editing capabilities such as style transfer, object addition or removal, enhancement of details, in-image text adjustments, and the manipulation of human poses with straightforward prompts. Additionally, the model’s built-in vision comprehension functions—like object detection, semantic segmentation, depth and edge estimation, novel view synthesis, and super-resolution—significantly bolster its capacity for intelligent visual analysis. Accessible via well-known libraries such as Hugging Face Diffusers, it is also equipped with tools for prompt enhancement, supporting multiple languages and thereby broadening its utility for creators in various disciplines. Overall, Qwen-Image’s extensive functionalities render it an invaluable resource for both artists and developers eager to delve into the confluence of visual art and technological innovation, making it a transformative tool in the creative landscape.
-
22
NVIDIA Cosmos
NVIDIA
Empowering developers with cutting-edge tools for AI innovation.
NVIDIA Cosmos is an innovative platform designed specifically for developers, featuring state-of-the-art generative World Foundation Models (WFMs), sophisticated video tokenizers, robust safety measures, and an efficient data processing and curation system that enhances the development of physical AI technologies. This platform equips developers engaged in fields like autonomous vehicles, robotics, and video analytics AI agents with the tools needed to generate highly realistic, physics-informed synthetic video data, drawing from a vast dataset that includes 20 million hours of both real and simulated footage. As a result, it allows for the quick simulation of future scenarios, the training of world models, and the customization of particular behaviors. The architecture of the platform consists of three main types of WFMs: Cosmos Predict, capable of generating up to 30 seconds of continuous video from diverse input modalities; Cosmos Transfer, which adapts simulations to function effectively across varying environments and lighting conditions, enhancing domain augmentation; and Cosmos Reason, a vision-language model that applies structured reasoning to interpret spatial-temporal data for effective planning and decision-making. Through these advanced capabilities, NVIDIA Cosmos not only accelerates the innovation cycle in physical AI applications but also promotes significant advancements across a wide range of industries, ultimately contributing to the evolution of intelligent technologies.
-
23
DeepSeek V3.1
DeepSeek
Revolutionizing AI with unmatched power and flexibility.
DeepSeek V3.1 emerges as a groundbreaking open-weight large language model, featuring 685 billion parameters and an extensive 128,000-token context window that enables it to process documents roughly the length of a 400-page novel in a single pass. This model encompasses integrated capabilities for conversation, reasoning, and code generation within a unified hybrid framework that effectively blends these varied functionalities. Additionally, V3.1 supports multiple tensor formats, allowing developers to optimize performance across different hardware configurations. Initial benchmark tests indicate impressive outcomes, with a notable score of 71.6% on the Aider coding benchmark, placing it on par with or even ahead of competitors like Claude Opus 4, all while maintaining a significantly lower cost. Launched under an open-source license on Hugging Face with minimal promotion, DeepSeek V3.1 aims to transform the availability of advanced AI solutions, potentially challenging the traditional landscape dominated by proprietary models. The model's innovative features and affordability are likely to attract a diverse array of developers eager to implement state-of-the-art AI technologies in their applications, thus fostering a new wave of creativity and efficiency in the tech industry.
-
24
DeepSeek-V3.1-Terminus
DeepSeek
Refined reliability and consistency for agentic workflows.
DeepSeek has introduced DeepSeek-V3.1-Terminus, an enhanced version of the V3.1 architecture that incorporates user feedback to improve output reliability, consistency, and overall agent performance. This upgrade notably reduces the frequency of mixed Chinese and English text as well as unintended anomalies, resulting in a more polished and cohesive language generation experience. Furthermore, the update overhauls both the code agent and search agent subsystems, yielding better and more consistent performance across a range of benchmarks. DeepSeek-V3.1-Terminus is released as an open-source model, with its weights made available on Hugging Face, thereby facilitating easier access for the community to utilize its functionalities. The model's architecture stays consistent with that of DeepSeek-V3, ensuring compatibility with existing deployment strategies, while updated inference demonstrations are provided for users to investigate its capabilities. Impressively, the model functions at a massive scale of 685 billion parameters and accommodates various tensor formats, such as FP8, BF16, and F32, which enhances its adaptability in diverse environments. This versatility empowers developers to select the most appropriate format tailored to their specific requirements and resource limitations, thereby optimizing performance in their respective applications.
-
25
DeepSeek-V3.2-Exp
DeepSeek
Experience lightning-fast efficiency with cutting-edge AI technology!
We are excited to present DeepSeek-V3.2-Exp, our latest experimental model that evolves from V3.1-Terminus, incorporating the cutting-edge DeepSeek Sparse Attention (DSA) technology designed to significantly improve both training and inference speeds for longer contexts. This innovative DSA framework enables fine-grained sparse attention while preserving the quality of outputs, resulting in enhanced performance for long-context tasks alongside reduced computational costs. Benchmark evaluations demonstrate that V3.2-Exp delivers performance on par with V3.1-Terminus, all while benefiting from these efficiency gains. The model is fully functional across various platforms, including app, web, and API. In addition, to promote wider accessibility, we have reduced DeepSeek API pricing by more than 50%, effective immediately. During this transition phase, users will have access to V3.1-Terminus through a temporary API endpoint until October 15, 2025. We invite feedback on DSA via our dedicated feedback portal, encouraging community engagement. To further support this initiative, DeepSeek-V3.2-Exp is now available as open source, with model weights and key technologies (including essential GPU kernels in TileLang and CUDA) published on Hugging Face, and we are eager to see how the community will leverage this significant technological advancement. As we unveil this new chapter, we anticipate fruitful interactions and innovative applications arising from the collective contributions of our user base.
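The intuition behind sparse attention can be shown in toy form: each query attends only to its top-k highest-scoring keys rather than to every position, shrinking the work done per query. This is a didactic sketch, not DSA itself, which selects tokens with a trained indexer and runs as fused GPU kernels:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    total = sum(es)
    return [e / total for e in es]

def sparse_attention(query, keys, values, k=2):
    """Attend only to the k highest-scoring keys (toy top-k sparsity).

    Full attention softmaxes over all len(keys) positions; restricting
    both the softmax and the value mix to k of them is the efficiency
    idea behind DSA-style sparse attention, in its simplest form.
    """
    d = len(query)
    scores = [sum(q * kv for q, kv in zip(query, key)) / math.sqrt(d)
              for key in keys]
    top = sorted(range(len(keys)), key=lambda i: scores[i], reverse=True)[:k]
    weights = softmax([scores[i] for i in top])
    out = [sum(w * values[i][dim] for w, i in zip(weights, top))
           for dim in range(len(values[0]))]
    return out, sorted(top)

keys = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [0.0, 0.0]]
values = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [9.9, 9.9]]
out, attended = sparse_attention([1.0, 0.0], keys, values, k=2)
print(attended, [round(v, 3) for v in out])
```

Here the query ignores two of the four positions entirely; at a 128K-token context that kind of pruning is where the speed and cost savings come from.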