-
1
DeepSeek-V2
DeepSeek
Revolutionizing AI with unmatched efficiency and superior language understanding.
DeepSeek-V2 is an advanced Mixture-of-Experts (MoE) language model created by DeepSeek-AI, recognized for its economical training and efficient inference. The model comprises 236 billion total parameters, of which only 21 billion are activated per token, and supports a context length of up to 128K tokens. It employs Multi-head Latent Attention (MLA) to speed up inference by compressing the Key-Value (KV) cache, and uses the DeepSeekMoE architecture for cost-effective training through sparse computation. Compared with its predecessor, DeepSeek 67B, it achieves a 42.5% reduction in training costs, a 93.3% reduction in KV cache size, and a 5.76-fold increase in maximum generation throughput. Trained on a corpus of 8.1 trillion tokens, DeepSeek-V2 demonstrates strong performance in language understanding, programming, and reasoning, placing it among the leading open-source models available today.
-
2
Falcon Mamba 7B
Technology Innovation Institute (TII)
Revolutionary open-source model redefining efficiency in AI.
Falcon Mamba 7B is the first State Space Language Model (SSLM) in the Falcon series, introducing a new architecture to TII's open-source lineup. Ranked by Hugging Face as the leading open-source SSLM worldwide, it sets a new benchmark for efficiency in artificial intelligence. Unlike transformer models, SSLMs require considerably less memory and can generate long text sequences without additional resource costs. Falcon Mamba 7B outperforms prominent transformer models of comparable size, including Meta’s Llama 3.1 8B and Mistral’s 7B. The release underscores Abu Dhabi’s commitment to advancing AI research and solidifies the region's role as a key contributor to the global AI sector, opening new avenues for research and development in state-space architectures.
-
3
Falcon 2
Technology Innovation Institute (TII)
Elevate your AI experience with groundbreaking multimodal capabilities!
Falcon 2 11B is an adaptable open-source AI model that boasts support for various languages and integrates multimodal capabilities, particularly excelling in tasks that connect vision and language. It surpasses Meta’s Llama 3 8B and matches the performance of Google’s Gemma 7B, as confirmed by the Hugging Face Leaderboard. Looking ahead, the development strategy involves implementing a 'Mixture of Experts' approach designed to significantly enhance the model's capabilities, pushing the boundaries of AI technology even further. This anticipated growth is expected to yield groundbreaking innovations, reinforcing Falcon 2's status within the competitive realm of artificial intelligence. Furthermore, such advancements could pave the way for novel applications that redefine how we interact with AI systems.
-
4
Falcon 3
Technology Innovation Institute (TII)
Empowering innovation with efficient, accessible AI for everyone.
Falcon 3 is an open-source large language model introduced by the Technology Innovation Institute (TII), with the goal of expanding access to cutting-edge AI technologies. It is engineered for optimal efficiency, making it suitable for use on lightweight devices such as laptops while still delivering impressive performance. The Falcon 3 collection consists of four scalable models, each tailored for specific uses and capable of supporting a variety of languages while keeping resource use to a minimum. This latest edition in TII's lineup of language models establishes a new standard for reasoning, language understanding, following instructions, coding, and solving mathematical problems. By combining strong performance with resource efficiency, Falcon 3 aims to make advanced AI more accessible, enabling users from diverse fields to take advantage of sophisticated technology without the need for significant computational resources. Additionally, this initiative not only enhances the skills of individual users but also promotes innovation across various industries by providing easy access to advanced AI tools, ultimately transforming how technology is utilized in everyday practices.
-
5
Qwen2.5-Max
Alibaba
Revolutionary AI model unlocking new pathways for innovation.
Qwen2.5-Max is a cutting-edge Mixture-of-Experts (MoE) model developed by the Qwen team, trained on a vast dataset of over 20 trillion tokens and improved through techniques such as Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF). It outperforms models like DeepSeek V3 in various evaluations, excelling in benchmarks such as Arena-Hard, LiveBench, LiveCodeBench, and GPQA-Diamond, and also achieving impressive results in tests like MMLU-Pro. Users can access this model via an API on Alibaba Cloud, which facilitates easy integration into various applications, and they can also engage with it directly on Qwen Chat for a more interactive experience. Furthermore, Qwen2.5-Max's advanced features and high performance mark a remarkable step forward in the evolution of AI technology. It not only enhances productivity but also opens new avenues for innovation in the field.
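For developers, integration is straightforward because the Alibaba Cloud endpoint follows the familiar OpenAI chat-completions shape. The sketch below builds such a request; the base URL and the `qwen-max` model identifier are assumptions drawn from Alibaba's Model Studio documentation at the time of writing and should be verified before use.

```python
import json

# Hedged sketch: Qwen2.5-Max is served through an OpenAI-compatible API on
# Alibaba Cloud. The base URL and model id below are assumptions; check the
# current Model Studio documentation before relying on them.
BASE_URL = "https://dashscope-intl.aliyuncs.com/compatible-mode/v1"
MODEL_ID = "qwen-max"  # assumed identifier for Qwen2.5-Max

def build_chat_request(prompt: str,
                       system: str = "You are a helpful assistant.") -> dict:
    """Build an OpenAI-style chat-completions payload for the Qwen endpoint."""
    return {
        "model": MODEL_ID,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
    }

payload = build_chat_request("Explain Mixture-of-Experts in one sentence.")
print(json.dumps(payload, indent=2))
```

Because the payload is plain OpenAI-style JSON, the same request can be sent with any OpenAI-compatible client library by pointing it at `BASE_URL` with an Alibaba Cloud API key.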
-
6
Qwen2.5-VL
Alibaba
Next-level visual assistant transforming interaction with data.
The Qwen2.5-VL represents a significant advancement in the Qwen vision-language model series, offering substantial enhancements over the earlier version, Qwen2-VL. This sophisticated model showcases remarkable skills in visual interpretation, capable of recognizing a wide variety of elements in images, including text, charts, and numerous graphical components. Acting as an interactive visual assistant, it possesses the ability to reason and adeptly utilize tools, making it ideal for applications that require interaction on both computers and mobile devices. Additionally, Qwen2.5-VL excels in analyzing lengthy videos, being able to pinpoint relevant segments within those that exceed one hour in duration. It also specializes in precisely identifying objects in images, providing bounding boxes or point annotations, and generates well-organized JSON outputs detailing coordinates and attributes. The model is designed to output structured data for various document types, such as scanned invoices, forms, and tables, which proves especially beneficial for sectors like finance and commerce. Available in both base and instruct configurations across 3B, 7B, and 72B models, Qwen2.5-VL is accessible on platforms like Hugging Face and ModelScope, broadening its availability for developers and researchers. Furthermore, this model not only enhances the realm of vision-language processing but also establishes a new benchmark for future innovations in this area, paving the way for even more sophisticated applications.
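Because the model emits its grounding results as JSON, downstream code can consume them directly. The sketch below parses a sample of that output; the schema (a JSON list of objects with `bbox_2d` and `label` fields) follows the format shown in Qwen's published grounding examples, but the exact field names should be treated as an assumption and confirmed against the model card.

```python
import json

# Minimal sketch of consuming Qwen2.5-VL's structured grounding output.
# The "bbox_2d"/"label" schema is assumed from Qwen's grounding examples;
# verify the exact field names against the model documentation.
raw_model_output = """
[
  {"bbox_2d": [84, 112, 310, 420], "label": "invoice total"},
  {"bbox_2d": [40, 36, 280, 72], "label": "company name"}
]
"""

def parse_detections(text: str) -> list:
    """Return (label, (x1, y1, x2, y2)) pairs from the model's JSON output."""
    return [(d["label"], tuple(d["bbox_2d"])) for d in json.loads(text)]

detections = parse_detections(raw_model_output)
for label, box in detections:
    print(f"{label}: {box}")
```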
-
7
Ray2
Luma AI
Transform your ideas into stunning, cinematic visual stories.
Ray2 is an innovative video generation model that stands out for its ability to create hyper-realistic visuals with smooth, logical motion. It shows a remarkable talent for understanding text prompts, with support for image and video inputs planned. Developed with Luma’s cutting-edge multi-modal architecture, Ray2 possesses ten times the computational power of its predecessor, Ray1, marking a significant technological leap. Its arrival signals a new phase in video generation, where swift, coherent movements and intricate details combine with a well-structured narrative. These advancements greatly enhance the practicality of the generated content, yielding videos that are increasingly suitable for professional production. At present, Ray2 specializes in text-to-video generation; future expansions will add image-to-video, video-to-video, and editing capabilities. The model raises the bar for motion fidelity, producing smooth, cinematic results, and lets creators bring their imaginative ideas to life with precise camera movements that serve the narrative.
-
8
Zyphra Zonos
Zyphra
Revolutionary text-to-speech models redefining audio quality standards!
Zyphra has announced the beta launch of Zonos-v0.1, featuring two advanced, real-time text-to-speech models with high-fidelity voice cloning. The release includes a 1.6B-parameter transformer model and a 1.6B-parameter hybrid model, both distributed under the Apache 2.0 license. While audio quality is difficult to measure quantitatively, Zyphra reports that Zonos's output matches or exceeds that of leading proprietary TTS systems currently on the market, and the company believes that open access to models of this quality will accelerate progress in TTS research. The model weights are available on Hugging Face, with sample inference code hosted in Zyphra's GitHub repository. Zonos can also be accessed through Zyphra's model playground and API, which offers simple flat-rate pricing. To showcase Zonos's performance, Zyphra has compiled sample comparisons against existing proprietary models. The project underscores the company's commitment to innovation in the text-to-speech sector.
-
9
Deep Research
Perplexity
Perplexity Deep Research is an advanced AI-driven platform designed for thorough investigations into a wide array of complex subjects. By emulating human research techniques, it methodically examines, analyzes, and interprets different documents while constantly enhancing its approach to achieve a profound understanding of the topic at hand. Once the research process is complete, Deep Research organizes the amassed information into detailed reports, which users can easily export as PDFs or share online. This innovative tool is particularly valuable across various sectors, including finance, marketing, technology, health, and travel planning, empowering users to conduct professional-level research with exceptional efficiency. At present, Deep Research is accessible online, with plans to broaden its availability to iOS, Android, and Mac platforms in the future, providing free access with unlimited queries for Pro subscribers and imposing daily limits for non-subscribers. Furthermore, the intuitive interface is designed to ensure that even those with little experience can navigate the platform effortlessly and take advantage of its sophisticated capabilities. The versatility and user-centric design of Deep Research make it an indispensable asset for anyone looking to dive deep into research across multiple domains.
-
10
Sonar
Perplexity
Revolutionizing search with precise, clear answers instantly.
Perplexity has introduced an enhanced AI search engine named Sonar, built on the Llama 3.3 70B model. This latest version of Sonar has undergone additional training to increase the precision of information and improve the clarity of responses within Perplexity's standard search functionality. These upgrades aim to offer users answers that are not only accurate but also easier to understand, all while maintaining the platform's well-known speed and efficiency. Moreover, Sonar is equipped with the ability to conduct real-time, extensive web research and provide answers to questions, enabling developers to easily integrate these features into their applications through a lightweight and budget-friendly API. In addition, the Sonar API supports advanced models such as sonar-reasoning-pro and sonar-pro, which are specifically tailored for complex tasks that require deep contextual understanding and retention. These advanced models can provide more detailed answers, resulting in an average of double the citations compared to previous iterations, thereby greatly enhancing the transparency and reliability of the information offered. With these significant advancements, Sonar aims to set a new standard in delivering exceptional search experiences to its users, ensuring they receive the best possible information available.
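Since Sonar's answers arrive with their supporting citations, a common integration task is extracting both from the API response. The sketch below does this for a sample payload; the response layout shown (OpenAI-style `choices` plus a top-level `citations` list of URLs) is an assumption based on Perplexity's public API docs and should be re-checked before use.

```python
# Hedged sketch of reading a Sonar API response. The field layout below
# (OpenAI-style "choices" plus a top-level "citations" list) is assumed
# from Perplexity's public documentation; verify it before relying on it.
sample_response = {
    "model": "sonar-pro",
    "citations": [
        "https://example.com/source-a",
        "https://example.com/source-b",
    ],
    "choices": [
        {"message": {"role": "assistant",
                     "content": "Answer text with cited claims [1][2]."}}
    ],
}

def answer_with_sources(resp: dict) -> tuple:
    """Pull the answer text and its source URLs out of a Sonar response."""
    text = resp["choices"][0]["message"]["content"]
    return text, list(resp.get("citations", []))

text, sources = answer_with_sources(sample_response)
print(text)
print(f"{len(sources)} sources cited")
```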
-
11
SuperGrok
xAI
Elevate your AI experience with superior features and affordability.
SuperGrok is an upgraded version of xAI's AI, Grok, boasting enhanced features such as access to Grok 3, unlimited image generation capabilities, improved reasoning abilities, and the option to perform research inquiries. This service is positioned as a potentially better and more cost-effective alternative to other premium AI platforms on the market. Furthermore, SuperGrok is designed to appeal to users who seek a well-rounded AI experience that balances both high quality and affordability, ensuring that they have all the tools they need at their fingertips. Ultimately, it represents a significant step forward for those interested in leveraging advanced AI technology.
-
12
Mistral Saba
Mistral AI
Empowering regional applications with speed, precision, and flexibility.
Mistral Saba is a sophisticated model with 24 billion parameters, trained on meticulously curated datasets from the Middle East and South Asia. It outperforms models more than five times its size, providing accurate and relevant responses while being markedly faster and more economical, and it serves as a strong foundation for highly tailored regional applications. The model is available via API and can also be deployed locally to meet customers' specific security needs. Like the recently launched Mistral Small 3, it is lightweight enough to run on single-GPU systems, achieving response rates of over 150 tokens per second. Mistral Saba reflects the rich cultural interconnections between the Middle East and South Asia, supporting Arabic as well as a variety of Indian languages, with particular strength in languages of South Indian origin such as Tamil. This linguistic breadth makes it flexible for multinational use across these interconnected regions, and its architecture allows straightforward integration into a wide range of platforms and sectors.
-
13
R1 1776
Perplexity AI
Empowering innovation through open-source AI for all.
Perplexity AI has unveiled R1 1776 as an open-source large language model (LLM) constructed on the DeepSeek R1 framework, aimed at promoting transparency and facilitating collaborative endeavors in AI development. This release allows researchers and developers to delve into the model's architecture and source code, enabling them to refine and adapt it for various applications. Through the public availability of R1 1776, Perplexity AI aspires to stimulate innovation while maintaining ethical principles within the AI industry. This initiative not only empowers the community but also cultivates a culture of shared knowledge and accountability among those working in AI. Furthermore, it represents a significant step towards democratizing access to advanced AI technologies.
-
14
Ai2 OLMoE
The Allen Institute for Artificial Intelligence
Unlock innovative AI solutions with secure, on-device exploration.
Ai2 OLMoE is a completely open-source language model that utilizes a mixture-of-experts approach, designed to operate fully on-device, which allows users to explore its capabilities in a secure and private environment. The primary goal of this application is to aid researchers in enhancing on-device intelligence while enabling developers to rapidly prototype innovative AI applications without relying on cloud services. As a highly efficient version within the Ai2 OLMo model family, OLMoE empowers users to engage with advanced local models in practical situations, explore strategies to improve smaller AI systems, and locally test their models using the provided open-source framework. Furthermore, OLMoE can be smoothly integrated into a variety of iOS applications, prioritizing user privacy and security by functioning entirely on-device. Users can easily share the results of their conversations with friends or colleagues, enjoying the benefits of a completely open-source model and application code. This makes Ai2 OLMoE an outstanding resource for personal experimentation and collaborative research, offering extensive opportunities for innovation and discovery in the field of artificial intelligence. By leveraging OLMoE, users can contribute to a growing ecosystem of on-device AI solutions that respect user privacy while facilitating cutting-edge advancements.
-
15
SmolLM2
Hugging Face
Compact language models delivering high performance on any device.
SmolLM2 features a sophisticated range of compact language models designed for effective on-device operations. This assortment includes models with various parameter counts, such as a substantial 1.7 billion, alongside more efficient iterations at 360 million and 135 million parameters, which guarantees optimal functionality on devices with limited resources. The models are particularly adept at text generation and have been fine-tuned for scenarios that demand quick responses and low latency, ensuring they deliver exceptional results in diverse applications, including content creation, programming assistance, and understanding natural language. The adaptability of SmolLM2 makes it a prime choice for developers who wish to embed powerful AI functionalities into mobile devices, edge computing platforms, and other environments where resource availability is restricted. Its thoughtful design exemplifies a dedication to achieving a balance between high performance and user accessibility, thus broadening the reach of advanced AI technologies. Furthermore, the ongoing development of such models signals a promising future for AI integration in everyday technology.
-
16
QwQ-Max-Preview
Alibaba
Unleashing advanced AI for complex challenges and collaboration.
QwQ-Max-Preview represents an advanced AI model built on the Qwen2.5-Max architecture, designed to demonstrate exceptional abilities in areas such as intricate reasoning, mathematical challenges, programming tasks, and agent-based activities. This preview highlights its improved functionalities across various general-domain applications, showcasing a strong capability to handle complex workflows effectively. Set to be launched as open-source software under the Apache 2.0 license, QwQ-Max-Preview is expected to feature substantial enhancements and refinements in its final version. In addition to its technical advancements, the model plays a vital role in fostering a more inclusive AI landscape, which is further supported by the upcoming release of the Qwen Chat application and streamlined model options like QwQ-32B, aimed at developers seeking local deployment alternatives. This initiative not only enhances accessibility for a broader audience but also stimulates creativity and progress within the AI community, ensuring that diverse voices can contribute to the field's evolution. The commitment to open-source principles is likely to inspire further exploration and collaboration among developers.
-
17
Octave TTS
Hume AI
Revolutionize storytelling with expressive, customizable, human-like voices.
Hume AI has introduced Octave, a groundbreaking text-to-speech platform that leverages cutting-edge language model technology to deeply grasp and interpret the context of words, enabling it to generate speech that embodies the appropriate emotions, rhythm, and cadence. In contrast to traditional TTS systems that merely vocalize text, Octave emulates the artistry of a human performer, delivering dialogues with rich expressiveness tailored to the specific content being conveyed. Users can create a diverse range of unique AI voices by providing descriptive prompts like "a skeptical medieval peasant," which allows for personalized voice generation that captures specific character nuances or situational contexts. Additionally, Octave enables users to modify emotional tone and speaking style using simple natural language commands, making it easy to request changes such as "speak with more enthusiasm" or "whisper in fear" for precise customization of the output. This high level of interactivity significantly enhances the user experience, creating a more captivating and immersive auditory journey for listeners. As a result, Octave not only revolutionizes text-to-speech technology but also opens new avenues for creative expression and storytelling.
-
18
Scribe
ElevenLabs
Transforming transcription with unparalleled accuracy and adaptability!
ElevenLabs has introduced Scribe, an advanced Automatic Speech Recognition (ASR) model designed to deliver highly accurate transcriptions in 99 languages. The system is engineered to handle a diverse array of real-world audio scenarios, incorporating features like word-level timestamps, speaker diarization, and audio-event tagging. In benchmark tests such as FLEURS and Common Voice, Scribe has surpassed top competitors, including Gemini 2.0 Flash, Whisper Large V3, and Deepgram Nova-3, achieving word accuracy rates of 98.7% for Italian and 96.7% for English. Scribe also significantly reduces errors for languages that have historically been difficult to transcribe, such as Serbian, Cantonese, and Malayalam, where rival models often report error rates exceeding 40%. Integration is straightforward: developers can add Scribe to their applications through ElevenLabs' speech-to-text API, which returns structured JSON transcripts complete with detailed annotations. This combination of accessibility, performance, and adaptability stands to improve transcription quality and user experience across a multitude of applications.
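The structured transcripts lend themselves to simple post-processing, such as grouping words by speaker. The sketch below assumes a JSON shape with a `words` list carrying timestamps and speaker ids, loosely modeled on ElevenLabs' speech-to-text documentation; the exact field names are an assumption to verify against the API reference.

```python
# Sketch of post-processing a Scribe-style transcript. The JSON shape
# below (a "words" list with timestamps and speaker ids) is an assumption
# based on ElevenLabs' docs; verify field names before relying on it.
transcript = {
    "language_code": "en",
    "words": [
        {"text": "Hello", "start": 0.05, "end": 0.41, "speaker_id": "speaker_0"},
        {"text": "there", "start": 0.48, "end": 0.80, "speaker_id": "speaker_0"},
        {"text": "Hi", "start": 1.10, "end": 1.30, "speaker_id": "speaker_1"},
    ],
}

def by_speaker(words: list) -> dict:
    """Group word-level output into one text string per speaker."""
    grouped = {}
    for w in words:
        grouped.setdefault(w["speaker_id"], []).append(w["text"])
    return {spk: " ".join(parts) for spk, parts in grouped.items()}

speakers = by_speaker(transcript["words"])
print(speakers)
```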
-
19
QwQ-32B
Alibaba
Revolutionizing AI reasoning with efficiency and innovation.
The QwQ-32B model, developed by the Qwen team at Alibaba Cloud, marks a notable leap forward in AI reasoning, specifically designed to enhance problem-solving capabilities. With an impressive 32 billion parameters, it competes with top-tier models like DeepSeek's R1, which boasts a staggering 671 billion parameters. This exceptional efficiency arises from its streamlined parameter usage, allowing QwQ-32B to effectively address intricate challenges, including mathematical reasoning, programming, and various problem-solving tasks, all while using fewer resources. It can manage a context length of up to 32,000 tokens, demonstrating its proficiency in processing extensive input data. Furthermore, QwQ-32B is accessible via Alibaba's Qwen Chat service and is released under the Apache 2.0 license, encouraging collaboration and innovation within the AI development community. As it combines advanced features with efficient processing, QwQ-32B has the potential to significantly influence advancements in artificial intelligence technology. Its unique capabilities position it as a valuable tool for developers and researchers alike.
-
20
Command A
Cohere AI
Maximize efficiency, minimize costs, transform your enterprise today!
Cohere has introduced Command A, a cutting-edge AI model designed to maximize efficiency while using minimal computational power. The model matches or exceeds the performance of top contenders such as GPT-4o and DeepSeek-V3 on enterprise tasks that require advanced agentic abilities, while significantly reducing computing costs. Tailored for scenarios that demand fast, effective AI responses, Command A lets organizations tackle intricate tasks across sectors without sacrificing performance or resource efficiency, optimizing workflows and enhancing overall productivity. As businesses increasingly seek to integrate AI into their operations, Command A stands out as a practical solution for modern enterprises.
-
21
Mistral Large 2
Mistral AI
Unleash innovation with advanced AI for limitless potential.
Mistral AI has unveiled the Mistral Large 2, an advanced AI model engineered to perform exceptionally well across various fields, including code generation, multilingual comprehension, and complex reasoning tasks. Boasting a remarkable 128k context window, this model supports a vast selection of languages such as English, French, Spanish, and Arabic, as well as more than 80 programming languages. Tailored for high-throughput single-node inference, Mistral Large 2 is ideal for applications that demand substantial context management. Its outstanding performance on benchmarks like MMLU, alongside enhanced abilities in code generation and reasoning, ensures both precision and effectiveness in outcomes. Moreover, the model is equipped with improved function calling and retrieval functionalities, which are especially advantageous for intricate business applications. This versatility positions Mistral Large 2 as a formidable asset for developers and enterprises eager to harness cutting-edge AI technologies for innovative solutions, ultimately driving efficiency and productivity in their operations.
-
22
Nova-3
Deepgram
Revolutionizing speech recognition for seamless, multilingual communication solutions.
Deepgram's Nova-3 signifies a revolutionary step forward in speech-to-text technology, achieving new heights of accuracy and efficiency designed specifically for demanding, real-world scenarios. Its advanced ability for real-time multilingual transcription allows for seamless interactions that incorporate various languages, presenting a major advancement for industries such as global customer support and emergency services. Users benefit from the model's self-serve customization option, dubbed Keyterm Prompting, which enables them to swiftly adjust up to 100 key terms pertinent to their sector without needing to undergo extensive retraining of the entire model. This flexibility not only enhances the recognition of industry-specific language and terminology but also expands its usefulness across multiple sectors. Furthermore, Nova-3 exhibits impressive performance enhancements, featuring a 54.3% reduction in word error rate for streaming applications and a 47.4% decrease for batch processing when compared to rival models. Such remarkable progress establishes Nova-3 as an outstanding solution for organizations looking to improve their speech recognition capabilities across a diverse array of applications, helping them maintain a strong competitive edge in an ever-changing market. Consequently, businesses can look forward to heightened communication effectiveness and greater operational productivity, ultimately fostering growth and innovation.
-
23
Mistral Small 3.1
Mistral
Unleash advanced AI versatility with unmatched processing power.
Mistral Small 3.1 is an advanced, multimodal, and multilingual AI model that has been made available under the Apache 2.0 license. Building upon the previous Mistral Small 3, this updated version showcases improved text processing abilities and enhanced multimodal understanding, with the capacity to handle an extensive context window of up to 128,000 tokens. It outperforms comparable models like Gemma 3 and GPT-4o Mini, reaching remarkable inference rates of 150 tokens per second. Designed for versatility, Mistral Small 3.1 excels in various applications, including instruction adherence, conversational interaction, visual data interpretation, and executing functions, making it suitable for both commercial and individual AI uses. Its efficient architecture allows it to run smoothly on hardware configurations such as a single RTX 4090 or a Mac with 32GB of RAM, enabling on-device operations. Users have the option to download the model from Hugging Face and explore its features via Mistral AI's developer playground, while it is also embedded in services like Google Cloud Vertex AI and accessible on platforms like NVIDIA NIM. This extensive flexibility empowers developers to utilize its advanced capabilities across a wide range of environments and applications, thereby maximizing its potential impact in the AI landscape. Furthermore, Mistral Small 3.1's innovative design ensures that it remains adaptable to future technological advancements.
-
24
EXAONE Deep
LG
Unleash potent language models for advanced reasoning tasks.
EXAONE Deep is a suite of sophisticated language models developed by LG AI Research, featuring configurations of 2.4 billion, 7.8 billion, and 32 billion parameters. These models are particularly adept at tackling a range of reasoning tasks, excelling in domains like mathematics and programming evaluations. Notably, the 2.4B variant stands out among its peers of comparable size, while the 7.8B model surpasses both open-weight counterparts and the proprietary model OpenAI o1-mini. Additionally, the 32B variant competes strongly with leading open-weight models in the industry. The accompanying repository not only provides comprehensive documentation, including performance metrics and quick-start guides for utilizing EXAONE Deep models with the Transformers library, but also offers in-depth explanations of quantized EXAONE Deep weights structured in AWQ and GGUF formats. Users will also find instructions on how to operate these models locally using tools like llama.cpp and Ollama, thereby broadening their understanding of the EXAONE Deep models' potential and ensuring easier access to their powerful capabilities. This resource aims to empower users by facilitating a deeper engagement with the advanced functionalities of the models.
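For the local-deployment route via Ollama mentioned above, requests go to Ollama's documented REST API. The sketch below builds such a request; the `/api/generate` route and its `model`/`prompt`/`stream` fields match Ollama's API, but the model tag `exaone-deep:7.8b` is an assumption that should be checked against the Ollama library listing.

```python
import json

# Sketch of querying a locally served EXAONE Deep model through Ollama's
# REST API. The /api/generate route and request fields follow Ollama's
# documented API; the model tag below is an assumption.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(prompt: str,
                           model: str = "exaone-deep:7.8b") -> dict:
    """Build a non-streaming Ollama generate request."""
    return {"model": model, "prompt": prompt, "stream": False}

req = build_generate_request("Prove that the square root of 2 is irrational.")
print(json.dumps(req))
# Sending it (requires a running Ollama server with the model pulled):
#   urllib.request.urlopen(OLLAMA_URL, json.dumps(req).encode())
```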
-
25
QVQ-Max
Alibaba
Revolutionizing visual understanding for smarter decision-making and creativity.
QVQ-Max is a cutting-edge visual reasoning AI that merges detailed observation with sophisticated reasoning to understand and analyze images, videos, and diagrams. This AI can identify objects, read textual labels, and interpret visual data for solving complex math problems or predicting future events in videos. Furthermore, it excels at flexible applications, such as designing illustrations, creating video scripts, and enhancing creative projects. It also assists users in educational contexts by helping with math and physics problems that involve diagrams, offering intuitive explanations of challenging concepts. In daily life, QVQ-Max can guide decision-making, such as suggesting outfits based on wardrobe photos or providing step-by-step cooking advice. As the platform develops, its ability to handle even more complex tasks, like operating devices or playing games, will expand, making it an increasingly valuable tool in various aspects of life and work.