-
1
Falcon 2
Technology Innovation Institute (TII)
Elevate your AI experience with groundbreaking multimodal capabilities!
Falcon 2 11B is an adaptable open-source model with multilingual support and multimodal capabilities, particularly strong at tasks that connect vision and language. On the Hugging Face Leaderboard it outperforms Meta's Llama 3 8B and performs on par with Google's Gemma 7B. Looking ahead, TII's roadmap calls for a 'Mixture of Experts' architecture intended to extend the model's capabilities further and reinforce Falcon 2's position in a competitive field.
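The 'Mixture of Experts' idea mentioned above can be illustrated with a minimal routing sketch. Everything below is generic and assumed for illustration (expert count, top-k, dimensions); TII has not published Falcon 2's MoE configuration.

```python
import numpy as np

# Minimal sketch of top-k expert routing in a Mixture of Experts layer.
# All dimensions here are toy values, not Falcon's actual architecture.
rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # hypothetical expert count
TOP_K = 2         # experts consulted per token
D_MODEL = 16      # hypothetical hidden size

router_w = rng.normal(size=(D_MODEL, NUM_EXPERTS))              # router weights
experts = [rng.normal(size=(D_MODEL, D_MODEL)) for _ in range(NUM_EXPERTS)]

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route each token to its top-k experts and mix their outputs."""
    logits = x @ router_w                                       # (tokens, experts)
    top = np.argsort(logits, axis=-1)[:, -TOP_K:]               # chosen experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        chosen = logits[t, top[t]]
        gates = np.exp(chosen - chosen.max())
        gates /= gates.sum()                                    # softmax over the top-k
        for gate, e in zip(gates, top[t]):
            out[t] += gate * (x[t] @ experts[e])
    return out

tokens = rng.normal(size=(4, D_MODEL))
print(moe_layer(tokens).shape)  # (4, 16)
```

The appeal of this design is that only TOP_K of the NUM_EXPERTS weight matrices are touched per token, so total parameters can grow without a proportional increase in per-token compute.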
-
2
Falcon 3
Technology Innovation Institute (TII)
Empowering innovation with efficient, accessible AI for everyone.
Falcon 3 is an open-source large language model family from the Technology Innovation Institute (TII), built to widen access to capable AI. It is engineered for efficiency, running on lightweight hardware such as laptops while still delivering strong results. The collection comprises four models at different scales, each supporting multiple languages with minimal resource use, and it sets a new standard for TII's lineup in reasoning, language understanding, instruction following, coding, and mathematics. By pairing strong performance with low resource demands, Falcon 3 lets users across many fields apply advanced AI without significant computational infrastructure.
-
3
Qwen2.5-VL
Alibaba
Next-level visual assistant transforming interaction with data.
Qwen2.5-VL is the latest step in the Qwen vision-language series, with substantial improvements over Qwen2-VL. It recognizes a wide variety of elements in images, including text, charts, and other graphical components, and can act as an interactive visual agent, reasoning about and operating tools on both computers and mobile devices. It analyzes long videos, pinpointing relevant segments in footage exceeding an hour, and localizes objects precisely, producing bounding boxes or point annotations along with well-organized JSON output detailing coordinates and attributes. The model can also emit structured data for document types such as scanned invoices, forms, and tables, which is especially useful in finance and commerce. Qwen2.5-VL is available in base and instruct configurations at 3B, 7B, and 72B parameters on platforms like Hugging Face and ModelScope.
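The structured grounding output described above looks roughly like the JSON below. The exact schema (key names, coordinate convention, attribute fields) is an illustrative assumption here, not a guaranteed format; check the model card before parsing real outputs.

```python
import json

# Hypothetical grounding response: a list of detected objects, each with a
# label, a pixel-coordinate bounding box, and any recognized text.
model_output = """
[
  {"label": "invoice_total", "bbox_2d": [412, 733, 561, 760], "text": "$1,284.00"},
  {"label": "logo", "bbox_2d": [24, 18, 140, 72], "text": null}
]
"""

objects = json.loads(model_output)
for obj in objects:
    x1, y1, x2, y2 = obj["bbox_2d"]  # assumed [x1, y1, x2, y2] pixel order
    print(f'{obj["label"]}: {x2 - x1}x{y2 - y1} px at ({x1}, {y1})')
```

Because the coordinates arrive as machine-readable JSON rather than free text, downstream code can crop regions, validate extracted fields, or feed boxes into an OCR pipeline without brittle string parsing.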
-
4
R1 1776
Perplexity AI
Empowering innovation through open-source AI for all.
Perplexity AI has released R1 1776, an open-source large language model built on DeepSeek-R1, aimed at promoting transparency and collaboration in AI development. The open release lets researchers and developers examine the model's architecture and code, then refine and adapt it for their own applications. By making R1 1776 publicly available, Perplexity AI aims to stimulate innovation while upholding ethical practice in the AI industry and fostering shared knowledge and accountability in the community.
-
5
SmolLM2
Hugging Face
Compact language models delivering high performance on any device.
SmolLM2 is a family of compact language models built for on-device use, spanning 1.7 billion, 360 million, and 135 million parameters so that even resource-constrained hardware can run them. The models are tuned for text generation in quick-response, low-latency scenarios and perform well across content creation, programming assistance, and natural-language understanding. That small footprint makes SmolLM2 a strong choice for embedding AI capabilities in mobile devices, edge platforms, and other restricted environments, balancing high performance against broad accessibility.
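The three sizes above invite a simple deployment question: which variant fits a given device? Here is a rough sizing helper under stated assumptions (about 2 bytes per parameter for fp16 weights, ignoring activation and KV-cache overhead); the helper itself is hypothetical, not part of any SmolLM2 tooling.

```python
# Parameter counts for the three released SmolLM2 sizes.
SMOLLM2_VARIANTS = {
    "SmolLM2-135M": 135_000_000,
    "SmolLM2-360M": 360_000_000,
    "SmolLM2-1.7B": 1_700_000_000,
}

def pick_variant(ram_budget_mb: int, bytes_per_param: float = 2.0) -> str:
    """Return the largest variant whose (assumed fp16) weights fit the budget."""
    best = None
    for name, params in sorted(SMOLLM2_VARIANTS.items(), key=lambda kv: kv[1]):
        if params * bytes_per_param / 1e6 <= ram_budget_mb:
            best = name
    if best is None:
        raise ValueError("no variant fits the given budget")
    return best

print(pick_variant(1024))   # 360M weighs ~720 MB; 1.7B (~3.4 GB) does not fit
print(pick_variant(8192))   # the 1.7B variant fits comfortably
```

Quantized weights (e.g. 4-bit) would roughly quarter these footprints, which is why even the 1.7B model is plausible on phones.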
-
6
QwQ-Max-Preview
Alibaba
Unleashing advanced AI for complex challenges and collaboration.
QwQ-Max-Preview is an advanced model built on the Qwen2.5-Max architecture, oriented toward intricate reasoning, mathematical challenges, programming tasks, and agent-based activities, with strong general-domain performance on complex workflows. It is slated for an open-source release under the Apache 2.0 license, with substantial refinements expected in the final version. The launch is paired with the Qwen Chat application and streamlined options such as QwQ-32B for developers seeking local deployment, broadening access to the model family and inviting wider collaboration around it.
-
7
Mistral Large 2
Mistral AI
Unleash innovation with advanced AI for limitless potential.
Mistral AI's Mistral Large 2 is engineered to perform well across code generation, multilingual comprehension, and complex reasoning. It offers a 128k-token context window, supports a wide range of languages including English, French, Spanish, and Arabic, and covers more than 80 programming languages. Designed for high-throughput single-node inference, it suits applications that demand substantial context management. Strong results on benchmarks such as MMLU, together with improved code generation and reasoning, are complemented by enhanced function calling and retrieval capabilities that matter for intricate business applications, making Mistral Large 2 a capable foundation for developers and enterprises.
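The function-calling capability mentioned above follows a common pattern: the application registers tools, the model emits a structured call, and the application executes it and returns the result. The sketch below stubs the model's side entirely; the tool, its data, and the wire format shown are illustrative assumptions, not Mistral's exact API schema.

```python
import json

def get_exchange_rate(base: str, quote: str) -> float:
    """A toy tool the model can call; a real app would query a rates API."""
    rates = {("EUR", "USD"): 1.09}  # stub data for the example
    return rates[(base, quote)]

# Registry mapping tool names (as advertised to the model) to callables.
TOOLS = {"get_exchange_rate": get_exchange_rate}

# Pretend the model decided to call a tool and emitted this JSON:
model_tool_call = (
    '{"name": "get_exchange_rate", '
    '"arguments": {"base": "EUR", "quote": "USD"}}'
)

call = json.loads(model_tool_call)
result = TOOLS[call["name"]](**call["arguments"])
print(f'{call["name"]} -> {result}')  # get_exchange_rate -> 1.09
```

In a real integration, the result would be appended to the conversation as a tool message so the model can compose its final answer from it.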
-
8
QVQ-Max
Alibaba
Revolutionizing visual understanding for smarter decision-making and creativity.
QVQ-Max is a visual reasoning model that merges detailed observation with multi-step reasoning to understand and analyze images, videos, and diagrams. It can identify objects, read textual labels, and interpret visual data to solve complex math problems or predict upcoming events in videos. It also supports creative work such as designing illustrations and drafting video scripts, and it helps in educational contexts with diagram-based math and physics problems, offering intuitive explanations of difficult concepts. In daily life, QVQ-Max can inform decisions, from suggesting outfits based on wardrobe photos to giving step-by-step cooking guidance. As the platform develops, its scope is expected to expand to more complex tasks such as operating devices or playing games.
-
9
Llama 4 Behemoth
Meta
Largest model with 288B active parameters, 16 experts, ~2T total parameters
Meta's Llama 4 Behemoth is an advanced multimodal AI model with 288 billion active parameters, among the most powerful models disclosed to date. Meta reports that it outperforms leading models such as GPT-4.5 and Gemini 2.0 Pro on numerous STEM-focused benchmarks, with strong results in math, reasoning, and image understanding. As the teacher model behind Llama 4 Scout and Llama 4 Maverick, Behemoth drives Meta's model-distillation pipeline, improving both efficiency and performance. It is still in training and has not yet been fully deployed.
-
10
Llama 4 Maverick
Meta
Flagship model with 17B active parameters, 128 experts, 400B total parameters
Meta's Llama 4 Maverick is a multimodal AI model that packs 17 billion active parameters and 128 experts into a high-performance mixture-of-experts design. Meta reports that it surpasses models such as GPT-4o and Gemini 2.0 Flash on reasoning, coding, and image-processing benchmarks. Maverick grounds its text generation in visual data, suiting applications that need both modalities, and it delivers top-tier capability at a fraction of the parameter size of larger models, making it a versatile tool for developers and enterprises alike.
-
11
Llama 4 Scout
Meta
Smaller model with 17B active parameters, 16 experts, 109B total parameters
Llama 4 Scout represents a leap forward in multimodal AI, featuring 17 billion active parameters and a groundbreaking 10 million token context length. With its ability to integrate both text and image data, Llama 4 Scout excels at tasks like multi-document summarization, complex reasoning, and image grounding. It delivers superior performance across various benchmarks and is particularly effective in applications requiring both language and visual comprehension. Scout's efficiency and advanced capabilities make it an ideal solution for developers and businesses looking for a versatile and powerful model to enhance their AI-driven projects.
-
12
ESMFold
Meta
Unlocking life's mysteries through AI's transformative insights.
ESMFold illustrates how artificial intelligence can supply new instruments for studying the natural world, much as the microscope transformed our view of the fine details of life. A great deal of AI research teaches machines to perceive the world in ways that parallel human cognition, but the language of proteins is one humans cannot read directly, and it has resisted even sophisticated computational models. Meta's work shows that the same large language model techniques behind machine translation, natural language processing, speech recognition, and image generation can also uncover insights into biological systems. This interdisciplinary approach points toward discoveries at the intersection of AI and biology, deepening our understanding of life sciences while illuminating the broader reach of artificial intelligence.
-
13
XLNet
Google Brain & Carnegie Mellon University
Revolutionizing language processing with state-of-the-art performance.
XLNet introduces a method for unsupervised language representation learning based on a generalized permutation language modeling objective. It is built on the Transformer-XL architecture, which excels at tasks requiring longer contexts. As a result, XLNet achieves state-of-the-art performance on a range of downstream language tasks, including question answering, natural language inference, sentiment analysis, and document ranking.
-
14
CodeGen
Salesforce
Revolutionize coding with powerful, efficient, open-source synthesis.
CodeGen is an open-source family of models for program synthesis, trained on TPU-v4 hardware. It stands as a competitive open alternative to OpenAI Codex among code generation tools, aimed at enhancing developer productivity and streamlining coding tasks.
-
15
StarCoder
BigCode
Transforming coding challenges into seamless solutions with innovation.
StarCoder and StarCoderBase are large language models for code, trained on permissively licensed data from GitHub spanning more than 80 programming languages, plus Git commits, GitHub issues, and Jupyter notebooks. Comparable in scale to LLaMA, the models have around 15 billion parameters trained on 1 trillion tokens. StarCoder itself is StarCoderBase fine-tuned on a further 35 billion Python tokens.
Our assessments revealed that StarCoderBase outperforms other open-source code LLMs on well-known programming benchmarks and matches or exceeds closed models such as OpenAI's code-cushman-001, the original Codex model that powered early versions of GitHub Copilot. With a context length of over 8,000 tokens, the StarCoder models can process more input than any other open LLM available at release, opening up a wide range of applications. They can also be prompted through a series of dialogues to act as versatile technical assistants, giving developers immediate support on complex coding issues.
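Beyond left-to-right completion, StarCoder was trained with a fill-in-the-middle (FIM) objective, so it can complete code given both the text before and after a gap. A minimal sketch of assembling such a prompt is below; the sentinel token names match the published tokenizer's special tokens as far as we know, but verify them against the bigcode/starcoder tokenizer before relying on them.

```python
# Build a fill-in-the-middle prompt: the model is expected to generate the
# missing middle section after the <fim_middle> sentinel.
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Arrange prefix and suffix around FIM sentinels (PSM ordering)."""
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

prompt = build_fim_prompt(
    prefix="def fibonacci(n):\n    ",
    suffix="\n    return a\n",
)
print(prompt.startswith("<fim_prefix>"))  # True
```

This is what lets editor integrations complete code at the cursor rather than only at the end of a file.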
-
16
Baichuan-13B
Baichuan Intelligent Technology
Unlock limitless potential with cutting-edge bilingual language technology.
Baichuan-13B is a 13-billion-parameter language model from Baichuan Intelligent Technology, released as open source with commercial availability, building on the earlier Baichuan-7B. It leads similarly sized models on key benchmarks in both Chinese and English and ships in two pre-trained configurations: Baichuan-13B-Base and Baichuan-13B-Chat.
The model was trained on 1.4 trillion tokens of high-quality data, a 40% increase in training data over LLaMA-13B, making it the most extensively trained open-source model in the 13B parameter range at release. It is bilingual in Chinese and English, employs ALiBi positional encoding, and has a 4,096-token context window, giving it the flexibility needed for a wide range of natural language processing tasks.
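The ALiBi scheme mentioned above replaces position embeddings with a per-head linear bias added to attention scores, which tends to extrapolate better to sequence lengths beyond training. The sketch below follows the ALiBi paper's recipe with toy dimensions; the head count and sequence length are not Baichuan-13B's actual configuration.

```python
import numpy as np

def alibi_slopes(num_heads: int) -> list:
    """Geometric per-head slopes 2^(-8/n), 2^(-16/n), ... as in the ALiBi
    paper (assumes num_heads is a power of two)."""
    return [2 ** (-8 * (h + 1) / num_heads) for h in range(num_heads)]

def alibi_bias(num_heads: int, seq_len: int) -> np.ndarray:
    """Bias tensor of shape (heads, seq, seq): slope * -(distance to past
    token). Future positions get zero here; causal masking is applied
    separately in a real attention layer."""
    dist = np.arange(seq_len)[None, :] - np.arange(seq_len)[:, None]  # j - i
    dist = np.minimum(dist, 0)
    slopes = np.array(alibi_slopes(num_heads))
    return slopes[:, None, None] * dist[None, :, :]

bias = alibi_bias(num_heads=4, seq_len=5)
print(bias.shape)     # (4, 5, 5)
print(bias[0, 4, 0])  # -1.0: farthest-back token under the steepest slope
```

Because the penalty grows linearly with distance, each head attends with a different effective horizon, and no learned position table caps the usable context.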
-
17
Llama 2
Meta
Revolutionizing AI collaboration with powerful, open-source language models.
We are excited to unveil the latest version of our open-source large language model, including model weights and starting code for the pretrained and fine-tuned Llama models, ranging from 7 billion to 70 billion parameters. The Llama 2 pretrained models were trained on 2 trillion tokens and have double the context length of Llama 1, while the fine-tuned models incorporate over 1 million human annotations. Llama 2 outperforms other open-source language models on many external benchmarks, including tests of reasoning, coding, proficiency, and knowledge. Pretraining used publicly available online data; the fine-tuned variant, Llama-2-chat, adds publicly available instruction datasets on top of the human annotations. The project is backed by a broad coalition of global stakeholders who support our open approach to AI, including companies that provided early feedback and are eager to build on Llama 2, signaling a shift toward more collaborative development of AI technologies.
-
18
Code Llama
Meta
Transforming coding challenges into seamless solutions for everyone.
Code Llama is a language model engineered to produce code from text prompts, and it ranks among the strongest publicly available models for coding tasks. It boosts productivity for experienced developers while lowering the barrier for newcomers learning to program, serving as both a productivity tool and a teaching aid for writing more efficient, better-documented software. Users can supply either code or natural language and receive code together with natural-language explanations in return. Free for both research and commercial use, Code Llama is built on Llama 2 and comes in three versions: the core Code Llama model, Code Llama - Python specialized for Python development, and Code Llama - Instruct, fine-tuned to follow natural-language instructions accurately.
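Since Code Llama - Instruct inherits Llama 2's chat convention, prompts wrap the user request in [INST] ... [/INST] markers, with an optional system message in <<SYS>> tags. The helper below only assembles the prompt string (generation requires the actual weights), and its exact formatting is a sketch of the published convention rather than an official API.

```python
# Assemble a Llama-2-style instruct prompt for Code Llama - Instruct.
def build_instruct_prompt(user_msg: str, system_msg: str = "") -> str:
    """Wrap the request in [INST] markers, with an optional <<SYS>> block."""
    if system_msg:
        user_msg = f"<<SYS>>\n{system_msg}\n<</SYS>>\n\n{user_msg}"
    return f"[INST] {user_msg} [/INST]"

prompt = build_instruct_prompt(
    "Write a Python function that reverses a string.",
    system_msg="Answer with code only.",
)
print(prompt.endswith("[/INST]"))  # True
```

Getting this template right matters in practice: instruction-tuned models tend to degrade noticeably when prompted outside the format they were fine-tuned on.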
-
19
Medical LLM
John Snow Labs
Revolutionizing healthcare with AI-driven language understanding solutions.
John Snow Labs has introduced a large language model tailored specifically to the healthcare industry, intended to change how medical organizations apply artificial intelligence. Built for healthcare practitioners, it combines natural language processing with a deep grounding in medical terminology, clinical workflows, and compliance frameworks, helping providers, researchers, and administrators extract crucial insights, improve patient care, and raise operational efficiency. At its core is comprehensive training on a wide range of healthcare content, including clinical documentation, scholarly articles, and regulatory guidelines, which equips the model to interpret and generate medical language for tasks such as clinical documentation, automated coding, and medical research. By streamlining these workflows, it lets healthcare professionals spend more time on patient care and less on administration.
-
20
Pixtral Large
Mistral AI
Unleash innovation with a powerful multimodal AI solution.
Pixtral Large is a 124-billion-parameter multimodal model from Mistral AI that builds on the earlier Mistral Large 2. Its architecture pairs a 123-billion-parameter multimodal decoder with a 1-billion-parameter vision encoder, letting it interpret documents, graphs, and natural images while retaining excellent text understanding. With a 128,000-token context window, it can process at least 30 high-resolution images in a single prompt. Mistral reports leading results on benchmarks such as MathVista, DocVQA, and VQAv2, ahead of GPT-4o and Gemini-1.5 Pro. The model is released under the Mistral Research License for research and educational use, with a separate Mistral Commercial License for businesses, making it useful both as an academic resource and in commercial applications.
-
21
Liquid AI
Liquid AI
Empowering seamless, transparent AI solutions for everyone’s needs.
At Liquid, our goal is to create capable AI systems that let users effectively build, use, and oversee their own AI solutions, so that integrating AI into any business is seamless, reliable, and efficient. Looking ahead, Liquid aims to design and deploy state-of-the-art AI that is accessible to everyone. Our methodology emphasizes transparent models and organizational openness, in the conviction that transparency cultivates trust and spurs innovation in AI, ultimately benefiting society as a whole.
-
22
Qwen2.5-1M
Alibaba
Revolutionizing long context processing with lightning-fast efficiency!
Qwen2.5-1M is an open-source language model from the Qwen team designed to handle context lengths of up to one million tokens. The release includes two variants, Qwen2.5-7B-Instruct-1M and Qwen2.5-14B-Instruct-1M, the first Qwen models optimized for contexts this long. The team has also released an inference framework built on vLLM with sparse attention mechanisms, which speeds up processing of 1-million-token inputs by three to seven times, along with a technical report detailing the design decisions and ablation studies behind the models.
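A back-of-the-envelope calculation shows why a specialized inference stack is needed at this scale: the KV cache alone becomes enormous at one million tokens. The layer and head dimensions below are assumed round numbers for a 7B-class model with grouped-query attention, not Qwen2.5-7B's exact configuration.

```python
def kv_cache_gb(tokens: int, layers: int, kv_heads: int, head_dim: int,
                bytes_per_elem: int = 2) -> float:
    """Size of fp16 K and V caches: 2 tensors, each
    layers * kv_heads * head_dim elements per token."""
    return 2 * tokens * layers * kv_heads * head_dim * bytes_per_elem / 1e9

# Hypothetical 7B-class config: 28 layers, 4 KV heads (GQA), head_dim 128.
print(round(kv_cache_gb(1_000_000, layers=28, kv_heads=4, head_dim=128), 1))
# ~57 GB of KV cache for a single 1M-token sequence
```

Numbers like this are why sparse attention and chunked prefill matter: dense attention over a cache of this size is prohibitively slow and memory-hungry on a single GPU.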
-
23
DeepSeek R2
DeepSeek
Unleashing next-level AI reasoning for global innovation.
DeepSeek R2 is the much-anticipated successor to DeepSeek R1, the reasoning model from the Chinese startup DeepSeek that drew wide attention at its January 2025 launch by rivaling top-tier models such as OpenAI's o1 at far lower cost. R2 is expected to improve markedly on R1's performance, with faster processing and stronger reasoning in demanding areas such as intricate coding and higher-level mathematics. Building on DeepSeek's Mixture-of-Experts architecture and refined training methodologies, it aims to surpass its predecessor's benchmarks while keeping a low computational footprint, and it is widely expected to extend strong reasoning to languages beyond English, broadening its global applicability.
-
24
PaLM
Google
Unlock innovative potential with powerful, secure language models.
The PaLM API provides a simple and secure way to build on our cutting-edge language models. The initial release offers an efficient model that balances size and performance, with additional model sizes to follow. Alongside the API, MakerSuite is an intuitive tool for quickly prototyping ideas, which will grow to include prompt engineering, synthetic data generation, and custom-model tuning, all underpinned by robust safety protocols. The PaLM API and MakerSuite are currently available to a limited group of developers in Private Preview, with a waitlist to be announced.
-
25
PaLM 2
Google
Revolutionizing AI with advanced reasoning and ethical practices.
PaLM 2 marks a significant advancement in the realm of large language models, furthering Google's legacy of leading innovations in machine learning and ethical AI initiatives.
The model excels at intricate reasoning tasks, including coding, mathematics, classification, question answering, multilingual translation, and natural language generation, outperforming earlier models including its predecessor, PaLM. These gains come from compute-optimal scaling, an improved mixture of pretraining datasets, and refinements to the model's architecture.
PaLM 2 also reflects Google's commitment to responsible AI: it has been evaluated for potential risks, biases, and suitability in both research and commercial contexts. It serves as the foundation for specialized models such as Med-PaLM 2 and Sec-PaLM, and it powers AI features and tools across Google, including Bard and the PaLM API, demonstrating AI's capacity to boost productivity and creative work across many domains.