-
1
NLP Cloud
NLP Cloud
Unleash AI potential with seamless deployment and customization.
We provide fast, accurate AI models engineered for production use. Our inference API is built for maximum uptime and runs on the latest NVIDIA GPUs for peak performance. We have also curated a diverse set of high-quality open-source natural language processing (NLP) models from the community, making them easy to use in your projects. You can fine-tune your own models, including GPT-J, or upload proprietary models for production deployment. Through a simple dashboard, you can upload or fine-tune AI models and deploy them immediately, without managing memory constraints, uptime, or scalability yourself. You can upload an unlimited number of models and deploy them as needed, keeping pace with your changing requirements.
-
2
AI21 Studio
AI21 Studio
Unlock powerful text generation and comprehension with ease.
AI21 Studio offers API access to its Jurassic-1 large language models, used for text generation and comprehension in a wide range of applications. The Jurassic-1 models follow natural language instructions and need only a handful of examples to adapt to new tasks. Our APIs suit standard tasks such as paraphrasing and summarization, delivering strong results at competitive prices without extensive rework. Fine-tuning a custom model takes just a few clicks; training is fast and inexpensive, and trained models can be deployed immediately. By integrating an AI co-writer into your application, you can give users features such as paraphrasing, long-form drafting, content repurposing, and tailored auto-complete, significantly boosting engagement.
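The "handful of examples" behavior described above is few-shot prompting: worked examples are packed into the prompt itself so the model can infer the task. A minimal sketch of assembling such a prompt — the instruction, examples, and `Input:`/`Output:` layout are illustrative, not AI21's required format:

```python
def build_few_shot_prompt(instruction, examples, query):
    """Assemble a few-shot prompt: an instruction, worked examples,
    then the new input. The exact layout a model expects varies."""
    parts = [instruction, ""]
    for source, target in examples:
        parts.append(f"Input: {source}")
        parts.append(f"Output: {target}")
        parts.append("")
    parts.append(f"Input: {query}")
    parts.append("Output:")  # the model completes from here
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    "Paraphrase the sentence.",
    [("The meeting was postponed.", "The meeting was pushed back."),
     ("He arrived late.", "He didn't arrive on time.")],
    "The results were surprising.",
)
```

The completed prompt is sent as a single string to the completion endpoint; the model continues after the final `Output:`.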
-
3
Gen-2
Runway
Revolutionizing video creation through innovative generative AI technology.
Gen-2: Pushing the Boundaries of Generative AI Innovation.
This multi-modal AI platform generates original videos from text, images, or pre-existing video clips. It can create new video content either by applying the style and composition of a source image or text prompt to the structure of an existing video (Video to Video) or from textual descriptions alone (Text to Video), enabling entirely new visual stories without physical filming. User studies show that Gen-2's results are preferred over conventional methods for both image-to-image and video-to-video transformations. Its ability to combine creativity with technology marks a substantial advance in generative AI and in how visual content can be conceived and produced.
-
4
Jurassic-2
AI21
Unleash limitless innovation with groundbreaking AI capabilities today!
Jurassic-2, the latest generation of AI21 Studio's foundation models, brings a significant leap in quality and new capabilities. Alongside it, we are launching customized APIs for reading and writing tasks. At AI21 Studio, our goal is to enable developers and businesses to build meaningful real-world applications on reading and writing AI, and today's launch of Jurassic-2 and our Task-Specific APIs makes generative AI practical to integrate in production. Commonly referred to as J2, Jurassic-2 shows marked improvements, including better zero-shot instruction-following, reduced latency, and support for multiple languages. Our dedicated APIs give developers tools that excel at targeted reading and writing tasks.
-
5
FLAN-T5
Google
Unlock superior language understanding for diverse applications effortlessly.
FLAN-T5, as presented in the publication "Scaling Instruction-Finetuned Language Models," marks a significant enhancement of the T5 model, having been fine-tuned on a wide array of tasks to bolster its effectiveness. This refinement equips it with a superior ability to comprehend and react to a variety of instructional cues, ultimately leading to improved performance across multiple applications. The model's versatility makes it a valuable tool in fields requiring nuanced language understanding.
-
6
GPT-NeoX
EleutherAI
Empowering large language model training with innovative GPU techniques.
This repository provides an implementation of model-parallel autoregressive transformers on GPUs, built on the DeepSpeed library. It documents EleutherAI's framework for training large language models on GPUs. It currently builds on NVIDIA's Megatron Language Model, adding techniques from DeepSpeed and several novel optimizations. Our objective is to maintain a central repository of techniques for training large-scale autoregressive language models and to accelerate research into large-scale training. By making these resources available, we hope to advance language model research and encourage collaboration among researchers in the field.
-
7
GPT-J
EleutherAI
Unleash advanced language capabilities with unmatched code generation prowess.
GPT-J is a language model created by EleutherAI. Its performance rivals OpenAI's GPT-3 across a range of zero-shot tasks, and it has surpassed GPT-3 in some areas, notably code generation. The released model, GPT-J-6B, was trained on The Pile, a publicly available dataset of 825 gibibytes of language data organized into 22 distinct subsets. While GPT-J shares some characteristics with ChatGPT, its primary focus is text prediction rather than serving as a chatbot. In March 2023, Databricks introduced Dolly, an Apache-licensed instruction-following model, further broadening the set of openly available language models.
-
8
Pythia
EleutherAI
Unlocking knowledge evolution in autoregressive transformer models.
Pythia combines the analysis of interpretability and scaling concepts to enhance our understanding of how knowledge evolves and transforms during the training process of autoregressive transformer models. This methodology not only fosters a more profound comprehension of the learning mechanisms involved but also sheds light on how these models adapt over time. By investigating these elements, Pythia aims to unveil the intricate relationships between data and model performance.
-
9
Stable LM
Stability AI
Revolutionizing language models for efficiency and accessibility globally.
Stable LM builds on Stability AI's earlier open-source work, especially its collaboration with the nonprofit research group EleutherAI, which produced models such as GPT-J, GPT-NeoX, and the Pythia suite, all trained on The Pile open-source dataset; recent models such as Cerebras-GPT and Dolly-2 draw on this foundation. Unlike those earlier models, Stable LM is trained on a new dataset three times the size of The Pile, comprising 1.5 trillion tokens; more details on the dataset will be disclosed soon. This scale lets Stable LM perform well on conversational and programming tasks despite its relatively compact size of 3 to 7 billion parameters, compared with 175 billion for GPT-3. Stable LM 3B is streamlined to run efficiently on portable devices, including laptops and mobile hardware, broadening access to advanced language capabilities and extending the reach of language technologies.
-
10
Dolly
Databricks
Unlock the potential of legacy models with innovative instruction.
Dolly is a cost-effective large language model that exhibits instruction-following capability similar to ChatGPT's. Research from the Alpaca team showed that state-of-the-art models can be trained to follow high-quality instructions much better; our work suggests that even older open-source models can exhibit striking instruction-following behavior when fine-tuned on a small amount of instruction data. By slightly modifying an existing 6-billion-parameter open-source model from EleutherAI, Dolly gains instruction-following skills such as brainstorming and text generation that the base model lacked. This result highlights the untapped potential of older models and invites repurposing of established technologies to meet contemporary needs.
-
11
mT5
Google
Unlock limitless multilingual potential with an adaptable text transformer!
The multilingual T5 (mT5) is an exceptionally adaptable pretrained text-to-text transformer model, created using a methodology similar to that of the original T5. This repository provides essential resources for reproducing the results detailed in the mT5 research publication.
mT5 has undergone training on the vast mC4 corpus, which includes a remarkable 101 languages, such as Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, and many more. This extensive language coverage renders mT5 an invaluable asset for multilingual applications in diverse sectors, enhancing its usefulness for researchers and developers alike.
-
12
Cerebras-GPT
Cerebras
Empowering innovation with open-source, efficient language models.
Developing advanced language models poses considerable hurdles, requiring immense computational power, sophisticated distributed computing methods, and a deep understanding of machine learning. As a result, only a select few organizations undertake the complex endeavor of creating large language models (LLMs) independently. Additionally, many entities equipped with the requisite expertise and resources have started to limit the accessibility of their discoveries, reflecting a significant change from the more open practices observed in recent months.
At Cerebras, we believe in open access to leading-edge models, which is why we are releasing Cerebras-GPT to the open-source community: a family of seven GPT models with parameter counts ranging from 111 million to 13 billion. Trained with the Chinchilla formula, these models achieve high accuracy for a given compute budget. Cerebras-GPT trains faster, at lower cost, and with less energy than any model currently available to the public. Through these releases, we hope to encourage further innovation and collaboration within the machine learning community.
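The "Chinchilla training formula" referenced above pairs model size with a compute-optimal token budget of roughly 20 training tokens per parameter. A back-of-envelope sketch (the 20:1 ratio is an approximation, not an exact prescription):

```python
def chinchilla_tokens(n_params, tokens_per_param=20):
    """Chinchilla-style compute-optimal token budget: roughly 20
    training tokens per model parameter."""
    return n_params * tokens_per_param

# Rough budgets for the smallest and largest Cerebras-GPT models.
small = chinchilla_tokens(111e6)  # ~2.2 billion tokens for the 111M model
large = chinchilla_tokens(13e9)   # ~260 billion tokens for the 13B model
```

Training each model in the family to its own budget is what keeps the whole lineup compute-efficient, rather than over- or under-training any single size.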
-
13
Falcon-40B
Technology Innovation Institute (TII)
Unlock powerful AI capabilities with this leading open-source model.
Falcon-40B is a decoder-only model boasting 40 billion parameters, created by TII and trained on a massive dataset of 1 trillion tokens from RefinedWeb, along with other carefully chosen datasets. It is shared under the Apache 2.0 license, making it accessible for various uses.
Why should you consider utilizing Falcon-40B?
This model distinguishes itself as the premier open-source choice currently available, outpacing rivals such as LLaMA, StableLM, RedPajama, and MPT, as highlighted by its position on the OpenLLM Leaderboard.
Its architecture is optimized for efficient inference and incorporates advanced features like FlashAttention and multiquery functionality, enhancing its performance.
Additionally, the flexible Apache 2.0 license allows for commercial utilization without the burden of royalties or limitations.
It's essential to recognize that this model is in its raw, pretrained state and is typically recommended to be fine-tuned to achieve the best results for most applications. For those seeking a version that excels in managing general instructions within a conversational context, Falcon-40B-Instruct might serve as a suitable alternative worth considering.
Overall, Falcon-40B represents a formidable tool for developers looking to leverage cutting-edge AI technology in their projects.
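The multiquery feature mentioned above shares a single key/value head across all query heads, which shrinks the per-sequence KV cache that dominates inference memory. A hedged arithmetic sketch — the layer/head dimensions below are illustrative, not Falcon's exact configuration:

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    """Size of the K and V caches for one sequence: two tensors of shape
    (n_layers, n_kv_heads, seq_len, head_dim), assuming fp16 elements."""
    return 2 * n_layers * n_kv_heads * seq_len * head_dim * bytes_per_elem

# Illustrative dimensions: 60 layers, 64 query heads of size 128,
# 2048-token context.
mha = kv_cache_bytes(60, 64, 128, 2048)  # one K/V head per query head
mqa = kv_cache_bytes(60, 1, 128, 2048)   # a single shared K/V head
ratio = mha // mqa                       # cache shrinks by the head count
```

With 64 query heads, multiquery cuts the KV cache 64x, which is why it matters for serving many concurrent sequences.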
-
14
Falcon-7B
Technology Innovation Institute (TII)
Unmatched performance and flexibility for advanced machine learning.
The Falcon-7B model is a causal decoder-only architecture with a total of 7 billion parameters, created by TII, and trained on a vast dataset consisting of 1,500 billion tokens from RefinedWeb, along with additional carefully curated corpora, all under the Apache 2.0 license.
What are the benefits of using Falcon-7B?
This model outperforms comparable open-source options such as MPT-7B, StableLM, and RedPajama, thanks largely to its training on 1,500 billion tokens from RefinedWeb, supplemented by carefully selected corpora, as reflected in its ranking on the OpenLLM Leaderboard.
Furthermore, it features an architecture optimized for rapid inference, utilizing advanced technologies such as FlashAttention and multiquery strategies.
In addition, the flexibility offered by the Apache 2.0 license allows users to pursue commercial ventures without worrying about royalties or stringent constraints.
This blend of high performance and permissive licensing makes Falcon-7B a strong option for developers seeking sophisticated modeling capabilities in a rapidly evolving machine learning landscape.
-
15
RedPajama
RedPajama
Empowering innovation through fully open-source AI technology.
Foundation models, such as GPT-4, have propelled the field of artificial intelligence forward at an unprecedented pace; however, the most sophisticated models continue to be either restricted or only partially available to the public. To counteract this issue, the RedPajama initiative is focused on creating a suite of high-quality, completely open-source models. We are excited to share that we have successfully finished the first stage of this project: the recreation of the LLaMA training dataset, which encompasses over 1.2 trillion tokens.
At present, a significant portion of leading foundation models is confined within commercial APIs, which limits opportunities for research and customization, especially when dealing with sensitive data. The pursuit of fully open-source models may offer a viable remedy to these constraints, on the condition that the open-source community can enhance the quality of these models to compete with their closed counterparts. Recent developments have indicated that there is encouraging progress in this domain, hinting that the AI sector may be on the brink of a revolutionary shift similar to what was seen with the introduction of Linux. The success of Stable Diffusion highlights that open-source alternatives can not only compete with high-end commercial products like DALL-E but also foster extraordinary creativity through the collaborative input of various communities. By nurturing a thriving open-source ecosystem, we can pave the way for new avenues of innovation and ensure that access to state-of-the-art AI technology is more widely available, ultimately democratizing the capabilities of artificial intelligence for all users.
-
16
Vicuna
lmsys.org
Revolutionary AI model: Affordable, high-performing, and open-source innovation.
Vicuna-13B is a conversational AI created by fine-tuning LLaMA on user dialogues collected from ShareGPT. Preliminary evaluations using GPT-4 as a judge suggest that Vicuna-13B reaches over 90% of the quality of OpenAI's ChatGPT and Google Bard, and outperforms models such as LLaMA and Stanford Alpaca in more than 90% of tested cases. Training Vicuna-13B cost an estimated $300, which is remarkably economical for a model of its caliber. The model's code and weights are publicly available under non-commercial licenses, allowing users to study and build on its capabilities across a variety of applications.
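The "over 90%" figure comes from GPT-4-as-judge evaluation: a judge model scores both systems' answers per question, and the candidate's total is divided by the reference's total. A sketch of that relative score — the per-question scores below are hypothetical, not the actual evaluation data:

```python
def relative_quality(model_scores, reference_scores):
    """Relative quality as used in judge-based evaluations: the candidate's
    total score divided by the reference model's total score."""
    return sum(model_scores) / sum(reference_scores)

# Hypothetical per-question scores (1-10) assigned by a judge model.
candidate = [8, 9, 7, 8, 9]
reference = [9, 9, 8, 9, 10]
score = relative_quality(candidate, reference)  # ~0.91, i.e. >90%
```

Judge-based scores are a rough proxy for quality, not a controlled benchmark, which is why the Vicuna authors describe their evaluation as preliminary.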
-
17
MPT-7B
MosaicML
Unlock limitless AI potential with cutting-edge transformer technology!
We are thrilled to introduce MPT-7B, the latest model in the MosaicML Foundation Series. This transformer model has been carefully developed from scratch, utilizing 1 trillion tokens of varied text and code during its training. It is accessible as open-source software, making it suitable for commercial use and achieving performance levels comparable to LLaMA-7B. The entire training process was completed in just 9.5 days on the MosaicML platform, with no human intervention, and incurred an estimated cost of $200,000.
With MPT-7B, users can train, fine-tune, and deploy their own MPT models, starting either from one of our existing checkpoints or from scratch. Alongside the base model, we are releasing three specialized variants: MPT-7B-Instruct, MPT-7B-Chat, and MPT-7B-StoryWriter-65k+, the last of which supports a context length of 65,000 tokens for generating long-form content. These variants broaden what developers and researchers can build with transformer models across a wide range of applications.
-
18
OpenLLaMA
OpenLLaMA
Versatile AI models tailored for your unique needs.
OpenLLaMA is a freely available version of Meta AI's LLaMA 7B, crafted using the RedPajama dataset. The model weights provided can easily substitute the LLaMA 7B in existing applications. Furthermore, we have also developed a streamlined 3B variant of the LLaMA model, catering to users who prefer a more compact option. This initiative enhances user flexibility by allowing them to select the most suitable model according to their particular requirements, thus accommodating a wider range of applications and use cases.
-
19
GPT4All
Nomic AI
Empowering innovation through accessible, community-driven AI solutions.
GPT4All is a system for training and deploying large language models that run effectively on typical consumer-grade CPUs. Its goal is to be the best instruction-tuned assistant language model that individuals and businesses can freely access, share, and build upon. GPT4All models range from 3GB to 8GB in size and are easily downloaded and integrated into the open-source GPT4All ecosystem, which Nomic AI maintains to ensure quality, security, and accessibility for anyone training and deploying edge language models. Data is a fundamental ingredient of a strong general-purpose model, so the GPT4All community maintains an open-source data lake where users contribute instruction and assistant tuning data that improves future training of GPT4All models. This initiative encourages active participation in the development process and sustains a community focused on accessible language technologies.
-
20
Baichuan-13B
Baichuan Intelligent Technology
Unlock limitless potential with cutting-edge bilingual language technology.
Baichuan-13B is a powerful language model featuring 13 billion parameters, created by Baichuan Intelligent as both an open-source and commercially accessible option, and it builds on the previous Baichuan-7B model. This new iteration has excelled in key benchmarks for both Chinese and English, surpassing other similarly sized models in performance. It offers two different pre-training configurations: Baichuan-13B-Base and Baichuan-13B-Chat.
Baichuan-13B scales the parameter count to 13 billion on the groundwork laid by Baichuan-7B and was trained on 1.4 trillion tokens from high-quality datasets, 40% more training data than LLaMA-13B, making it the most extensively trained open-source model in the 13B parameter class. It is bilingual, supporting both Chinese and English, uses ALiBi positional encoding, and has a context window of 4096 tokens, giving it the flexibility needed for a wide range of natural language processing tasks.
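ALiBi, mentioned above, replaces positional embeddings with a linear penalty on attention scores proportional to query-key distance, with one slope per head. A minimal sketch of the slope and bias computation for head counts that are powers of two:

```python
def alibi_slopes(n_heads):
    """ALiBi head slopes for a power-of-two head count: the geometric
    sequence 2^(-8/n), 2^(-16/n), ..., 2^(-8)."""
    return [2 ** (-8.0 * (i + 1) / n_heads) for i in range(n_heads)]

def alibi_bias(slope, seq_len):
    """Per-head additive bias: -slope * (query position - key position),
    computed only for keys a causal query can attend to."""
    return [[-slope * (q - k) for k in range(q + 1)] for q in range(seq_len)]

slopes = alibi_slopes(8)         # [0.5, 0.25, ..., 2**-8]
bias = alibi_bias(slopes[0], 4)  # row q holds biases for keys 0..q
```

Because the penalty is defined for any distance, ALiBi-trained models tend to extrapolate to sequences longer than those seen in training, one reason it suits long-context use.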
-
21
Stable Beluga
Stability AI
Unleash powerful reasoning with cutting-edge, open access AI.
Stability AI, in collaboration with its CarperAI lab, introduces Stable Beluga 1 and its enhanced successor Stable Beluga 2 (formerly called FreeWilly), two new Large Language Models (LLMs) now openly accessible. Both demonstrate exceptional reasoning across a diverse array of benchmarks. Stable Beluga 1 builds on the foundational LLaMA 65B model and was carefully fine-tuned with Supervised Fine-Tuning (SFT) on a new synthetically generated dataset in the standard Alpaca format; Stable Beluga 2 builds on the LLaMA 2 70B model, further raising performance standards. Their release marks a major step in the progression of open-access AI.
-
22
ChatGLM
Zhipu AI
Empowering seamless bilingual dialogues with cutting-edge AI technology.
ChatGLM-6B is a Chinese-English dialogue model built on the General Language Model (GLM) architecture with 6.2 billion parameters. With model quantization, it runs on typical consumer graphics cards, requiring only 6GB of video memory at the INT4 quantization level. It uses techniques similar to ChatGPT's but is specifically optimized for dialogue in Chinese. Trained on roughly 1 trillion tokens across both languages and refined with supervised fine-tuning, self-guided feedback, and reinforcement learning from human feedback, ChatGLM-6B generates responses that align well with user intent, making it a practical asset for bilingual communication in multilingual environments.
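The 6GB figure above follows from simple quantization arithmetic: weight memory scales with bits per parameter. A back-of-envelope sketch (weights only; the runtime total also includes activations and the KV cache, which is how ~3.1 GB of INT4 weights ends up near the quoted 6GB):

```python
def weight_memory_gb(n_params, bits_per_param):
    """Memory for model weights alone, ignoring activations and KV cache."""
    return n_params * bits_per_param / 8 / 1e9

fp16 = weight_memory_gb(6.2e9, 16)  # ~12.4 GB at half precision
int4 = weight_memory_gb(6.2e9, 4)   # ~3.1 GB quantized
```

The same arithmetic explains why INT4 quantization is what brings a 6.2B-parameter model within reach of consumer GPUs that half-precision weights alone would exceed.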
-
23
PygmalionAI
PygmalionAI
Empower your dialogues with cutting-edge, open-source AI!
PygmalionAI is a community dedicated to open-source projects built on EleutherAI's GPT-J 6B and Meta's LLaMA models. In essence, Pygmalion creates AI for interactive dialogue and roleplay. The actively maintained Pygmalion model is currently the 7B variant, based on Meta AI's LLaMA framework. Requiring as little as 18GB of VRAM (or less), Pygmalion delivers chat quality that rivals much larger language models while remaining resource-efficient. A carefully curated dataset of high-quality roleplaying material helps the model perform well across a variety of roleplay scenarios. Both the model weights and the training code are fully open-source, free to modify and share. Like most language models, Pygmalion is designed to run on GPUs, which provide the fast memory access and compute needed to generate coherent text responsively.
-
24
LongLLaMA
LongLLaMA
Revolutionizing long-context tasks with groundbreaking language model innovation.
This repository contains the research preview of LongLLaMA, a large language model capable of handling contexts of up to 256,000 tokens or more. LongLLaMA is built on the OpenLLaMA framework and fine-tuned with the Focused Transformer (FoT) method; the foundational code comes from Code Llama. We are releasing a smaller 3B base variant (not instruction-tuned) under the Apache 2.0 open license, along with inference code on Hugging Face that supports longer contexts. The weights also drop into existing pipelines built for shorter contexts of up to 2048 tokens. We additionally provide evaluation results and comparisons against the original OpenLLaMA models, giving a thorough view of LongLLaMA's effectiveness on long-context tasks.
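Long contexts are hard because a naive attention-score matrix grows quadratically with sequence length, which is the scaling pressure methods like FoT work around. A back-of-envelope comparison at 2048 versus 256,000 tokens (the head count is illustrative, not LongLLaMA's configuration):

```python
def attn_matrix_bytes(seq_len, n_heads, bytes_per_elem=2):
    """Memory for one layer's raw attention-score matrix
    (n_heads x seq_len x seq_len), assuming fp16 scores."""
    return n_heads * seq_len * seq_len * bytes_per_elem

# Illustrative head count of 32.
short = attn_matrix_bytes(2048, 32)
long = attn_matrix_bytes(256_000, 32)
growth = long / short  # (256000/2048)^2 = 15625x more memory
```

A 125x longer context costs 15625x the score memory per layer under naive attention, which is why long-context methods avoid materializing the full matrix.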
-
25
Grok
xAI
Engage your mind with witty, real-time AI insights!
Grok is an artificial intelligence inspired by the Hitchhiker’s Guide to the Galaxy, designed to answer a diverse range of questions and to prompt users to think critically with stimulating questions of its own. It answers with humor and a touch of irreverence, which makes it unsuitable for those who prefer a serious tone. A notable characteristic of Grok is its access to live data via the 𝕏 platform, enabling it to address daring and unconventional queries that other AI systems may avoid, and ensuring answers that are both timely and engaging. This makes Grok a distinctive option for users seeking a blend of entertainment and information from their AI.