List of Roo Code Integrations

This is a list of platforms and tools that integrate with Roo Code, current as of January 2026.

  • 1. Visual Studio Code
    Microsoft
    Empower your coding experience with seamless integration and customization.
    Visual Studio Code (VS Code) is an advanced, open-source AI-integrated development environment (IDE) built for productivity, creativity, and collaboration. Designed by Microsoft, it unites code editing, debugging, version control, and AI automation in a single cross-platform workspace. Developers can code in nearly any language—including JavaScript, TypeScript, Python, C++, C#, Java, PHP, and more—using built-in tools or specialized extensions. Its AI features, such as Agent Mode and Next Edit Suggestions, transform VS Code into an autonomous assistant capable of reading entire codebases, suggesting edits, and resolving compile or test issues automatically. With MCP (Model Context Protocol) and multi-model support, developers can use AI models like Claude Sonnet, GPT-4, or Gemini—or connect their own keys via OpenRouter or Azure OpenAI. Deep GitHub integration, remote development, and cloud portability make collaboration effortless across global teams. VS Code’s extensive marketplace offers tens of thousands of extensions—from Docker to Jupyter and Stripe—empowering developers to build and deploy in any environment. Custom profiles, themes, and settings sync allow for a personalized workspace across devices. Its integrated terminal, local history, and debugging tools ensure smooth workflows from prototype to production. Whether building enterprise software, web apps, or AI agents, VS Code redefines what a modern coding experience should feel like: intelligent, fast, and universally accessible.
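
    As a rough illustration of the "bring your own key" pattern mentioned above, here is a minimal sketch that calls a model through OpenRouter's OpenAI-compatible endpoint; the model slug is illustrative, not prescriptive.

    ```python
    # Minimal sketch: routing a chat request through OpenRouter with a
    # user-supplied key, the same pattern VS Code AI extensions can use.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",  # OpenRouter's OpenAI-compatible endpoint
        api_key="YOUR_OPENROUTER_API_KEY",
    )

    response = client.chat.completions.create(
        model="anthropic/claude-sonnet-4",  # illustrative model slug
        messages=[{"role": "user", "content": "Explain what this stack trace means."}],
    )
    print(response.choices[0].message.content)
    ```
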
  • 2. OpenAI
    OpenAI
    Empowering innovation through advanced, safe language-based AI solutions.
    OpenAI is committed to ensuring that artificial general intelligence (AGI), meaning highly autonomous systems that outperform humans at most economically valuable work, benefits all of humanity. Its primary goal is to create AGI that is both safe and beneficial, though it also considers its mission a success if it empowers others to reach the same objective. The API can be used for numerous language-based functions, such as semantic search, summarization, sentiment analysis, content generation, and translation, all achievable with just a few examples or a clear instruction in English. A simple integration gives you access to continually improving AI technology, enabling you to test the API's features and uncover a wide array of potential uses. As you explore, you may find innovative ways to harness this technology for your projects or business needs.
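
    A minimal sketch of one language task described above (summarization) using the official OpenAI Python SDK; the model name is illustrative.

    ```python
    # Minimal sketch: summarization with the OpenAI Python SDK.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": "Summarize the user's text in one sentence."},
            {"role": "user", "content": "The Model Context Protocol standardizes how ..."},
        ],
    )
    print(response.choices[0].message.content)
    ```
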
  • 3. Grok Code Fast 1
    xAI
    Experience lightning-fast coding efficiency at unbeatable prices!
    Grok Code Fast 1 is the latest model in the Grok family, engineered to deliver fast, economical, and developer-friendly performance for agentic coding. Recognizing the inefficiencies of slower reasoning models, the team at xAI built it from the ground up with a fresh architecture and a dataset tailored to software engineering. Its training corpus combines programming-heavy pre-training with real-world code reviews and pull requests, ensuring strong alignment with actual developer workflows. The model demonstrates versatility across the development stack, excelling at TypeScript, Python, Java, Rust, C++, and Go. In performance tests, it consistently outpaces competitors at up to 190 tokens per second, backed by caching optimizations that achieve over 90% hit rates. Integration with launch partners such as GitHub Copilot, Cursor, Cline, and Roo Code makes it instantly accessible for everyday coding tasks. Grok Code Fast 1 supports everything from building new applications to answering complex codebase questions, automating repetitive edits, and resolving bugs in record time. The cost structure is designed to maximize accessibility, at $0.20 per million input tokens and $1.50 per million output tokens. Real-world human evaluations complement benchmark scores, confirming that the model performs reliably in day-to-day software engineering. For developers, teams, and platforms, Grok Code Fast 1 blends speed, affordability, and practical coding intelligence.
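
    Because xAI exposes an OpenAI-compatible API, calling the model can look like the sketch below; treat the endpoint and model id as assumptions to verify against xAI's documentation.

    ```python
    # Minimal sketch: calling Grok Code Fast 1 through xAI's
    # OpenAI-compatible API. Endpoint and model id are assumptions.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.x.ai/v1",  # assumed xAI endpoint
        api_key="YOUR_XAI_API_KEY",
    )

    response = client.chat.completions.create(
        model="grok-code-fast-1",  # assumed model id
        messages=[{"role": "user", "content": "Write a Go function that parses RFC 3339 timestamps."}],
    )
    print(response.choices[0].message.content)
    ```
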
  • 4. GLM-4.7
    Zhipu AI
    Elevate your coding and reasoning with unmatched performance!
    GLM-4.7 is an advanced AI model engineered to push the boundaries of coding, reasoning, and agent-based workflows. It delivers clear performance gains across software engineering benchmarks, terminal automation, and multilingual coding tasks. GLM-4.7 improves stability through interleaved thinking, preserved thinking, and turn-level thinking control, enabling better long-horizon task execution. The model is optimized for modern coding agents, making it suitable for real-world development environments. It also improves creative and frontend output, generating cleaner user interfaces and more visually accurate slides. Its tool-using abilities have been significantly strengthened, allowing it to interact with browsers, APIs, and automation systems more reliably. Advanced reasoning improvements yield better performance on mathematical and logic-heavy tasks. GLM-4.7 supports flexible deployment, including cloud APIs and local inference, and is compatible with popular inference frameworks such as vLLM and SGLang, so developers can integrate it into existing workflows with minimal configuration changes. Its pricing offers high performance at a fraction of the cost of comparable coding models, and the model is designed to feel like a dependable coding partner rather than just a benchmark-optimized system.
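
    A minimal sketch of local inference with vLLM, one of the frameworks the description names as compatible; the Hugging Face repo id is an assumption, so check the official release.

    ```python
    # Minimal sketch: serving GLM-4.7 locally with vLLM.
    from vllm import LLM, SamplingParams

    llm = LLM(model="zai-org/GLM-4.7")  # assumed repo id; verify against the release
    params = SamplingParams(temperature=0.7, max_tokens=256)

    outputs = llm.generate(["Write a Python function that reverses a linked list."], params)
    print(outputs[0].outputs[0].text)
    ```
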
  • 5. Model Context Protocol (MCP)
    Anthropic
    Seamless integration for powerful AI workflows and data management.
    The Model Context Protocol (MCP) serves as a versatile and open-source framework designed to enhance the interaction between artificial intelligence models and various external data sources. By facilitating the creation of intricate workflows, it allows developers to connect large language models (LLMs) with databases, files, and web services, thereby providing a standardized methodology for AI application development. With its client-server architecture, MCP guarantees smooth integration, and its continually expanding array of integrations simplifies the process of linking to different LLM providers. This protocol is particularly advantageous for developers aiming to construct scalable AI agents while prioritizing robust data security measures. Additionally, MCP's flexibility caters to a wide range of use cases across different industries, making it a valuable tool in the evolving landscape of AI technologies.
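
    A minimal sketch of the server side of the protocol, using the FastMCP helper from the official MCP Python SDK to expose a single tool that a connected LLM client can discover and call.

    ```python
    # Minimal sketch: an MCP server exposing one tool over stdio.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("demo-server")

    @mcp.tool()
    def add(a: int, b: int) -> int:
        """Add two integers and return the sum."""
        return a + b

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default; clients connect via their MCP config
    ```
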
  • 6. GLM-4.6
    Zhipu AI
    Empower your projects with enhanced reasoning and coding capabilities.
    GLM-4.6 builds on the groundwork established by its predecessor, offering stronger reasoning, coding, and agent capabilities: greater inferential precision, better tool use during reasoning, and smoother incorporation into agent architectures. In extensive benchmark assessments of reasoning, coding, and agent performance, GLM-4.6 outperforms GLM-4.5 and holds its own against competitive models such as DeepSeek-V3.2-Exp and Claude Sonnet 4, though it still trails Claude Sonnet 4.5 in coding proficiency. In practical testing on the comprehensive "CC-Bench" suite, which covers front-end development, tool creation, data analysis, and algorithmic challenges, GLM-4.6 beats GLM-4.5 and reaches near parity with Claude Sonnet 4, winning around 48.6% of direct matchups with an approximately 15% improvement in token efficiency. The model is available via the Z.ai API, where developers can use it either as an LLM backend or as the core component of an agent within the platform's API ecosystem, making it a compelling choice for developers eager to adopt the latest advancements in AI.
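
    A minimal sketch of reaching GLM-4.6 through the Z.ai API with an OpenAI-compatible client; the base URL and model id are assumptions to confirm in Z.ai's documentation.

    ```python
    # Minimal sketch: GLM-4.6 as an LLM backend via Z.ai's API.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.z.ai/api/paas/v4",  # assumed endpoint
        api_key="YOUR_ZAI_API_KEY",
    )

    response = client.chat.completions.create(
        model="glm-4.6",  # assumed model id
        messages=[{"role": "user", "content": "Refactor this function for readability: ..."}],
    )
    print(response.choices[0].message.content)
    ```
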
  • 7. GLM-4.6V
    Zhipu AI
    Empowering seamless vision-language interactions with advanced reasoning capabilities.
    GLM-4.6V is a sophisticated, open-source multimodal vision-language model in the Z.ai GLM-V series, designed for tasks that involve reasoning, perception, and actionable outcomes. It comes in two configurations: a full-featured version with 106 billion parameters, suited to cloud or high-performance computing setups, and a more efficient "Flash" version with 9 billion parameters, optimized for local use or low-latency scenarios. With a native context window of up to 128,000 tokens, GLM-4.6V handles large documents and mixed multimodal inputs well. The model accepts visual media directly, including images, screenshots, and documents, without manual text conversion, and its integrated function calling lets it make tool calls based on what it sees, bridging visual perception with practical action. This adaptability enables applications such as generating combined image-and-text content, document understanding with text summarization, and responses that incorporate image annotations, significantly improving user engagement and output quality.
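
    A minimal sketch of sending an image directly to the model through an OpenAI-compatible chat API, in line with the multimodal input described above; the endpoint and model id are assumptions.

    ```python
    # Minimal sketch: an image-plus-text request to GLM-4.6V.
    from openai import OpenAI

    client = OpenAI(base_url="https://api.z.ai/api/paas/v4", api_key="YOUR_ZAI_API_KEY")

    response = client.chat.completions.create(
        model="glm-4.6v",  # assumed model id
        messages=[{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
                {"type": "text", "text": "Summarize the trend this chart shows."},
            ],
        }],
    )
    print(response.choices[0].message.content)
    ```
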
  • 8. GLM-4.1V
    Zhipu AI
    Unleashing powerful multimodal reasoning for diverse applications.
    GLM-4.1V is a cutting-edge vision-language model that provides efficient multimodal reasoning over different types of media, such as images, text, and documents. The 9-billion-parameter variant, GLM-4.1V-9B-Thinking, is built on the GLM-4-9B foundation and refined with a training method called Reinforcement Learning with Curriculum Sampling (RLCS). With a context window of 64k tokens and support for high-resolution images up to 4K at any aspect ratio, the model handles complex tasks such as optical character recognition, image captioning, chart and document parsing, video analysis, scene understanding, and GUI-agent workflows, including interpreting screenshots and identifying UI components. In benchmark evaluations at the 10B-parameter scale, GLM-4.1V-9B-Thinking secured top performance in 23 of the 28 tasks assessed, marking significant progress in fusing visual and textual information and setting a new benchmark for multimodal models across a variety of applications.
  • 9. GLM-4.5V-Flash
    Zhipu AI
    Efficient, versatile vision-language model for real-world tasks.
    GLM-4.5V-Flash is an open-source vision-language model designed to seamlessly integrate powerful multimodal capabilities into a streamlined and deployable format. This versatile model supports a variety of input types including images, videos, documents, and graphical user interfaces, enabling it to perform numerous functions such as scene comprehension, chart and document analysis, screen reading, and image evaluation. Unlike larger models, GLM-4.5V-Flash boasts a smaller size yet retains crucial features typical of visual language models, including visual reasoning, video analysis, GUI task management, and intricate document parsing. Its application within "GUI agent" frameworks allows the model to analyze screenshots or desktop captures, recognize icons or UI elements, and facilitate both automated desktop and web activities. Although it may not reach the performance levels of the most extensive models, GLM-4.5V-Flash offers remarkable adaptability for real-world multimodal tasks where efficiency, lower resource demands, and broad modality support are vital. Ultimately, its innovative design empowers users to leverage sophisticated capabilities while ensuring optimal speed and easy access for various applications. This combination makes it an appealing choice for developers seeking to implement multimodal solutions without the overhead of larger systems.
  • 10. GLM-4.5V
    Zhipu AI
    Revolutionizing multimodal intelligence with unparalleled performance and versatility.
    GLM-4.5V is a significant advancement over its predecessor, GLM-4.5-Air, featuring a Mixture-of-Experts (MoE) architecture with 106 billion total parameters, of which 12 billion are active per forward pass. It stands out among open-source vision-language models (VLMs) of similar scale, excelling across 42 public benchmarks spanning images, videos, documents, and GUI interactions. Its multimodal capabilities cover image reasoning tasks such as scene understanding, spatial recognition, and multi-image analysis, as well as video comprehension challenges such as segmentation and event recognition. It is also proficient at deciphering intricate charts and lengthy documents, supports GUI-agent workflows through screen reading and desktop automation, and provides precise visual grounding by identifying objects and producing bounding boxes. A "Thinking Mode" switch lets users choose between quick responses and more deliberate reasoning depending on the situation, underscoring the model's adaptability to diverse requirements in both research and practical environments.
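
    A minimal sketch of toggling the "Thinking Mode" switch via an extra request parameter; the `thinking` field and its shape are assumptions modeled on Z.ai's parameter conventions, so verify them against the official docs.

    ```python
    # Minimal sketch: enabling deliberate reasoning on GLM-4.5V.
    from openai import OpenAI

    client = OpenAI(base_url="https://api.z.ai/api/paas/v4", api_key="YOUR_ZAI_API_KEY")

    response = client.chat.completions.create(
        model="glm-4.5v",  # assumed model id
        messages=[{"role": "user", "content": "How many distinct rooms does this floor plan contain?"}],
        extra_body={"thinking": {"type": "enabled"}},  # assumed toggle; "disabled" for quick replies
    )
    print(response.choices[0].message.content)
    ```
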
  • 11. MiniMax-M2.1
    MiniMax
    Empowering innovation: Open-source AI for intelligent automation.
    MiniMax-M2.1 is a high-performance, open-source agentic language model designed for modern development and automation needs. It was created to challenge the idea that advanced AI agents must remain proprietary. The model is optimized for software engineering, tool usage, and long-horizon reasoning tasks. MiniMax-M2.1 performs strongly in multilingual coding and cross-platform development scenarios. It supports building autonomous agents capable of executing complex, multi-step workflows. Developers can deploy the model locally, ensuring full control over data and execution. The architecture emphasizes robustness, consistency, and instruction accuracy. MiniMax-M2.1 demonstrates competitive results across industry-standard coding and agent benchmarks. It generalizes well across different agent frameworks and inference engines. The model is suitable for full-stack application development, automation, and AI-assisted engineering. Open weights allow experimentation, fine-tuning, and research. MiniMax-M2.1 provides a powerful foundation for the next generation of intelligent agents.
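
    A minimal sketch of running open weights locally with Hugging Face transformers, in the spirit of the local deployment described above; the repo id is an assumption.

    ```python
    # Minimal sketch: local generation with an open-weights model.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo = "MiniMaxAI/MiniMax-M2.1"  # assumed repo id; substitute the official weights
    tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto", trust_remote_code=True)

    inputs = tokenizer("Outline the steps to add pagination to a REST API:", return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=200)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    ```
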
  • 12. VSCodium
    VSCodium
    Experience seamless coding freedom without telemetry or complications.
    The source code for Microsoft's Visual Studio Code (VS Code) is released as open source under the MIT license, but the downloadable product is distributed under a different license that is not free and open-source software (FLOSS) and includes telemetry. The VSCodium project was established in response, sparing users the complexity of building from source by providing ready-to-use binaries. Its build scripts retrieve Microsoft's VS Code repository, perform the required build operations, and upload the resulting binaries to GitHub releases, ensuring that they are MIT-licensed and free of telemetry. On macOS Mojave, if you see the warning "App can't be opened because Apple cannot check it for malicious software" when opening VSCodium for the first time, simply right-click the application and choose Open; this step is only needed on first launch. Extensive documentation is available for anyone switching from Visual Studio Code, along with resources for troubleshooting issues that may come up while using VSCodium, emphasizing user choice and transparency in software usage.
  • 13. Requesty
    Requesty
    Optimize AI workloads with intelligent routing and efficiency.
    Requesty is a cutting-edge platform designed to optimize AI workloads by intelligently routing requests to the most appropriate model for each individual task. It features advanced functionalities such as automatic fallback systems and efficient queuing mechanisms, ensuring uninterrupted service availability even when some models may be out of service temporarily. With support for a wide range of models, including GPT-4, Claude 3.5, and DeepSeek, Requesty also offers observability for AI applications, allowing users to track model performance and adjust their application usage for maximum effectiveness. By reducing API costs and enhancing operational efficiency, Requesty empowers developers with the necessary tools to build more intelligent and reliable AI solutions. This platform not only fine-tunes performance but also encourages innovation within the AI landscape, creating opportunities for the development of transformative applications. As a result, developers can push the boundaries of what AI can achieve, leading to more sophisticated and impactful technologies.
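
    A minimal sketch of the routing pattern: point an OpenAI-compatible client at the router and let it dispatch to an underlying model; the base URL and model identifier are assumptions to check against Requesty's documentation.

    ```python
    # Minimal sketch: one client, many models, via a routing layer.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://router.requesty.ai/v1",  # assumed router endpoint
        api_key="YOUR_REQUESTY_API_KEY",
    )

    response = client.chat.completions.create(
        model="openai/gpt-4o",  # assumed routed model identifier
        messages=[{"role": "user", "content": "Summarize the benefits of model routing."}],
    )
    print(response.choices[0].message.content)
    ```
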