Cursor 2.0 Introduces In-House Coding Model and Enhanced Features

Cursor 2.0 ships the company's first in-house coding model, Composer, which it claims runs four times faster than comparable models, alongside new multi-agent capabilities.

Major Upgrade to Cursor 2.0

Cursor has received a significant upgrade with the release of version 2.0!

This time, Cursor has launched its first in-house coding model, Composer.

Composer operates at speeds four times faster than comparable models.

Cursor claims this model is designed for low-latency intelligent coding, with most tasks completed in under 30 seconds.

[Image 1]

In the Speed section, Composer reaches 200 tokens/second.

In addition to the in-house model, Cursor has reworked its interaction logic, introducing a multi-agent mode that lets up to eight agents run in parallel on a single prompt. To prevent file conflicts, each agent works in its own git worktree or on a remote machine.
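The worktree mechanism behind this is standard git. The sketch below (illustrative only, not Cursor's actual implementation, and using made-up branch names) shows how each agent can get its own branch and working directory from one repository, so parallel edits never collide:

```python
# Sketch: give each of several parallel agents an isolated git worktree.
# Illustrative only; not Cursor's actual implementation.
import os
import subprocess
import tempfile

def git(args, cwd):
    subprocess.run(["git", *args], cwd=cwd, check=True, capture_output=True)

base = tempfile.mkdtemp()
repo = os.path.join(base, "main")
os.makedirs(repo)
git(["init", "-q"], repo)
git(["-c", "user.email=agent@example.com", "-c", "user.name=agent",
     "commit", "--allow-empty", "-q", "-m", "init"], repo)

# Each agent gets its own branch and working directory off the same repo.
agent_dirs = []
for i in range(3):
    path = os.path.join(base, f"agent-{i}")
    git(["worktree", "add", "-q", "-b", f"agent-{i}", path], repo)
    agent_dirs.append(path)
```

Because every worktree is a full checkout on its own branch, each agent's changes stay separate until a human (or a merge step) reconciles them.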

[Image 2]

Version 2.0 also integrates a browser inside the editor, which is very convenient for front-end development.

Users can directly select elements and forward DOM information to Cursor.

[Image 3]

In hands-on tests, front-end developers can select an element in the browser and Cursor automatically locates the corresponding code.

[Image 4]

This update also introduces a new code review feature, making it easier to view all the changes agents make across multiple files without jumping back and forth.

[Image 5]

Another significant addition is Voice Mode, which enables programming through voice commands.

[Image 6]

Additionally, prompts can now be copied and pasted together with their marked context, and many explicit items have been removed from the context menu, including @Definitions, @Web, @Link, @Recent Changes, and @Linter Errors.

Agents can now self-collect context without needing to manually attach it in the prompt input.

[Image 7]

Cursor has long been dismissed as a wrapper: despite being valued at $10 billion, it had not released a model of its own until now.

Previously, Cursor was constrained by Claude and its pricing, with most of its revenue flowing to model vendors such as Anthropic.

This dependence on external models limited the company's room to innovate and left it with higher costs and thinner margins.

The release of Composer signifies Cursor’s entry into the AI battle with its own models.

As commenters have pointed out, a company valued at $10 billion cannot remain just a wrapper app.

[Image 8]

At the recent GTC 2025 conference, Jensen Huang singled out Cursor:

“At NVIDIA, every software engineer uses Cursor. It acts as everyone’s programming partner, helping generate code and significantly boosting productivity.”

Building a Programming Model with Reinforcement Learning

In the world of software development, speed and intelligence are eternal pursuits.

The experience of using Cursor Tab (the in-house completion model) speaks volumes.

Many prominent figures on X have said their favorite feature is still Cursor Tab: they want a model that instantly grasps their coding intent and completes the task quickly.

In other words, they want both speed and intelligence.

[Image 9]

Developers want models that are both intelligent and fast enough for interactive use, so they can stay focused and keep coding fluidly.

Cursor first experimented with a prototype model codenamed Cheetah. Composer is a smarter successor to that prototype, fast enough to keep the experience interactive and coding enjoyable and smooth.

Composer aims to deliver both speed and intelligence.

Composer is a mixture-of-experts (MoE) language model that supports long-context generation and understanding.
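Composer's internals are not public beyond "MoE", but the core idea of such a layer is routing: each token is sent to only a few of many expert networks, so the model has large capacity while activating few parameters per token. A generic top-k routed forward pass, with made-up dimensions, might look like this:

```python
# Generic top-k mixture-of-experts forward pass. Dimensions and weights are
# illustrative; Composer's actual architecture is not public.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

W_router = rng.standard_normal((d_model, n_experts))            # routing weights
W_experts = rng.standard_normal((n_experts, d_model, d_model))  # one expert per slot

def moe_forward(x):
    """Route each token to its top-k experts and mix their outputs."""
    logits = x @ W_router                        # (n_tokens, n_experts)
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        top = np.argsort(logits[t])[-top_k:]     # indices of the k best experts
        gates = np.exp(logits[t, top] - logits[t, top].max())
        gates /= gates.sum()                     # softmax over the chosen experts
        for gate, e in zip(gates, top):
            out[t] += gate * (x[t] @ W_experts[e])
    return out

tokens = rng.standard_normal((5, d_model))
mixed = moe_forward(tokens)  # same shape as the input: (5, 8)
```

Only `top_k` of the `n_experts` weight matrices are touched per token, which is why MoE models can be both large and fast at inference.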

Composer has been optimized specifically for software engineering, using reinforcement learning (RL) across a diverse set of development environments.

During each training iteration, the model receives a problem description and is asked to produce its best response, whether that is a code edit, a plan, or an informative answer.

The model can use simple tools such as reading and editing files, as well as more powerful ones such as running terminal commands and performing semantic searches across an entire codebase.
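A tool set like this is typically exposed to the model as a registry it calls by name with arguments. The toy harness below shows the pattern; the tool names and signatures are hypothetical, not Cursor's actual tool API:

```python
# Toy tool registry of the kind agent harnesses use. Tool names and
# signatures are hypothetical, not Cursor's actual API.
import pathlib
import subprocess
import tempfile

TOOLS = {}

def tool(fn):
    """Register a function as a callable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def read_file(path):
    return pathlib.Path(path).read_text()

@tool
def edit_file(path, text):
    pathlib.Path(path).write_text(text)
    return "ok"

@tool
def run_terminal(cmd):
    return subprocess.run(cmd, capture_output=True, text=True).stdout

def dispatch(name, **kwargs):
    # The model emits a tool name plus arguments; the harness executes it
    # and feeds the result back into the next model turn.
    return TOOLS[name](**kwargs)

# Example round trip: write a file with one tool, read it back with another.
path = pathlib.Path(tempfile.mkdtemp()) / "note.txt"
dispatch("edit_file", path=path, text="hello")
content = dispatch("read_file", path=path)
```

During RL training, the learning signal can then reward not just correct answers but efficient sequences of such tool calls.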

[Image 10]

Reinforcement learning makes it possible to optimize the model specifically for efficient software engineering.

Since response speed is key to interactive development, the model is encouraged to choose tools efficiently and to parallelize work whenever possible.

Additionally, the model is trained to become a more helpful assistant by reducing unnecessary replies and avoiding unfounded statements.

During the RL process, the model spontaneously learns useful abilities, such as executing complex searches, fixing linter errors, and writing and running unit tests.

[Image 11]

Efficient training of large MoE models requires significant investment in infrastructure and systems research.

Cursor has built a customized training infrastructure based on PyTorch and Ray to support asynchronous reinforcement learning in large-scale environments.

By combining MXFP8 MoE kernels with expert parallelism and hybrid sharded data parallelism, the model can be trained natively in low precision with very low communication overhead, scaling training to thousands of NVIDIA GPUs.
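The "asynchronous" part of this setup means rollout workers and the trainer are decoupled: workers push finished episodes as soon as they are ready instead of all waiting on each other. Cursor's real stack is built on PyTorch and Ray; the thread-based toy below only illustrates that producer/consumer pattern:

```python
# Toy sketch of asynchronous rollout collection. Cursor's real stack uses
# PyTorch and Ray; this only illustrates the producer/consumer pattern.
import queue
import random
import threading

rollouts = queue.Queue()

def actor(worker_id, n_episodes):
    # Each worker runs its environment independently and pushes results
    # as soon as they finish, without waiting for the other workers.
    for episode in range(n_episodes):
        reward = random.random()  # stand-in for an actual environment rollout
        rollouts.put((worker_id, episode, reward))

workers = [threading.Thread(target=actor, args=(i, 4)) for i in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()

# The trainer drains whatever has accumulated into a training batch.
batch = []
while not rollouts.empty():
    batch.append(rollouts.get())
```

At scale the same shape holds, except the workers are remote machines and the "trainer" is a sharded GPU job consuming rollouts continuously.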

Moreover, using MXFP8 training enables faster inference speeds without the need for post-training quantization.

Hands-On Experience with Composer

After updating Cursor to version 2.0, we immediately tested this model.

Our first impression is that it really is fast: almost every prompt finishes within seconds, often producing results in a little over ten seconds.

[Image 12]

First, we asked it to generate a macOS replica webpage; the result looked more like Linux, but was impressive nonetheless.

[Image 13]

A simulation of a spacecraft traveling from Earth to Mars, however, turned out less elegant.

[Image 14]

Still, the new Composer model seems particularly adept at front-end UI effects, generating good-looking pages even when the logic doesn't fully hold up.

[Image 15]

The generation speed is shown in the recording: it is so fast that index.html didn't even appear in the capture.

[Image 16]

This is the speed from a user's hands-on test.

[Image 17]

It can also run multiple agents simultaneously.

[Image 18]

After this major version update, hands-on testing suggests Cursor is starting to reduce its reliance on external models.

For example, the models offered alongside Composer included Grok, DeepSeek, and K2 (all open source except Grok), while GPT and Claude were notably absent.

[Image 19]

Early testers reported that the new Composer model is very fast.

[Image 20]

Many developers with early access to Cursor 2.0 echoed the same sentiment: fast and effective.

[Image 21]

From their feedback, I gathered some notable comments.

[Image 22]

In comparative tests, some users noted that while Composer is very fast, its intelligence does not match that of Sonnet 4.5 or GPT-5.

[Image 23]

Some more hardcore developers said they prefer a CLI to an IDE, and Cursor 2.0 did not win them over.

[Image 24]

On the interaction changes, some noted that the new multi-agent mode in Cursor 2.0 is particularly well suited to widescreen setups.

[Image 25]

This major 2.0 update marks a new milestone for Cursor.

However, the AI programming field remains highly competitive, with established players such as Claude Code and Codex, as well as various Chinese coding tools.

[Image 26]

Cursor's main advantage lies in its early entry: it captured developers' mindshare for AI programming tools from the start.

By heavily modifying VS Code and wrapping third-party model APIs, it quickly reached a $10 billion valuation, which may explain today's proliferation of AI programming tools.

This time, Cursor has finally begun developing its own coding model.

But who will ultimately prevail? Perhaps only the developers will decide.
