On Monday, Anthropic introduced a web app for its popular AI coding assistant, Claude Code. The tool lets developers create and manage multiple AI coding agents directly from their web browser.
The web version of Claude Code is available to subscribers on Anthropic’s $20-per-month Pro plan and its $100- and $200-per-month Max plans. Users can find it under the “Code” section at claude.ai, the same platform that hosts Anthropic’s consumer chatbot, or access it through the Claude iOS app.
The launch reflects Anthropic’s ongoing effort to expand Claude Code beyond its origins as a command-line interface (CLI) tool that developers previously accessed through a terminal. By making it available on the web, Anthropic aims to extend its AI coding agents to more platforms.
The landscape for AI coding tools has grown increasingly competitive. Microsoft’s GitHub Copilot led the charge initially, but Cursor, Google, OpenAI, and Anthropic have since introduced powerful AI coding tools of their own, many of which are also accessible on the web. Even so, Claude Code remains one of the most popular: its user base has grown tenfold since its broader release in May, and it now generates more than $500 million in revenue for the company annually.
Notably, Anthropic says 90% of the Claude Code product is now written by its own AI models. Wu, a former engineer, primarily reviews that output rather than writing code manually.
Early AI coding tools worked much like autocomplete, completing lines as developers typed. The new wave of tools, including Claude Code, instead lets developers spin up agents that work independently. That shift has turned many software engineers into managers of AI coding assistants in their daily workflows.
Nevertheless, not every developer has embraced the transition. A recent study found that some developers actually worked more slowly when using AI coding tools like Cursor. The study attributed the slowdown to the time developers spend prompting the models and waiting for their output, rather than to other factors. AI tools also often struggle with complex, large-scale codebases, forcing engineers to spend additional time correcting AI-generated errors.
Despite these challenges, companies like Anthropic remain committed to advancing AI coding agents. Anthropic CEO Dario Amodei has predicted that AI could soon generate 90% of the code engineers write. That may already be the case inside Anthropic, but widespread adoption across the industry may take more time.
