Anthropic has said it is working to fix a problem with Claude Code, its AI-powered coding tool, that causes users to hit usage limits faster than expected. The company made the announcement on Reddit, calling it the team's "top priority".


Claude Code has grown in popularity in recent months among software developers, who use it as part of their daily workflow. Any disruption to the service can directly affect their work.

Why Are Claude Code Users Running Out of Tokens So Fast?

The BBC reports that customers purchase tokens to use Anthropic's AI services, but the number of tokens consumed per task is not always clear. Several users responded to Anthropic's Reddit post with their own experiences.


One user noted that they hit the token limit "much later" on their free account than on their paid account, which costs $100 (£75) a month. Another flagged how quickly bugs in generated code can spiral, saying: "One session in a loop can drain your daily budget in minutes."

A third user pointed out the issue was not limited to Claude Code alone, writing: "A simple one sentence reply to a conversation just took me from 59% usage to 100%. How??"

The problem may also be connected to a change Anthropic introduced last week, when it rolled out peak-hour throttling on Claude, under which tokens are consumed more quickly when demand for the service is higher.

Claude Pro starts at $20 a month, with higher-usage tiers at $100 or $200 a month. Business pricing is also available for larger organisations.

Other Recent Issues at Anthropic

Separately, Anthropic recently published part of Claude Code's internal source code by mistake. An internal file containing 500,000 lines of code appeared on GitHub in what the company described as "human error, not a security breach"; it added that "no sensitive customer data or credentials were exposed or involved."

Claude Code's source code was not entirely unknown, as independent developers had previously reverse-engineered it, and an earlier version had leaked in February 2025. 

Anthropic is also currently in a legal dispute with the US government over how its tools can be used by the Department of Defense.