Coding Weekly AI News
March 2 - March 10, 2026
Claude Code Becomes the Top AI Coding Tool
The biggest news this week came from a new survey that looked at what tools programmers use every day. The survey, published on March 3 by The Pragmatic Engineer, asked nearly 1,000 software engineers simple questions about their work. The results surprised many people in the tech world. Claude Code, which came out in May 2025, is now the most-used AI coding assistant, beating popular tools like GitHub Copilot and Cursor less than a year after its release. At smaller companies, the numbers are even more dramatic. Three out of every four engineers said that Claude Code is their main tool for writing code.
The survey also showed how much programming has changed. An amazing 95% of programmers said they use AI tools at least once a week. Even more impressive, 75% of programmers said they use AI for half or more of their work. This means AI is no longer something only a few people use - it is now a normal part of how most programmers work.
The models that power Claude Code are also winning. Programmers said they prefer the Claude Sonnet 4.6 and Opus 4.6 models for coding tasks, giving them more votes than all other AI models combined. Most programmers also said they use two to four different AI tools at the same time. And more than half of programmers now regularly use AI agents - AI programs that can work on tasks without a human telling them exactly what to do at every step.
Cursor Launches Automatic AI Agent System
The company Cursor is not sitting still while Claude Code grows in popularity. On Thursday of this week, Cursor announced a new system called Automations that lets AI agents run automatically without a human starting them each time. Think of it like having a worker who can start their own tasks instead of waiting for instructions every single time. These Automations can be triggered by changes to code, messages on Slack (a messaging app that many companies use), or even just a timer.
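To make the idea concrete, here is a minimal sketch of what a trigger-based setup like this could look like. All of the names below (Trigger, CodeChange, dispatch, and so on) are invented for illustration and are not Cursor's actual Automations API; the code only shows the general pattern of events starting agent work instead of a person doing it.

```kotlin
// Hypothetical sketch: these names are invented for illustration,
// not Cursor's actual Automations API.
sealed interface Trigger
data class CodeChange(val repo: String, val branch: String) : Trigger
data class SlackMessage(val channel: String, val text: String) : Trigger
data class Timer(val cron: String) : Trigger

// Decide what agent work each event should start, with no human kicking it off.
// In a real system, the started task would pull a human in only if it gets stuck.
fun dispatch(trigger: Trigger): String = when (trigger) {
    is CodeChange -> "review ${trigger.repo}:${trigger.branch} for bugs and security issues"
    is SlackMessage -> "investigate report in #${trigger.channel}: ${trigger.text}"
    is Timer -> "write the weekly summary of code changes (schedule: ${trigger.cron})"
}

fun main() {
    val events = listOf(
        CodeChange("acme/web-app", "main"),
        SlackMessage("on-call", "checkout page is returning errors"),
        Timer("0 9 * * MON"),
    )
    events.forEach { println(dispatch(it)) }
}
```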
The problem Cursor was trying to solve is that programmers now have to keep track of too many AI agents working at once. With Automations, humans do not have to keep watching the AI - instead, the AI calls for human help only when it really needs it. One engineer at Cursor explained: "Humans are not completely out of the picture. It is that they are not always starting things. They get called in at the right moments in this process."
Cursor already had a simple version of this idea called Bugbot, which automatically checks new code for bugs every time a programmer makes changes. Now the company is applying the same idea to bigger jobs like security checks. Cursor says it runs hundreds of these automatic tasks every single hour. It is also using Automations for other jobs, like responding to problems when they happen and posting weekly summaries of code changes in the company's Slack.
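One concrete piece of that last job, posting a summary message to Slack, can be done with Slack's standard incoming webhooks. The sketch below is a generic example of that step only, not Cursor's internal code; the webhook URL and the summary text are placeholders, and a real URL would come from your own Slack workspace setup.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Placeholder URL; a real one comes from Slack's "Incoming Webhooks" setup page.
const val WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"

// Post a plain text message to a Slack channel through an incoming webhook.
fun postToSlack(text: String) {
    val payload = """{"text": "$text"}"""
    val request = HttpRequest.newBuilder(URI.create(WEBHOOK_URL))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(payload))
        .build()
    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
    println("Slack responded with HTTP ${response.statusCode()}")
}

fun main() {
    // Example summary text; in practice an agent would generate this.
    postToSlack("Weekly code summary: 42 pull requests merged, 3 flagged for security review.")
}
```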
Google Ranks AI Models for Android Apps
Google this week published something totally new: a ranking system called Android Bench to show which AI models are best at helping people make Android apps. Google realized that the normal tests for AI models do not check the specific challenges that Android programmers face. So they created their own test with tasks that Android developers really do, like building user interfaces with Jetpack Compose, handling asynchronous programming, and managing databases.
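To give a feel for the kind of task such a benchmark covers, here is a small example from the asynchronous programming category: loading two slow pieces of data at the same time with Kotlin coroutines, the standard approach in modern Android code. The function names and timings are invented for illustration, and the snippet needs the kotlinx-coroutines library.

```kotlin
import kotlinx.coroutines.async
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.delay
import kotlinx.coroutines.runBlocking

// Simulate two slow data sources, for example a network call and a database read.
suspend fun fetchUserName(): String { delay(100); return "Ada" }
suspend fun fetchUnreadCount(): Int { delay(150); return 3 }

// Load both pieces of data concurrently instead of one after the other.
suspend fun loadScreenState(): String = coroutineScope {
    val name = async { fetchUserName() }
    val unread = async { fetchUnreadCount() }
    "${name.await()} has ${unread.await()} unread messages"
}

fun main() = runBlocking {
    println(loadScreenState()) // finishes after ~150 ms, not ~250 ms
}
```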
When Google tested all the major AI models, they found that Gemini 3.1 Pro Preview got the best score at 72.4%. Claude Opus 4.6 came second with 66.6%. OpenAI's GPT 5.2 Codex came in third. This gives Android programmers a clear guide for which AI tools will work best for their specific job.
Research Questions Whether Instruction Files Help AI Coding Agents
Researchers at ETH Zurich, a university in Switzerland, published interesting research this week about the instruction files that many programmers put in their projects to help AI agents understand how the project works. These files have names like AGENTS.md or CLAUDE.md. Many people recommend creating these files, but the Swiss researchers wanted to know if they actually help.
The researchers tested this by creating their own set of real coding problems from real projects. They tested four different AI coding agents on these problems and measured three things: how many problems they solved, how many steps it took, and how much it cost to run. They tested each problem three ways: with no instruction file, with an instruction file written by an AI, and with an instruction file written by a real person.
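A rough sketch of that kind of measurement loop is shown below. Everything here is hypothetical - the agent names, the number of problems, and the runAgent stand-in are invented, not the paper's actual harness. It only shows the shape of the experiment: run every agent on every problem under each of the three conditions and record success, steps, and cost.

```kotlin
// Hypothetical sketch of the experiment's shape; all names and numbers are invented.
enum class InstructionFile { NONE, AI_WRITTEN, HUMAN_WRITTEN }

data class RunResult(val solved: Boolean, val steps: Int, val costUsd: Double)

// Stand-in for "run one coding agent on one problem under one condition".
fun runAgent(agent: String, problemId: Int, condition: InstructionFile): RunResult =
    RunResult(solved = false, steps = 0, costUsd = 0.0)

fun main() {
    val agents = listOf("agent-a", "agent-b", "agent-c", "agent-d")
    val problems = (1..50).toList()

    for (condition in InstructionFile.values()) {
        val results = agents.flatMap { agent ->
            problems.map { problem -> runAgent(agent, problem, condition) }
        }
        val successRate = 100.0 * results.count { it.solved } / results.size
        val averageSteps = results.map { it.steps }.average()
        val totalCost = results.sumOf { it.costUsd }
        println(String.format("%s: success %.1f%%, avg steps %.1f, total cost %.2f USD",
            condition, successRate, averageSteps, totalCost))
    }
}
```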
The results were surprising. Files written by AI actually made agents slower and more expensive, and they reduced success by 3% compared to no file at all. Files written by real people did help a little - improving success by 4% - but they also made agents take more steps and cost more money to run. Overall, the researchers found that agents were spending more time and effort without getting much better results. However, programmers who read this research pointed out that the real value of instruction files might show up in bigger, more complicated projects that the researchers could not include in their tests.