Anthropic just released their fourth Economic Index report, analyzing 2 million conversations across Claude.ai and their API. Some of this we already suspected; now Anthropic has the data.
My top takeaways:
#1. AI Helps Your Best People the Most, Not Your Junior Staff. The Data Confirms It.
You hear this from so many top CTOs: senior engineers and experienced ICs get far more leverage from AI coding tools than junior staff. The assumption that AI would flatten the skill curve? Wrong. At least so far.
Anthropic’s data backs this up:
- Tasks requiring a high school education (12 years) → 9x speedup
- Tasks requiring a college degree (16 years) → 12x speedup
- API/enterprise tasks → even higher across the board

Yes, success rates drop slightly for harder tasks (66% vs. 70% for simpler ones). But the speedup gains far outweigh the reliability dip: the productivity boost grows with task complexity faster than the success rate falls.
The implication: Stop thinking about AI as a tool for automating junior work. Your $200K engineer gets more leverage from AI than your $60K coordinator. Build your AI strategy around your best people, not your entry-level roles.
#2. AI Can Handle Much Bigger Projects Than You Think—If You Use It Right. Much Bigger.
The “AI can only do small tasks” narrative is holding teams back. I’ve seen this firsthand—companies that treat AI as a micro-task tool vs. companies that redesign workflows around it get completely different results.
Researchers test AI by giving it a complete task and seeing if it finishes in one shot. By that measure, Claude succeeds about 50% of the time on 2-hour tasks.
But in the real world? Anthropic found users successfully complete tasks that would take 19 hours to do manually.
That’s nearly 10x the benchmark.
The difference? Real users:
- Break big projects into smaller steps. Write the outline, then the intro, then each section.
- Course-correct along the way. If something’s off, they fix it before moving on.
- Pick the right tasks. They’ve learned what AI handles well.
The implication: The difference between “AI doesn’t work for us” and “AI 10x’d our output” is often just workflow design. Train your team on this.
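The decompose-and-review pattern above can be sketched in a few lines. This is my own illustration of the workflow, not anything from the report: `run_model` is a stand-in for whatever LLM call you use, stubbed here so the example runs offline, and `looks_ok` stands in for real review (human eyes, tests, linters).

```python
# Sketch of the decompose-and-review workflow: break the project into
# steps, check each step's output, and course-correct before moving on.
# `run_model` and `looks_ok` are illustrative stubs, not a real API.

def run_model(prompt):
    return f"[draft for: {prompt}]"  # stub for an actual LLM call

def looks_ok(draft):
    return "draft" in draft  # stand-in for human review, tests, linters

def write_report(sections, max_retries=2):
    finished = []
    for section in sections:
        context = "\n".join(finished)  # feed completed sections back in
        draft = run_model(f"Given:\n{context}\nWrite the {section}.")
        # Fix problems at each step instead of one-shotting the whole job.
        retries = 0
        while not looks_ok(draft) and retries < max_retries:
            draft = run_model(f"Revise this {section}: {draft}")
            retries += 1
        finished.append(draft)
    return finished

print(len(write_report(["outline", "intro", "analysis", "conclusion"])))  # 4
```

The point is the loop structure, not the stubs: each step is small enough to verify, and errors get caught before they compound across a 19-hour project.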
#3. “Task Coverage” Is a Misleading Metric—Here’s What Actually Matters
Everyone’s talking about what percentage of jobs AI can “cover.” But I’ve seen companies with 80% task coverage get minimal productivity gains, and companies with 30% coverage transform their operations.
Why? Because coverage doesn’t account for two things: success rates and time spent.
In January 2025, 36% of occupations saw at least a quarter of their tasks performed with Claude's help. By November 2025, it was 49%. That's a 36% relative increase in under a year.
But Anthropic created a better metric, “effective AI coverage,” weighting tasks by actual time spent AND success rates. The results reshuffled everything:
Jobs MORE affected than coverage suggests:
- Data entry keyers — Only 2 of 9 tasks covered, but those 2 are what they spend most of their time doing. High success on high-time tasks = massive impact.
- Radiologists — AI can’t do the hands-on work, but it nails the core knowledge work (interpreting images, preparing reports).
Jobs LESS affected than coverage suggests:
- Software developers — High task coverage, but success rates drag down effective impact
- Teachers — Same story
- Microbiologists — Half their tasks covered, but not their most time-intensive ones (hands-on lab work)
The implication: When evaluating AI’s impact on roles, ask: “What tasks take the most time, and how reliable is AI on those specific tasks?” That’s the number that matters for headcount and process decisions.
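The reweighting is easy to sketch as a weighted sum. This is my back-of-the-envelope version of the idea, not Anthropic's actual methodology, and every number below is invented for illustration:

```python
# Back-of-the-envelope "effective AI coverage": weight each task by the
# share of work time it consumes and by AI's success rate on it.
# All task names and figures are illustrative, not from the report.

def effective_coverage(tasks):
    """tasks: list of (time_share, ai_covered, success_rate) tuples.
    time_share values should sum to 1.0 across the whole role."""
    return sum(time * success for time, covered, success in tasks if covered)

# A hypothetical data entry keyer: only 2 of 4 tasks are covered,
# but those 2 dominate the workday and AI succeeds on them.
keyer = [
    (0.50, True,  0.90),   # enter records
    (0.30, True,  0.85),   # verify entries
    (0.15, False, 0.0),    # file paper documents
    (0.05, False, 0.0),    # office errands
]

# Naive task coverage says 2/4 = 50%. Effective coverage:
print(round(effective_coverage(keyer), 3))  # 0.705
```

Flip the time shares so the uncovered tasks dominate and the same 50% naive coverage collapses to a small effective number, which is the microbiologist case in reverse.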

#4. Deskilling Is the Real Story—Not Job Replacement
The “will AI take my job” debate misses the point. The more interesting question: what does the job look like after AI takes the interesting parts?
I’ve been watching this play out in real-time. AI isn’t eliminating roles wholesale—it’s changing what’s left in them. And often, what’s left is the lower-skill work.
Anthropic’s data confirms it:
- Average task across all jobs requires 13.2 years of education
- Average Claude-covered task requires 14.4 years of education
AI is taking the harder parts of jobs, not the easier ones.
Specific examples of deskilling:
- Technical writers lose “Analyze developments in specific field to determine need for revisions” (18.7 years required) and “Review published materials and recommend revisions” (16.4 years). What’s left? “Draw sketches to illustrate materials” (13.6 years).
- Travel agents lose “Plan, describe, arrange, and sell itinerary tour packages” (13.5 years). What remains? “Print transportation tickets” (12.0 years) and “Collect payment” (11.5 years).
But some jobs get UPskilled:
- Real estate managers — AI handles routine admin (maintaining records, reviewing rents). What remains is higher-level work: securing loans, negotiating contracts, stakeholder management.
The implication: This isn’t about headcount planning. It’s about role redesign. What does a “technical writer” role look like when the technical analysis is AI-assisted? Compensation models, hiring criteria, and career paths all need to evolve.

#5. The Productivity Gains From AI Are Real—But Smaller Than the Headlines
Everyone’s quoting the “1.8% annual productivity boost” number from earlier Anthropic research. I’ve been skeptical that it would hold up once you account for reliability issues.
Turns out: it doesn’t. But the gains are still significant.
When Anthropic adjusted for actual task success rates:
- Claude.ai: 1.2 percentage points (down from 1.8)
- API (harder tasks): 1.0 percentage points
That’s about a 33-44% haircut from the headline number.
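The adjustment is roughly the headline figure scaled by task success rate. This is my guess at the mechanics for intuition's sake; the report's actual weighting is surely more involved:

```python
# Rough reconstruction of the success-rate adjustment, for intuition only.
# Success rates echo the ~66-70% figures cited earlier; the API number is
# backed out from the adjusted result rather than taken from the report.
headline = 1.8  # percentage points of annual productivity growth

for label, success_rate in [("Claude.ai", 0.66), ("API", 0.56)]:
    adjusted = headline * success_rate
    haircut = 1 - adjusted / headline
    print(f"{label}: {adjusted:.1f} pp ({haircut:.0%} haircut)")
```

Scaling 1.8 by a two-thirds success rate lands almost exactly on the published 1.2, which is why the haircut tracks the failure rate so closely.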
But context matters. A sustained 1% annual productivity increase would return US productivity growth to late-1990s rates. That’s meaningful.
And this data was collected before Opus 4.5 shipped. The ceiling is rising.
The implication: Be realistic but optimistic. The productivity gains are real—but they won’t be evenly distributed. The gap between “uses AI” and “gets productivity gains from AI” is where competitive advantage lives.

Bonus: US Adoption Is Converging Faster Than Any Tech in History
I keep hearing “AI is only for the coasts.” The data says otherwise.
Yes, states with more tech workers (Washington, Virginia, D.C.) still use Claude more per capita. But lower-usage states are catching up fast.
Anthropic’s model: if current trends hold, Claude usage per capita would equalize across all US states within 2-5 years.
For comparison: economically significant technologies in the 20th century took about 50 years to fully diffuse across the US.
We’re looking at adoption at least 10x faster than anything before it.
The implication: The “early adopter” window is shorter than you think. Geographic and expertise moats are disappearing.
Bonus: Prompting Skill Is Still a Huge Competitive Advantage
One more finding worth noting: there’s a remarkably strong correlation (r > 0.92) between prompt sophistication and output quality.
Sophisticated prompts get sophisticated outputs. Simple prompts get simple outputs. Claude calibrates to the user. This shows up at the country level too—nations with higher educational attainment get more value from AI, independent of adoption rates.
The implication: The ROI gap between a team that knows how to prompt well and one that doesn’t is enormous. Training matters.
The Real Data On What We’ve All Been Talking About
The Anthropic Economic Index gives us real data on what’s been anecdotal until now.
The patterns I’ve seen across portfolio companies are showing up in the numbers:
- AI amplifies your best people—12x speedup on complex tasks vs 9x on simple ones
- Workflow design matters more than model capability for real-world results
- “Coverage” is misleading—effective impact depends on success rates and time spent
- Deskilling is the underappreciated story—roles are changing composition, not disappearing
- Productivity gains are real but require execution—1.0-1.2% annually, not 1.8%
- Adoption is converging 10x faster than historical tech
- Prompting skill is becoming real leverage
If you’re a B2B founder, this should inform your product roadmap, your GTM strategy, and how you structure your team.
The AI transition isn’t coming. It’s here. Don’t drag your feet, even a little.


