So ICONIQ Growth is back with their 2025 State of AI Report, with some great data from a survey of 300 software companies building AI products.
Survey Overview
Survey Methodology & Sample Size
- 300 executives surveyed at software companies building AI products (April 2025)
- Respondent roles: CEOs, Heads of Engineering, Heads of AI, Heads of Product
- Geographic split: 88% North America, 12% Europe
- Data collection: Anonymous external survey including some (but not all) ICONIQ portfolio companies plus non-portfolio companies
Company Revenue Distribution
- 26% of respondents: $1B+ revenue
- 13% each: <$10M, $10M-$25M, and $100M-$200M revenue
- 11%: $200M-$500M revenue
- 10%: $25M-$50M revenue
- 8%: $500M-$1B revenue
- 7%: $50M-$100M revenue
High-Growth Company Definition (13% of sample)
ICONIQ defined “high-growth companies” using three criteria:
- AI Product Traction: Product in General Availability or Scaling stage
- Revenue: At least $10M annual revenue
- Growth Rates: 100%+ YoY if <$25M revenue, 50%+ YoY if $25M-$250M revenue, 30%+ YoY if $250M+ revenue
AI Maturity Breakdown
- 32% AI-Native: Core product/business model is AI-driven
- 37% AI-Enabled: Creating new (non-core) AI products
- 31% AI-Enabled: Adding AI capabilities to existing products
Top 10 Learnings:
1. AI-Native Companies Are Outpacing AI-Enabled by 3.6X in Product Velocity
The Numbers: 47% of AI-native companies have products in the scaling stage vs. only 13% of AI-enabled companies. Just 1% of AI-native companies are still pre-launch vs. 11% of AI-enabled.
Why This Matters: AI-native companies are moving through the product lifecycle 3.6X faster than companies retrofitting AI into existing products. They’re also building more aggressively—79% are building agentic workflows vs. 62% of AI-enabled companies.
The Tactical Insight: If you’re an AI-enabled company, you’re fighting organizational inertia, legacy architecture debt, and “trial-and-error phases that slow down AI-enabled companies retrofitting AI into existing workflows.” AI-native companies bypass this entirely by architecting around generative intelligence from day one.
Bottom Line: Being AI-first isn’t just about product—it’s about speed to market and achieving traction earlier.
2. Cost Becomes King for Internal AI, Accuracy Rules for Customer-Facing
The Numbers: For internal use cases, 74% prioritize cost as top consideration, 72% accuracy, 50% privacy. For customer-facing products, 74% prioritize accuracy, 57% cost, 41% customization ability.
Why This Matters: Your procurement strategy needs to flip based on use case. Internal tools can tolerate some hallucinations if they save money. Customer-facing products that hallucinate kill trust and churn users.
The Tactical Insight: Smart companies run dual AI strategies: cheaper models (like DeepSeek) for internal productivity, premium models (GPT-4, Claude) for customer-facing features (see the routing sketch below). The report notes cost has jumped dramatically in importance due to “commoditization of the model layer with the rise of more cost-efficient models.”
Bottom Line: One AI strategy doesn’t fit all—segment by user, not by technology.
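To make the dual-strategy concrete, here is a minimal routing sketch in Python. The tier split mirrors the report's internal-vs-customer-facing finding, but the model names and per-token prices are made-up placeholders, not figures from the report:

```python
from dataclasses import dataclass

@dataclass
class ModelChoice:
    name: str
    usd_per_1m_tokens: float  # assumed placeholder pricing

# Assumed tiers: optimize internal tools for cost, customer-facing for accuracy.
MODEL_TIERS = {
    "internal": ModelChoice("cheap-open-model", 0.50),
    "customer_facing": ModelChoice("premium-frontier-model", 15.00),
}

def pick_model(use_case: str) -> ModelChoice:
    """Route by user segment: internal tolerates some error, customers don't."""
    if use_case not in MODEL_TIERS:
        raise ValueError(f"unknown use case: {use_case!r}")
    return MODEL_TIERS[use_case]

print(pick_model("internal").name)         # cheap model for productivity tooling
print(pick_model("customer_facing").name)  # premium model where accuracy rules
```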
3. The Multi-Model Strategy Is Real – Average 2.8 Models Per Company
The Numbers: 95% use OpenAI, 54% Anthropic, 54% Google Gemini, 50% Meta LLaMA, 26% Mistral, 23% DeepSeek. Companies are “increasingly adopting a multi-model approach, leveraging different providers based on use case, performance, cost, and customer requirements.”
Why This Matters: No single model wins across all dimensions. OpenAI for general use, Anthropic for reasoning, Google for enterprise integration, open-source for cost optimization and inference speed.
The Tactical Insight: Build model-agnostic architectures from day one (a minimal sketch follows below). The report emphasizes that “architectures are being built to support quick model swaps, with some leaning toward open-source models for cost and inference speed advantages.”
Bottom Line: Vendor lock-in is the new technical debt. Design for portability or get held hostage by pricing changes.
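As an illustration of what designing for portability can look like, here is a minimal sketch of a provider-agnostic layer. The vendor names come from the report; the `LLMClient` interface and registry are my own assumptions, with the actual SDK calls stubbed out:

```python
from abc import ABC, abstractmethod

class LLMClient(ABC):
    """One narrow interface; vendor SDKs live behind it."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class OpenAIClient(LLMClient):
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("wrap the OpenAI SDK call here")

class AnthropicClient(LLMClient):
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("wrap the Anthropic SDK call here")

# Swapping providers becomes a config change, not a rewrite.
PROVIDERS: dict[str, type[LLMClient]] = {
    "openai": OpenAIClient,
    "anthropic": AnthropicClient,
}

def get_client(provider: str) -> LLMClient:
    return PROVIDERS[provider]()

client = get_client("openai")  # flip to "anthropic" without touching call sites
```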

4. API Usage Fees Are Your Biggest Cost Blindspot
The Numbers: 70% cite API usage fees as the most challenging infrastructure cost to control, followed by inference costs (49%), model retraining (48%), training costs (47%), storage (42%).
The Spend Reality: Monthly inference costs surge from roughly $100K pre-launch to $1.1M at scale for the average company, and to $2.3M for high-growth companies. Training costs range from $163K to $1.5M monthly depending on product maturity.
Why This Matters: Unlike traditional SaaS with predictable infrastructure costs, AI costs scale directly with usage and are highly variable. Companies report “the most unpredictability around variable costs tied to external API consumption.”
The Tactical Insight: 41% are moving to open-source models and 37% are optimizing inference efficiency to control costs. Build cost monitoring and alerting into your AI stack from day one, not as an afterthought (a toy monitor is sketched below).
Bottom Line: If you can’t predict your AI spend, you can’t scale profitably. Implement usage-based pricing or face margin compression.
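As a sketch of what “cost monitoring from day one” can mean, here is a toy per-feature spend tracker. The blended price, budget threshold, and feature name are illustrative assumptions, not numbers from the report:

```python
from collections import defaultdict

USD_PER_1K_TOKENS = 0.01       # assumed blended API price
MONTHLY_BUDGET_USD = 50_000.0  # assumed per-feature monthly budget

spend_usd: dict[str, float] = defaultdict(float)

def record_call(feature: str, tokens_used: int) -> None:
    """Attribute every API call to a feature and alert on budget overrun."""
    spend_usd[feature] += tokens_used / 1_000 * USD_PER_1K_TOKENS
    if spend_usd[feature] > MONTHLY_BUDGET_USD:
        # In production: page on-call, throttle, or downgrade to a cheaper model.
        print(f"ALERT: {feature} over ${MONTHLY_BUDGET_USD:,.0f} this month")

record_call("doc-summarizer", tokens_used=2_000_000)  # adds $20 of tracked spend
```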
5. Coding Assistance Delivers Nearly 2X the Impact of Any Other Use Case
The Numbers: 65% report coding assistance as their highest productivity impact vs. 37% for content generation, 30% documentation, 28% product design. High-growth companies see 33% of total code written with AI vs. 27% for others. Average productivity gains: 15-30% across GenAI use cases.
The Tool Reality: GitHub Copilot dominates with 74% adoption, followed by Cursor at 50%. The long tail drops off sharply after the top two players.
Why This Matters: Coding assistance has measurable, immediate ROI. It’s not about replacing developers; it’s about multiplying their output. The productivity gap between AI-assisted and traditional developers is becoming insurmountable.
The Tactical Insight: This isn’t just about individual productivity—it’s competitive advantage. Companies not AI-augmenting their engineering teams will lose the talent war and ship slower.
Bottom Line: AI coding isn’t nice-to-have—it’s table stakes. The question isn’t whether to adopt coding assistance, but how fast you can roll it out.
6. High-Growth Companies Dedicate 37% of Engineering to AI by 2026
The Numbers: High-growth companies plan 28% of engineering focused on AI in 2025, scaling to 37% by 2026. All other companies: 18% in 2025, 28% in 2026. High-growth companies are maintaining a 9-10 percentage point lead.
The Hiring Reality: 88% have AI/ML engineers (70-day average hire time), 72% data scientists (68 days), 54% AI product managers (67 days). 46% say they’re not hiring fast enough, primarily due to lack of qualified candidates (60%).
Why This Matters: AI isn’t a side project—it’s becoming the primary engineering focus. High-growth companies are betting bigger and moving faster, creating a compounding advantage.
The Tactical Insight: This represents a fundamental shift in engineering allocation. Traditional feature development is getting crowded out by AI development. Plan your hiring pipeline accordingly.
Bottom Line: If you’re not betting at least 1/3 of your engineering org on AI by 2026, you’re not betting big enough to win.
7. Hybrid Pricing Models Dominate – Pure Subscription Is Dead
The Numbers: 38% use hybrid pricing models, 36% subscription/seat-based, 19% usage-based, 6% outcome-based. For AI-enabled companies specifically: 40% include AI in premium tiers, 33% include at no extra cost, 21% have separate usage-based pricing.
The Shift Reality: 37% plan to change AI pricing in the next 12 months, with companies exploring “willingness to pay and clear connection to ROI outcomes” and “consumption and outcome-based pricing.”
Why This Matters: Pure subscription doesn’t work when your costs are variable and value delivery is exponential. The report notes AI-enabled SaaS vendors currently see AI as a “tiebreaker or upsell hook—not yet as its own profit center.”
The Tactical Insight: Start with bundled/premium pricing for adoption, then shift to usage-based as you build telemetry (see the billing sketch below). One company noted: “The subscription model is not working for us. Power users tend to use a lot resulting in negative margins considering LLM API costs.”
Bottom Line: SaaS pricing orthodoxy doesn’t work for AI products. Hybrid models let you capture value while managing cost variability.
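To show why hybrid beats pure subscription on margins, here is a minimal billing sketch: a flat fee bundles a usage allowance, and metered overage keeps power users margin-positive. All prices and allowances are invented for illustration:

```python
BASE_FEE_USD = 99.0            # assumed monthly seat fee
INCLUDED_TOKENS = 1_000_000    # assumed bundled allowance
OVERAGE_PER_1K_TOKENS = 0.02   # assumed rate, set above your LLM API cost

def monthly_invoice(tokens_used: int) -> float:
    """Flat fee plus metered overage above the bundled allowance."""
    overage_tokens = max(0, tokens_used - INCLUDED_TOKENS)
    return BASE_FEE_USD + overage_tokens / 1_000 * OVERAGE_PER_1K_TOKENS

print(monthly_invoice(500_000))    # light user pays the flat $99.00
print(monthly_invoice(5_000_000))  # power user pays $99 + $80 overage = $179.00
```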
8. Hallucinations Are Still Issue #1 – Even Above Cost and Security
The Numbers: 39% cite hallucinations as their top deployment challenge, followed by explainability/trust (38%), proving ROI (34%), compute cost (32%), and security (26%). This ranks above traditional concerns like talent (16%) and latency (15%).
The Reality Check: Despite all the advances in model quality, hallucinations remain the #1 blocker for production deployments. It’s not a solved problem—it’s still the fundamental challenge preventing AI from being truly reliable.
The Vertical Insight: Explainability and trust rank even higher for companies building vertical AI applications, who “may deal with additional compliance and legal restrictions in regulated industries like healthcare.”
The Tactical Insight: Most teams still treat hallucinations as a model problem rather than a system design problem. The companies succeeding are building guardrails, human-in-the-loop systems, and validation layers rather than hoping for perfect models (a minimal sketch follows below).
Bottom Line: If you’re not designing your AI system to handle hallucinations gracefully, you’re designing it to fail in production. This isn’t going away with better models—it’s a fundamental architecture challenge.
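As one illustration of treating hallucinations as a system design problem, here is a deliberately naive grounding check with a human-in-the-loop fallback. Real systems use stronger methods (entailment models, citation checking); this sketch only shows the architectural shape:

```python
import re

def words(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def is_grounded(answer: str, sources: list[str], min_overlap: float = 0.5) -> bool:
    """Naive check: enough of the answer's words appear in retrieved sources."""
    answer_words = words(answer)
    source_words = words(" ".join(sources))
    if not answer_words:
        return False
    return len(answer_words & source_words) / len(answer_words) >= min_overlap

def handle(answer: str, sources: list[str]) -> str:
    if is_grounded(answer, sources):
        return answer                  # validated: ship to the user
    return "ESCALATE_TO_HUMAN_REVIEW"  # human-in-the-loop fallback

print(handle("Paris is the capital of France", ["France's capital is Paris."]))
```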
9. $100M+ Revenue = Dedicated AI Leadership Threshold
The Numbers: At $100M-$200M revenue, 50% have dedicated AI/ML leadership vs. 33% under $100M. This jumps to 61% for $1B+ companies. AI spending as a percentage of R&D budget ranges from 10-25% depending on company size, with larger companies spending higher percentages.
The Organizational Reality: Companies under $100M: 59% say “AI is part of broader R&D strategy.” Over $100M: dedicated leadership becomes the norm due to “increasing operational complexity and the need to have a centralized owner for AI strategy.”
Why This Matters: At scale, AI becomes too strategic and complex to leave to general R&D leadership. It requires dedicated strategy, governance, and execution oversight.
The Tactical Insight: This isn’t just about titles—it’s about organizational commitment. Dedicated AI leadership signals to the market, employees, and investors that AI is core to your strategy, not a side experiment.
Bottom Line: AI leadership isn’t overhead—it’s competitive infrastructure. The $100M threshold is when AI moves from R&D experiment to business strategy.
10. Internal Productivity Budgets Are Doubling to 2% of Revenue
The Numbers: Internal AI productivity spend is nearly doubling from 2024 to 2025 across all revenue tiers. Companies are spending 1-8% of total revenue on internal AI productivity, with $1B+ companies spending $34.2M in 2024, scaling to $60.4M in 2025.
The Budget Reality: Companies draw on multiple budget sources: 48% from R&D (down from 59%), 39% from business units, 47% from innovation budgets, 57% from headcount budgets (up significantly), and 27% net new budget.
The Access vs. Usage Gap: 70% of employees have access to AI tools, but only 50% use them regularly. This gap is worse at $1B+ companies (62% access, 44% usage).
Why This Matters: Companies are betting big on internal productivity, but many are getting disappointing returns due to poor adoption. The companies cracking the adoption puzzle are seeing 15-30% productivity gains.
The Tactical Insight: Don’t just buy AI tools—invest in change management. As one executive noted: “Just deploying tools is a recipe for disappointment. You need to pair availability with scaffolding that includes training, spotlighting champions, and relentless executive support.”
Bottom Line: Treating internal AI as an expense instead of a growth investment is a category error. But budget without adoption strategy is wasted money.

