
Introduction
The May 2025 "Trends – Artificial Intelligence" report by BOND, spearheaded by Mary Meeker, marks a significant moment in technology analysis. This blog post serves as an augmented companion to Meeker's extensive AI report, aiming to provide detailed narratives for each slide, verify its contents against external data, and enrich it with deeper insights for a general readership.
Mary Meeker's longstanding reputation, often dubbed the "Queen of the Internet" 2, lends considerable authority to this analysis. Her previous annual reports have been pivotal in shaping understanding and investment within Silicon Valley and the broader technology sector.3 This AI-centric report is a natural extension of her influential work, now directed at what many consider the most transformative technology of our time. The shift in focus itself is noteworthy. Meeker, previously known for comprehensive "Internet Trends" reports covering the entire digital landscape 4, now dedicates a substantial, 340-page document solely to "Artificial Intelligence".1 This pivot suggests that AI is no longer merely a component of internet trends but has emerged as a dominant, standalone technological force. It mirrors the broader industry's reorientation, where AI is increasingly the central engine of innovation and investment, rather than a peripheral technology. Thus, leading analysts like Meeker now consider AI the primary driver of technological and economic change, deserving the same in-depth, periodic scrutiny previously reserved for the internet as a whole.
Context
The report's origin involved compiling foundational AI trends, a task that evolved into a "beast" due to the rapid and constant flux in data, described as a "data game of whack-a-mole." It references Vint Cerf's 1999 "dog year" analogy for the internet's pace, noting that AI user and usage trending is "materially faster," to the point where "machines can outpace us." Key drivers for this acceleration include global internet infrastructure accessible to 5.5 billion citizens, vast digital datasets accumulated over three decades, and breakthrough Large Language Models (LLMs) like OpenAI's ChatGPT (launched November 2022). The landscape is further shaped by aggressive AI company founders (driving innovation, investment, and rapid capital cycles) and traditional tech companies redirecting substantial free cash flows toward AI. Finally, acute global competition, especially between China and the USA, is a defining feature.1
- Vint Cerf's "Dog Year": While the precise 1999 quote's context is not detailed in the provided material, Vint Cerf, a "Founder of the Internet," has frequently commented on the internet's dynamic and evolving nature.12 The "dog year" metaphor aptly captured the internet's accelerated pace compared to traditional industries at the time.
- Global Internet Users: Current estimates from the International Telecommunication Union (ITU) and DataReportal indicate approximately 5.5 billion internet users globally by early 2025, aligning with Meeker's figure.14
- ChatGPT's Launch: The November 2022 public launch of ChatGPT is a well-documented milestone.1
- Investment Trends: Reports from 2024 and 2025 confirm record levels of venture capital funding in AI and substantial AI-focused investments by incumbent technology firms.16
- US-China Competition: The escalating technological competition in AI between the United States and China is a widely recognized and intensifying geopolitical dynamic.18

The "whack-a-mole" analogy used to describe the process of updating AI data is particularly telling. It suggests that the field is characterized by rapid, unpredictable, and interconnected changes, where resolving one data point or understanding one trend immediately necessitates re-evaluating others.1 This is not merely about new data emerging; it signifies that the foundational metrics, definitions, and even the core characteristics of the AI field are themselves in a state of constant flux. For instance, what constitutes a "leading model," how "user engagement" is best measured, or even the precise "AI market size" can shift dramatically as new capabilities and products are released. This contrasts sharply with more mature technology sectors, where key metrics and definitions possess relative stability. Consequently, analyzing AI requires not just tracking quantitative changes but also continuously reassessing the very frameworks and terminologies used for measurement. The field is so nascent that its fundamental attributes are being defined and redefined in real time, rendering long-term forecasting exceptionally challenging.

Furthermore, Meeker's assertion that AI's ramp-up is "materially faster" than the internet's and that "machines can outpace us" points to a qualitative shift beyond mere speed.1 The internet's rapid growth was primarily driven by human innovation and adoption on a new global platform.12 AI, particularly generative AI and LLMs, introduces an element where machines learn and improve with a degree of autonomy, at a scale and velocity that humans cannot directly match. The phrase "machines can outpace us" encapsulates this fundamental difference.
It's not just that development cycles are shorter; the technology itself possesses a capacity for self-evolution that previous General Purpose Technologies (GPTs) lacked. This is compounded by the immediate global accessibility of many AI tools 1, unlike the internet's more gradual, region-by-region rollout. This suggests the "AI era" may represent a phase shift where the rate of technological advancement is no longer solely dictated by human ingenuity cycles but by a human-machine co-evolution, with machines increasingly setting the pace. This has profound implications for societal adaptation, regulation, and the future of innovation itself.
Slide 4: Outline (Repeated on Pages 10 and 53) 1
- Meeker's Content: The report is structured into eight main sections:
- Seem Like Change Happening Faster Than Ever? Yes, It Is (Pages 9-51)
- AI User + Usage + CapEx Growth = Unprecedented (Pages 52-128)
- AI Model Compute Costs High / Rising + Inference Costs Per Token Falling = Performance Converging + Developer Usage Rising (Pages 129-152)
- AI Usage + Cost + Loss Growth = Unprecedented (Pages 153-247)
- AI Monetization Threats = Rising Competition + Open-Source Momentum + China's Rise (Pages 248-298)
- AI & Physical World Ramps = Fast + Data-Driven (Pages 299-307)
- Global Internet User Ramps Powered by AI from Get-Go = Growth We Have Not Seen Likes of Before (Pages 308-322)
- AI & Work Evolution = Real + Rapid (Pages 323-336)
Slide 5: Charts Paint Thousands of Words… (Summary Charts 1) 1
- Narrative: This slide presents a visual summary of key trends detailed later in the report, offering a quick overview of AI's multifaceted impact.
- Chart 1: Developers in Leading Chipmaker's Ecosystem
- Meeker's Content: Shows growth from 0 developers in 2005 to 6 million (6MM) by 2025 in a leading chipmaker's ecosystem. Source: Leading Chipmaker. (Details on Page 38).1
- Narrative: The AI developer community is expanding rapidly, crucial for innovation and application building. NVIDIA, a leading chipmaker, reported its developer program grew from virtually none in 2005 to 2.5 million by 2021 and is projected to reach 6 million by 2025, indicating a robust talent pool emerging around AI-enabling hardware.1 This growth is foundational, as developers are the architects of AI-powered solutions.
- Verification/Augmentation: NVIDIA's CUDA platform, launched in 2007 21, has been central to GPU-accelerated computing. The company's financial reports and CEO letters frequently highlight the growth of their developer ecosystem as a key strength and indicator of AI adoption.22 The 6MM figure aligns with the company's trajectory and market position.
- Chart 2: Internet vs. Leading USA-Based LLM: Total Current Users Outside North America
- Meeker's Content: Compares the Internet (90% users outside North America @ Year 23) with a leading USA-based LLM (90% users outside North America @ Year 3). Notes LLM data is for monthly active mobile app users and app unavailability in China/Russia. Sources: UN/ITU, Sensor Tower. (Details on Page 55).1
- Narrative: AI tools like leading LLMs are achieving global reach much faster than the internet did, signifying rapid international adoption.
- Verification/Augmentation: The ITU provides historical data on internet adoption.15 Sensor Tower tracks mobile app usage.25 While ChatGPT's global spread is fast, the exclusion of major markets like China (where local alternatives are strong) from its user base means the comparison to the more universally accessible "Internet" requires careful interpretation. Data from Sensor Tower indicates India is a major download market for AI apps, though North America leads in revenue.25
- Chart 3: Leading USA-Based LLM Users
- Meeker's Content: Shows weekly active users of a leading USA-based LLM growing from 0 in October 2022 to 800MM by April 2025. Source: Company disclosures. (Details on Page 55).1
- Narrative: User adoption of flagship AI products like ChatGPT has been explosive, reaching hundreds of millions of weekly active users in a short timeframe.
- Verification/Augmentation: OpenAI CEO Sam Altman confirmed 800MM WAUs for ChatGPT in April 2025.1 This figure is widely cited as evidence of AI's rapid mainstream penetration.26
- Chart 4: Big Six USA Technology Company CapEx
- Meeker's Content: Shows CapEx for Apple, NVIDIA, Microsoft, Alphabet, Amazon (AWS), & Meta rising from $33B in 2014 to $212B in 2024 (+63% Y/Y for the most recent period). Sources: Capital IQ, Morgan Stanley. (Details on Page 97).1
- Narrative: Major technology companies are massively increasing their capital expenditures, a significant portion of which is dedicated to building out AI infrastructure.
- Verification/Augmentation: Capital IQ is a standard source for financial data.28 Morgan Stanley research frequently analyzes tech CapEx trends, noting the surge driven by AI.30 This $212B figure for 2024 reflects a substantial commitment to AI hardware and data centers.
Slide 6: …Charts Paint Thousands of Words… (Summary Charts 2) 1
- Narrative: This slide continues the visual summary, focusing on cost dynamics, competitive landscape, and physical world AI adoption.
- Chart 5: Cost of Key Technologies Relative to Launch Year
- Meeker's Content: Compares cost declines of Electric Power, Computer Memory, and AI Inference over time (indexed to Year 0). AI Inference shows a dramatically faster cost decline. Sources: Richard Hirsh, John McCallum, OpenAI. (Details on Page 138).1
- Narrative: The cost of using AI (inference) is falling at an unprecedented rate, far faster than historical cost declines for other foundational technologies like electricity or computer memory.
- Verification/Augmentation: Historical data on electricity costs 32 and computer memory costs (John McCallum's research) 34 show significant long-term declines. OpenAI and Epoch AI data confirm the rapid decrease in AI inference costs per token.36 This rapid cost reduction is a key enabler of widespread AI adoption and experimentation.
- Chart 6: Leading USA-Based AI LLM Revenue vs. Compute Expense
- Meeker's Content: Shows estimated revenue and compute expense for a leading USA-based AI LLM (OpenAI). 2022: roughly $0 in both. 2023: ~$0.5B revenue against ~$0.5B compute expense. 2024: ~$3.7B revenue against ~$5B compute expense. Source: The Information. (Details on Page 173).1
- Narrative: While revenues for leading AI model providers are growing rapidly, the compute expenses associated with training and running these models are also substantial, leading to significant net losses in the early stages.
- Verification/Augmentation: The Information has reported extensively on OpenAI's financials.38 These figures highlight the capital-intensive nature of developing and scaling frontier AI models. OpenAI's revenue was estimated at $3.7 billion for 2024, with net losses around $5 billion.38
- Chart 7: Leading USA LLMs vs. China LLM Desktop User Share
- Meeker's Content: Shows desktop user share for USA-LLM #1 (OpenAI's ChatGPT) declining from ~75% (2/24) to ~50% (4/25), USA-LLM #2 (likely Google's Gemini or Anthropic's Claude) rising from ~10% to ~21%, and a China LLM (likely DeepSeek) rising from ~0% to ~15%. Source: YipitData. (Details on Page 293).1
- Narrative: The competitive landscape for LLMs is dynamic, with new players, including those from China, rapidly gaining desktop user share.
- Verification/Augmentation: YipitData provides alternative data on user engagement. While OpenAI's ChatGPT has a strong lead, the emergence of competitive models from China like DeepSeek is a significant trend.40 The global AI landscape is becoming more multipolar.
- Chart 8: China vs. USA vs. Rest of World Industrial Robots Installed
- Meeker's Content: Shows industrial robot installations. China: ~50K (2014) to ~290K (2023). Rest of World (excl. China & USA): ~150K to ~200K. USA: ~20K to ~40K. Source: International Federation of Robotics. (Details on Page 289).1
- Narrative: China has become the dominant force in the adoption of industrial robots, significantly outpacing the USA and other regions. This reflects broader trends in automation and manufacturing.
- Verification/Augmentation: The International Federation of Robotics (IFR) is the primary source for global robotics statistics.42 Their 2023 data confirms China's leading position in industrial robot installations, with 276,288 units installed in 2023, representing 51% of global installations.42
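A note on how Chart 5's comparison works: each technology's cost series is rebased to its own launch year (Year 0), so curves that start decades apart can share one axis. A minimal sketch of that normalization, using made-up cost values purely for illustration (the function name is ours, not the report's):

```python
def index_to_launch_year(costs):
    """Rebase a cost series so its launch year (Year 0) equals 100."""
    base = costs[0]
    return [100 * c / base for c in costs]

# Hypothetical per-unit costs for Years 0-3 after launch.
print(index_to_launch_year([8.0, 4.0, 2.0, 1.0]))  # -> [100.0, 50.0, 25.0, 12.5]
```

On this scale, a steeper drop from 100 means a faster cost decline, which is what the chart shows for AI inference relative to electricity and computer memory.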
Slide 7: …Charts Paint Thousands of Words… (Summary Charts 3) 1
- Narrative: The final summary slide focuses on AI's impact on physical services, global user distribution, and the job market.
- Chart 9: Ride Share vs. Autonomous Taxi Provider, San Francisco Operating Zone Market Share
- Meeker's Content: Shows autonomous taxi market share in San Francisco growing from 0% (Aug 2023) to 27% (April 2025), with the two traditional ride-share providers at 34% and 19%, respectively. Source: YipitData. (Details on Page 302).1
- Narrative: Autonomous vehicle technology is rapidly gaining traction in real-world applications, starting to capture significant market share in early adopter cities like San Francisco.
- Verification/Augmentation: YipitData tracks this market. Companies like Waymo (Google's subsidiary) have been expanding their robotaxi services. Reports indicate Waymo achieved significant market share in San Francisco by late 2024, comparable to Lyft in its operating zones.44 The robotaxi market is projected for substantial growth.46
- Chart 10: Leading USA-Based LLM App Users by Region
- Meeker's Content: Shows Monthly Active Users (MAUs) for a leading USA-based LLM app across global regions (Sub-Saharan Africa, South Asia, North America, MENA, LatAm, Europe & Central Asia, East Asia & Pacific) for May 2023 vs. April 2025. Source: Sensor Tower. (Details on Page 315).1
- Narrative: AI application usage is growing across all global regions, indicating widespread international appeal and adoption.
- Verification/Augmentation: Sensor Tower data shows diverse global usage for apps like ChatGPT, with India being a top download market.25 However, China is a notable exclusion for many US-based LLM apps, with local alternatives dominating.49
- Chart 11: USA IT Jobs – AI vs. Non-AI
- Meeker's Content: Shows change in USA IT job postings (indexed to Jan 2018). AI IT Jobs: +448%. Non-AI IT Jobs: -9% by April 2025. Source: University of Maryland's UMD-LinkUp AIMaps. (Details on Page 332).1
- Narrative: The job market is undergoing a significant transformation, with a surge in demand for AI-related skills even as non-AI IT job postings have modestly declined.
- Verification/Augmentation: The UMD-LinkUp AIMaps project tracks AI job trends.50 Their data indicates a strong "ChatGPT effect" with AI job postings ramping up significantly since late 2022, contrasting with declines in general IT job postings.50 Other reports also note growing demand for AI skills.54
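To read Chart 11's indexed figures: a series indexed to Jan 2018 reports percent change against that baseline, so "+448%" means postings are about 5.5x their Jan 2018 level. A small sketch of the conversion (the helper name is ours; the inputs are the slide's headline numbers):

```python
def pct_change_to_multiple(pct_change):
    """Convert a percent change vs. a baseline into a multiple of that baseline."""
    return 1 + pct_change / 100

print(round(pct_change_to_multiple(448), 2))  # AI IT postings: -> 5.48
print(round(pct_change_to_multiple(-9), 2))   # Non-AI IT postings: -> 0.91
```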
Section 1: Seem Like Change Happening Faster Than Ever? Yes, It Is
This section delves into the overarching theme that the pace of technological change, primarily driven by Artificial Intelligence, is accelerating at a rate that is arguably unprecedented in human history.
Subsection 1.1: Overview of Accelerating Change (Meeker's Pages 8-9) 1
The world is indeed transforming at an extraordinary speed, with rapid technological innovation and adoption serving as fundamental drivers of these shifts. This evolution is also mirrored in the changing leadership dynamics among global powers.
A look back at the founding missions of previous internet-era giants provides a stark contrast to today's AI-supercharged environment. In 1998, Google set out "to organize the world's information and make it universally accessible and useful".56 Alibaba, in 1999, aimed "to make it easy to do business anywhere".58 Facebook (now Meta), founded in 2004, initially sought "to give people the power to share and make the world more open and connected".60 Today, the vast repository of organized, connected, and accessible information these companies helped build is being dynamically reshaped and amplified by artificial intelligence, ever-accelerating computing power, and increasingly fluid global capital flows, all contributing to massive, ongoing change.
The juxtaposition of these original missions with the current AI-driven landscape reveals a fundamental transformation. While the Web 2.0 giants focused on mediating human-centric activities—organizing information, facilitating commerce, and connecting people—AI introduces technology not merely as a mediator but as an active, intelligent agent capable of creating, reasoning, and acting with increasing autonomy. Consequently, these original missions are being profoundly re-interpreted, if not superseded. "Organizing information" is evolving into "generating and reasoning with information." "Making business easy" is shifting towards "automating business." "Connecting people" is expanding to "connecting people and intelligent agents." This implies that the very definition of what these technology giants do is in flux. Their established moats, built on network effects and vast data repositories, are now being augmented or challenged by AI's capacity to generate novel value from that data, potentially leading to new forms of platform power or significant disruption.
OpenAI's ChatGPT stands out as "history's biggest 'overnight' success (nine years post-founding)," judging by its user, usage, and monetization metrics.1 Unlike the first wave of the internet revolution, which began primarily in the USA and then gradually diffused globally, ChatGPT achieved worldwide impact almost instantaneously, with user growth occurring simultaneously across most global regions. This rapid global dissemination underscores a key characteristic of the AI era: new breakthroughs can leverage existing global internet infrastructure for immediate, widespread distribution.
Concurrently, both established platform incumbents and emerging challengers are fiercely competing to develop and deploy the next layers of AI infrastructure. This includes creating agentic interfaces, enterprise copilots, real-world autonomous systems, and sovereign AI models. These rapid advancements in artificial intelligence, coupled with developments in compute infrastructure and global connectivity, are fundamentally reshaping how work is performed, how capital is allocated, and how leadership is defined across both corporations and nations.
The technological and geopolitical forces are becoming increasingly intertwined. Andrew Bosworth, Meta Platforms' CTO, has characterized the current AI development landscape as "our space race," specifically referencing China's significant capabilities.62 This highlights a critical understanding: leadership in AI could directly translate into geopolitical leadership, a dynamic that differs from past technological races where geopolitical power often preceded technological dominance. The implications of this race are profound, as the nation or bloc that leads in AI could set global standards, control critical infrastructure, and wield significant economic and strategic influence.18
Despite the "tremendous uncertainty" and the "dangerous and uncertain times" this rapid evolution presents, the report maintains a stance of long-term optimism. This perspective is anchored by a quote attributed to Brian Rogers, former Chairman and Chief Investment Officer of T. Rowe Price: "Statistically speaking, the world doesn't end that often".1 While the exact phrasing is common investment wisdom, it aligns with Rogers' known pragmatic and risk-aware optimism.64 This optimism for AI's future is predicated on several factors: intense competition driving innovation, increasingly accessible compute power, the rapidly rising global adoption of AI-infused technology, and the potential for thoughtful and calculated leadership to foster a degree of trepidation and respect. This, in turn, could lead to a state of "Mutually Assured Deterrence" in the AI sphere.1
The concept of Mutually Assured Deterrence (MAD) in AI, sometimes referred to as Mutual Assured AI Malfunction (MAIM) by experts like Dan Hendrycks, Eric Schmidt, and Alexandr Wang 66, suggests a future where the catastrophic risks of unchecked AI development or one nation achieving unilateral AI dominance could lead to a deterrence posture. However, applying the MAD framework to AI presents unique complexities compared to its nuclear predecessor. Nuclear MAD relied on relatively clear "red lines," verifiable capabilities, and a largely bipolar global order. In contrast, AI development is diffuse, often opaque (particularly within private or state-led research), inherently dual-use, and involves a multitude of state and non-state actors.66 AI "weapons" in the form of algorithms and models can be easily copied or stolen, their destructive potential is less immediately obvious than that of a physical weapon, and attributing AI-driven attacks is significantly more challenging. Therefore, establishing stable deterrence in the AI domain is considerably more complex. The risk of miscalculation, accidental escalation, or a disruptive "first-mover advantage" breakthrough is arguably higher. While MAD offers a conceptual framework for stability, its successful application to AI necessitates unprecedented international cooperation on transparency, verification, and control mechanisms—elements currently underdeveloped amidst intense geopolitical rivalry.18 Thus, any optimism regarding AI MAD must be tempered with a robust understanding of these intricate challenges.
The "magic of watching AI do your work for you" is likened to the transformative experiences of the early days of email and web search, yet AI's impacts are perceived as "better / faster / cheaper" and materializing "even quicker".1 The speculative and frenetic forces of capitalism and creative destruction are described as "tectonic." It is undeniable that, particularly concerning the USA and China and their respective tech powerhouses, "it's 'game on'".1
Subsection 1.2: Technology Compounding – The Numbers Behind Momentum (Meeker's Pages 10-22)
This subsection explores the historical pattern of technological advancements compounding over time to drive significant progress and economic growth, positioning AI as the latest in a series of transformative General Purpose Technologies (GPTs).
Global GDP & Technological Leaps (Meeker's Pages 11-12) 1
The report features a chart illustrating "Global GDP Last 1,000+ Years, per Maddison Project".1 This chart, using a logarithmic scale, depicts the growth of global Gross Domestic Product alongside key technological innovations such as the Printing Press (circa 1440s), Steam Engines (1700s), Electrification (1880s), the Internet (1990s), and now, the emerging AI Era. The data, sourced from Microsoft's "Governing AI" report (May 2023) which in turn uses data from the Maddison Project and Our World in Data 68, establishes a long-term historical context: technological breakthroughs have consistently fueled exponential economic growth.
The report introduces the term "GKS (Gross Knowledge Dollars)," defined as an informal estimate of the potential business value of a specific insight, idea, or proprietary knowledge, reflecting its worth if applied effectively, even if it hasn't yet generated revenue.1 This definition, appearing specific to the cited Microsoft report, contrasts with standard economic measures like GDP, which track realized economic output, or established concepts of capital stock (gross, net, and productive).74 The introduction of GKS subtly shifts the narrative for AI, acknowledging that a significant portion of AI's current perceived economic impact is based on future promise rather than universally measurable current output. This is characteristic of early-stage GPTs, where a "productivity paradox"—an initial lag in measurable productivity despite high investment—is common.75 While AI's long-term economic impact is anticipated to be substantial, aligning with the historical impact of previous GPTs, its current contribution to traditional GDP might be less clear-cut and could be easily overestimated if "potential" value (GKS) is conflated with actualized economic output. This underscores the inherently speculative nature of many current AI valuations and investments.
The following table provides context on the adoption and economic impact timelines for key General Purpose Technologies, illustrating the typical lag between invention and broad economic transformation, a pattern AI might also follow despite its rapid initial adoption.
Table 1: Key General Purpose Technologies and Estimated Time to Economic Impact
| Technology | Approx. Year of Invention/Widespread Adoption | Primary Mode of Distribution | Key Impact on Knowledge Access/Productivity | Key Societal Challenges Introduced | Est. Time Lag to Significant Economic Impact |
| --- | --- | --- | --- | --- | --- |
| Printing Press | ~1440 | Physical | Mass dissemination of written information, literacy expansion | Information control, censorship, religious/political upheaval | Decades to Centuries |
| Steam Engine | ~1712 (Newcomen), ~1776 (Watt) | Mechanical Power | Revolutionized manufacturing, transportation, agriculture (Industrial Rev.) | Urbanization issues, labor displacement, pollution | Decades |
| Electricity | Late 19th Century | Networked Utility | Powered factories, homes, new industries (lighting, motors, communications) | Infrastructure cost, safety standards, monopolization | 20-40 years |
| Internal Combustion Engine / Automobile | Late 19th / Early 20th Century | Personal/Commercial Transport | Transformed transportation, urban sprawl, created new industries (oil, roads) | Pollution, traffic congestion, infrastructure demands | 20-30 years |
| Computer (Mainframe to PC) | Mid-20th Century (Mainframe) to ~1980s (PC) | Digital | Automation of calculation, data processing, rise of information economy | High initial cost, skill gaps, job displacement in clerical work | Decades (Mainframe), 10-20 years (PC) |
| Internet | ~1990s (Commercial/Public) | Digital-Interactive | Global information access, e-commerce, new communication platforms | Misinformation, cybersecurity, digital divide, privacy concerns | 10-15 years |
| Artificial Intelligence (Modern LLMs) | ~2020s (Widespread Generative AI) | Digital-Generative / Embedded | Automation of cognitive tasks, content creation, advanced analytics (Ongoing) | Job displacement, bias, misinformation at scale, ethical dilemmas, AGI risk (Ongoing) | Ongoing (Rapid adoption, economic impact TBD) |
Sources: Broadly based on historical economic analyses of GPTs 75 and specific technology timelines.
This table contextualizes AI's current phase. While AI's adoption speed is remarkable, its broad economic transformation might still follow historical patterns of GPTs, involving initial disruption and a lag before full productivity benefits are realized across the economy.
Computing Cycles Over Time (Meeker's Page 13) 1
The report presents a chart from Morgan Stanley 1 depicting "Computing Cycles Over Time – 1960s-2020s." It illustrates distinct eras: Mainframe (~1M+ units), Minicomputer (~10M+ units), PC (~300M+ units), Desktop Internet (~1B+ units/users), Mobile Internet (~4B+ units), and the current AI Era (projected at "Tens of Billions of Units"). The enabling infrastructure evolved from CPUs to Big Data/Cloud, and now to GPUs for the AI Era. Each cycle dramatically increased the number of connected devices and users, setting the stage for subsequent innovations. The AI era is projected to dwarf previous cycles in sheer scale. This progression is consistent with analyses from firms like Morgan Stanley.77
The projected scale of the AI Era, "Tens of Billions of Units" 1, suggests a significant departure from previous cycles that primarily counted distinct physical user devices. This larger number likely encompasses not just primary user devices (like PCs or smartphones) but also a vast, interconnected network of IoT sensors, embedded AI chips in everyday appliances, vehicles, industrial robots, and myriad other systems. Each of these becomes an "intelligent node" capable of data collection, localized processing, and AI-driven action. This implies that the AI Era is not merely about more powerful user-facing computers; it signifies the ambient distribution of intelligence into the very fabric of our environment and machinery. Such pervasive integration creates a vastly larger attack surface for cybersecurity, introduces more complex data privacy challenges, and enables a far more deeply embedded presence of AI in daily life than any previous computing cycle.
AI Technology Compounding (Meeker's Pages 14-19)
This series of slides details the exponential growth trends in key technological enablers of AI.
- Training Dataset Size (Meeker's Page 15) 1: A chart from Epoch AI shows the number of words in training datasets for key AI models growing at +260% per year from 1950-2025.1 Epoch AI is a recognized source for AI trend data, and their research confirms rapid expansion in the scale of training data used for AI models.79 This explosion in data is a fundamental driver of AI model capability.
- Training Compute (FLOPs) (Meeker's Page 16) 1: Another Epoch AI chart indicates that the computational power (measured in Floating Point Operations, or FLOPs) used for training key AI models has grown at +360% per year from 1950-2025.1 A FLOP is a basic unit of computation representing a single arithmetic operation on floating-point numbers; total FLOPs estimate the computational cost of training or running an AI model. Epoch AI's broader data suggests training compute for notable models has been doubling roughly every six months since 2010, equivalent to an annual growth rate of roughly 4x to 4.6x.79 This massive increase in compute is essential for processing larger datasets and developing more complex model architectures.
- Algorithmic Improvements (Meeker's Page 17) 1: The report highlights a +200% per year effective compute gain due to improved algorithms, based on Epoch AI data from 2014-2023.1 This means smarter algorithms are making AI more efficient, effectively multiplying the power of existing hardware. Epoch AI's research indicates that the compute needed to achieve a given level of performance in language models has been shrinking by a factor of approximately 3x each year due to algorithmic progress.83 This distinction between effective compute gain from algorithms and actual raw compute usage is crucial. While algorithmic efficiency means more can be done with the same or less compute for a specific task, the overall demand for AI capabilities is so high that these efficiency gains are often reinvested into training even larger, more capable models or tackling more complex problems. This phenomenon, akin to Jevons Paradox 84, means that cost savings from algorithmic efficiency do not necessarily translate into reduced overall spending on AI compute. Instead, they enable the AI field to pursue even more ambitious projects, further fueling the demand for hardware and energy, with significant economic and environmental consequences.
- AI Supercomputer Performance (Meeker's Page 18) 1: The performance of leading AI supercomputers (measured in FLOP/s) is shown to be growing at +150% per year from 2019-2025, per Epoch AI.1 This growth is attributed to a 1.6x annual increase in chips per cluster and a 1.6x annual increase in performance per chip. Data from Epoch AI suggests an even faster growth rate for leading AI supercomputer performance, at 2.5x per year (doubling every nine months) since 2019.85 This rapidly advancing hardware capability is critical for supporting the training of cutting-edge AI models.
- Number of New Large-Scale AI Models (Meeker's Page 18) 1: The report shows a +167% per year growth in the number of new large-scale AI models (defined as requiring greater than 10^23 FLOPs for training, though the note on Meeker's page 19 indicates 10^24 FLOPs as of 4/25) released between 2017 and 2024, according to Epoch AI.1 Epoch AI's definition of "large-scale" has evolved, but they confirm an accelerating pace of releases for models exceeding significant compute thresholds (e.g., 10^23 FLOPs or 10^25 FLOPs).80 This proliferation indicates a broadening of access to, and capability in developing, powerful AI systems.
These compounding factors—larger datasets, more raw compute, greater algorithmic efficiency, more powerful supercomputers, and an increasing number of sophisticated models—are not independent. They create a powerful, self-reinforcing flywheel. Advances in one area fuel progress in others, leading to a compounding effect that drives the "unprecedented" pace of AI development. This interconnectedness also implies that any significant bottleneck in one area, such as a slowdown in algorithmic improvement, chip supply constraints, or limitations in energy availability for data centers, could substantially impact the entire ecosystem's trajectory.
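The scaling figures above can be made concrete with a little arithmetic. The sketch below converts the "+X% per year" rates into annual multipliers and doubling times, and estimates a training-compute budget using the common 6 × parameters × tokens approximation from the scaling-law literature; the 70B-parameter / 15T-token example is purely illustrative, not a figure from the report:

```python
import math

def annual_multiplier(pct_per_year: float) -> float:
    """Convert a '+X% per year' growth figure into an annual multiplier."""
    return 1 + pct_per_year / 100

def doubling_time_months(multiplier: float) -> float:
    """Months needed to double at a given annual growth multiplier."""
    return 12 * math.log(2) / math.log(multiplier)

# Headline rates from the Epoch AI charts cited above.
for label, pct in [("training data (words)", 260),
                   ("training compute (FLOPs)", 360),
                   ("supercomputer FLOP/s", 150)]:
    m = annual_multiplier(pct)
    print(f"{label}: x{m:.1f}/yr, doubling every {doubling_time_months(m):.1f} months")

# Common scaling-law heuristic: training FLOPs ~ 6 * parameters * tokens.
# Illustrative numbers only (a hypothetical 70B-parameter model on 15T tokens):
flops = 6 * 70e9 * 15e12
print(f"~{flops:.1e} training FLOPs")  # ~6.3e+24
```

Note that the +150% supercomputer figure implies a doubling time of about nine months, matching the Epoch AI statement cited above, while +360% implies compute doubling roughly every five to six months.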
ChatGPT's Rapid Scale & AI as a Compounder (Meeker's Pages 20-22) 1
The report uses ChatGPT as a prime example of AI's accelerated adoption.
- User, Subscriber, and Revenue Growth (Meeker's Page 20) 1: A chart details ChatGPT's growth from October 2022 to April 2025, showing users reaching 800 million, subscribers 20 million, and estimated annual revenue approaching $4 billion by 2024. This rapid ramp-up is supported by external reports from sources like The Information, which has closely tracked OpenAI's financial and user metrics.38 The 800 million weekly active user figure by April 2025, mentioned by Meeker 1, aligns with statements from OpenAI's CEO and is echoed by market analysts.26
- Annual Searches (Meeker's Page 21) 1: A comparison with Google Search indicates ChatGPT reached 365 billion annual searches 5.5 times faster than Google. While a direct "annual searches" figure for ChatGPT is an estimate derived from daily query volumes (around 1 billion per day according to some estimates 90; Meeker's report cites ~1B messages processed daily as of Dec 2024, against 300M WAUs 26), its daily query volume is substantial. Google Search still handles vastly more queries overall (estimated at over 5 trillion in 2024, or ~14 billion per day).91 The "5.5x faster" claim refers to the time taken to reach a specific search volume milestone, underscoring ChatGPT's rapid user engagement for information-seeking tasks.
- AI as a Compounder (Meeker's Page 22) 1: The text concludes that "AI is a compounder – on internet infrastructure, which allows for wicked-fast adoption of easy-to-use broad-interest services." ChatGPT's success is thus framed as both an AI story and an internet infrastructure story. The internet provided the global reach, established user behaviors (like search and chat), and mature developer ecosystems that AI tools could immediately leverage. ChatGPT did not need to build this foundational layer from scratch; its "extremely easy-to-use / speedy user interface" 1 was critical, but its capacity for rapid global viral adoption was fundamentally dependent on the existing internet rails. This implies that future AI breakthroughs are likely to see even faster adoption if they can effectively tap into established digital infrastructures and ingrained user behaviors. Consequently, control over dominant internet platforms (such as app stores, search engines, and social networks) becomes an increasingly critical strategic lever for the deployment and monetization of new AI services.
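The search-volume comparison above rests on simple annualization of estimated daily query counts. A quick sketch with the numbers cited above (both are third-party estimates, not official disclosures):

```python
# Annualize the cited daily query volumes (third-party estimates).
chatgpt_daily = 1e9    # ~1B ChatGPT messages/day (cited for Dec 2024)
google_daily = 14e9    # ~14B Google searches/day (2024 estimate)

chatgpt_annual = chatgpt_daily * 365
google_annual = google_daily * 365

print(f"ChatGPT: ~{chatgpt_annual / 1e9:.0f}B queries per year")    # ~365B
print(f"Google:  ~{google_annual / 1e12:.1f}T queries per year")    # ~5.1T
print(f"Google still handles ~{google_annual / chatgpt_annual:.0f}x more volume")
```

The ~14x volume gap is why the "5.5x faster" claim is about the speed of reaching a milestone, not about ChatGPT rivaling Google's total query volume today.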
Subsection 1.3: Knowledge Distribution Evolution (Meeker's Pages 23-27)
This subsection traces the evolution of knowledge distribution, positioning AI as the latest paradigm shift following the printing press and the internet.
- Printing Press (Static + Physical) (Meeker's Page 24) 1: The invention of the printing press around 1440 marked an era (1440-1992, per Meeker) of static, physical knowledge distribution. This technology was revolutionary for its time, enabling the mass production and wider dissemination of written materials.
- Internet (Active + Digital) (Meeker's Page 25) 1: The public release of the World Wide Web around 1993 ushered in an era (1993-2021, per Meeker) of active, digital knowledge distribution. The internet allowed for dynamic content, global reach, and interactive access to information.
- Generative AI (Active + Digital + Generative) (Meeker's Page 26) 1: The public launch of ChatGPT in 2022 heralded a new phase (2022+, per Meeker) characterized by active, digital, and generative delivery of knowledge. AI models can now create novel content—text, images, audio, code—based on learned patterns. The slide provides examples of AI's impact, such as 7% of scientific articles published in 2023 showing signs of generative AI involvement and 6.96% of global news articles in mid-2024 being AI-generated.
- Martin H. Fischer Quote (Meeker's Page 27) 1: The subsection concludes with the quote: "Knowledge is a process of piling up facts; wisdom lies in their simplification."
The transition from "Active + Digital" distribution (characteristic of the internet) to "Active + Digital + Generative" distribution (characteristic of modern AI) is particularly profound. The printing press and the internet primarily served to distribute human-created knowledge. While verification and sourcing could be challenging, the bedrock was human authorship. Generative AI, conversely, creates new content that can be indistinguishable from human output 1, and is known to sometimes "hallucinate" or fabricate information. This fundamentally blurs the lines of authorship, originality, and factual accuracy in ways previously unseen. The example of AI generating statistics about AI-generated content 1 is itself an illustration of this recursive potential. The Fischer quote regarding "wisdom lies in their simplification" 1 takes on an ironic tone in this context. While AI can simplify complex topics, its capacity to also generate plausible-sounding misinformation means that achieving true simplification—and thus, wisdom—becomes a more complex task for the end-user. Society now faces a significant challenge in developing new forms of literacy and robust verification mechanisms to navigate a world where a substantial and growing portion of "knowledge" is machine-generated. This has far-reaching implications for trust in information, educational practices, journalism, and the integrity of scientific research. The very nature of "truth" and "fact" becomes more contested in an environment suffused with generative content.
The following table provides a comparative overview of these knowledge distribution technologies:
Table 2: Evolution of Knowledge Distribution Technologies
| Technology | Approx. Year of Impact | Primary Mode of Distribution | Key Impact on Knowledge Access | Key Societal Challenges Introduced |
|---|---|---|---|---|
| Printing Press | ~1440 | Static + Physical | Mass production of texts, increased literacy, democratization of knowledge (limited by access to printed material) | Control of information by authorities, censorship, religious/political schisms |
| Internet | ~1993 (Public WWW) | Active + Digital | Instant global access to vast information, interactive communication, user-generated content | Information overload, misinformation/disinformation, digital divide, privacy, cybersecurity threats |
| Generative AI | ~2022 (ChatGPT Launch) | Active + Digital + Generative | Automated content creation, personalized information synthesis, new forms of human-machine collaboration | Authorship ambiguity, deepfakes, algorithmic bias, potential for mass manipulation, job displacement in creative/knowledge industries, epistemic crises |
Sources: Based on Meeker's report 1 and general historical understanding of these technologies.
Subsection 1.4: AI Milestones & Future Projections (Meeker's Pages 28-36)
This subsection reviews the historical development of AI and presents AI-generated forecasts of its future capabilities.
- AI Milestone Timelines (1950-2022 and 2023-2025) (Meeker's Pages 28-30) 1: The report presents timelines detailing key AI developments. The 1950-2022 timeline 1 includes foundational moments like Alan Turing's Turing Test (1950), the Dartmouth Conference coining "Artificial Intelligence" (1956), the creation of early learning programs like Arthur Samuel's checkers player (1962), pioneering robots like Shakey (1966), the "AI Winter" (1967-1996) where progress slowed, IBM's Deep Blue defeating Garry Kasparov (1997), the launch of Roomba (2002), Stanford's Stanley winning the DARPA Grand Challenge (2005), Apple's acquisition of Siri (2010), the Eugene Goostman chatbot passing a version of the Turing Test (2014), and OpenAI's releases of GPT-1 (2018), GPT-3 (2020), and ChatGPT (November 2022).
The 2023-2025 timeline 1 highlights more recent rapid advancements, including OpenAI's GPT-4 (March 2023), Google's Bard (March 2023), Meta's Llama 3 (April 2024), OpenAI's GPT-4o (May 2024), Apple Intelligence (July 2024), Alibaba's Qwen2.5-Max (January 2025), and ChatGPT reaching 800 million weekly users (April 2025). These timelines collectively illustrate a long history of AI development, marked by periods of intense progress ("AI summers") and relative stagnation ("AI winters"), with the current period representing an unprecedented acceleration. The milestones listed are generally accurate and align with well-documented events in AI history.
- AI Capabilities (Today, 2030, 2035 per ChatGPT) (Meeker's Pages 31-36) 1: The report includes lists of AI capabilities—current, 5-year outlook (circa 2030), and 10-year outlook (circa 2035)—as generated by ChatGPT 4o.1
- Today (Q2:25) 1: Capabilities include writing/editing, summarizing complex material, tutoring, brainstorming, automating repetitive work, roleplaying, connecting to tools (APIs), offering therapy/companionship, helping find purpose, and organizing life.
- Circa 2030 (5-Year Outlook) 1: Predictions include generating human-level text/code/logic, creating full-length films/games, understanding/speaking like a human (emotionally aware), powering advanced personal assistants, operating humanlike robots, running autonomous customer service/sales, personalizing digital lives, building/running autonomous businesses, driving autonomous scientific discovery, and collaborating creatively.
- Circa 2035 (10-Year Outlook) 1: Predictions include conducting scientific research autonomously, designing advanced technologies (materials, biotech), simulating human-like minds (digital personas with memory/emotion), operating autonomous companies, performing complex physical tasks, coordinating global systems (logistics, energy), modeling full biological systems, offering expert-level decisions (legal, medical), shaping public debate/policy, and building immersive virtual worlds from text prompts. These AI-generated predictions offer a glimpse into the technology's perceived trajectory. Many of the 5-year predictions are active areas of research and development. The 10-year predictions are more speculative but align with long-term AGI aspirations.
It is noteworthy that while the historical timeline explicitly acknowledges the "AI Winter" 1—a period of reduced funding and slower progress—the future projections generated by ChatGPT inherently assume continuous and largely unabated advancement. Past AI progress has been cyclical, with periods of intense hype often followed by disillusionment when promised breakthroughs did not materialize as quickly as anticipated. Current AI progress is heavily reliant on massive computational resources and vast datasets, which are costly and have significant environmental implications. Should current models encounter scaling plateaus, economic conditions shift unfavorably, or societal backlash against AI-related harms intensify, another "AI slowdown" or a nuanced "winter" is conceivable. While perhaps not as severe as past winters due to broader current adoption, this possibility is often overlooked in purely optimistic, AI-generated forecasts. A balanced perspective should consider potential headwinds and the possibility that progress might slow or shift in unforeseen directions.
Furthermore, the predictions of achieving "human-level" capabilities 1 warrant careful interpretation. "Human-level" performance is typically benchmarked against specific, often narrowly defined tasks, such as passing academic exams or coding competitions. However, human intelligence is far more multifaceted, encompassing common sense, emotional intelligence, nuanced adaptability, and embodied experience—qualities that current AI systems largely lack. As AI achieves "human-level" performance on one set of benchmarks, the definition of true, comprehensive human-level intelligence often evolves to incorporate these harder-to-quantify aspects. Thus, achieving "human-level" AI is not a fixed goalpost.
While AI will undoubtedly surpass human performance in many specific domains, the broader aspiration of Artificial General Intelligence (AGI) matching the full spectrum of human intellect remains a complex and evolving challenge. The AI-generated predictions should be viewed in this light: AI is set to become vastly more capable, but creating "human-like minds" involves far more than excelling at discrete tasks.
Subsection 1.5: AI Development Trending = Unprecedented (Meeker's Pages 37-49)
This subsection details the unprecedented trends in AI development, focusing on the shift in research leadership, developer ecosystem growth, patent activity, and performance milestones.
- ML Model Origins (Industry vs. Academia) (Meeker's Page 38) 1: A chart from the Stanford HAI AI Index Report shows that around 2015, industry surpassed academia as the primary source of notable machine learning models. In 2023, industry produced 55 notable models, while academia produced zero.1 This shift is attributed to industry's superior access to large datasets, massive compute resources, and significant financial capital. The Stanford HAI AI Index Report is a leading source for such data and confirms this widely acknowledged trend.18
- Developer Ecosystem Growth (Meeker's Pages 39-40) 1:
- NVIDIA: The number of developers in NVIDIA's ecosystem is reported to have grown 6x to 6 million over seven years (implicitly 2018-2025).1 NVIDIA's CUDA platform, pivotal for GPU-accelerated computing since its 2007 launch 21, underpins this growth. The company consistently highlights its expanding developer base as a key indicator of AI adoption.22
- Google Gemini: The ecosystem of developers building with Google's Gemini models reportedly grew 5x year-over-year to 7 million by May 2025.1 This rapid expansion, often announced at developer conferences like Google I/O, signifies strong interest in new foundation models. The rapid expansion of AI developer ecosystems around major platforms like NVIDIA and Google is a critical leading indicator. Historically, the platform that successfully attracts and retains the largest and most active developer community often becomes the dominant standard (e.g., Windows in PCs, iOS and Android in mobile). Developers create applications and tools that generate network effects, making the platform increasingly valuable and difficult to displace. In the AI context, this translates to developers building novel AI-powered services, fine-tuning models for specific applications, and creating new tools on particular APIs or hardware stacks. Thus, the ongoing race for AI supremacy is not solely about possessing the most advanced model; it is equally about cultivating the most vibrant and productive developer ecosystem. Tracking developer adoption rates and the proliferation of tools around different AI platforms (NVIDIA, Google, OpenAI, Hugging Face, etc.) offers crucial clues about future market leadership.
- US Computing-Related Patents (Meeker's Page 41) 1: Patent activity for computing-related inventions in the USA shows significant spikes following the Netscape IPO (1995) and, more recently and sharply, after the public launch of ChatGPT (2022).1 This suggests that major technological breakthroughs and their commercial validation often trigger waves of innovation and intellectual property generation.
- AI Performance Milestones (Meeker's Pages 42-44) 1:
- MMLU Benchmark: AI systems surpassed the human baseline (89.8%) on the Massive Multitask Language Understanding (MMLU) benchmark in 2024, achieving 92.3% accuracy.1 The Stanford HAI AI Index tracks such benchmark performances.18
- Turing Test: In a March 2025 study by Cameron Jones and Benjamin Bergen, 73% of human testers mistook responses from GPT-4.5 for human-generated responses in a Turing test setting.1 An example conversation illustrates the increasing realism of AI interactions.1 While AI's achievements on benchmarks like MMLU and its ability to pass Turing-like tests are impressive indicators of progress, it's important to distinguish this from solving broad human problems reliably. Benchmarks are, by nature, specific and sometimes narrow measures of performance. Excelling on a benchmark or in a conversational test does not equate to possessing general intelligence, robust common sense, or the nuanced ability to navigate complex, open-ended real-world challenges without generating unintended negative consequences. There's a risk that an overemphasis on benchmark performance can lead to "teaching to the test," where models are optimized for specific metrics rather than developing truly generalizable and robust intelligence. The true measure of AI's "unprecedented development" will ultimately be its capacity to consistently and safely deliver tangible value in diverse real-world applications, rather than solely its scores on standardized tests. A significant gap can exist between "superhuman" performance on a defined task and the attainment of "superhuman" wisdom or practical utility.
- Realistic Multimedia Generation (Meeker's Pages 45-48) 1:
- Image Generation: The evolution of Midjourney's models (v1 in Feb 2022 vs. v7 in April 2025) demonstrates a dramatic improvement in the realism of AI-generated images.1 A 2024 comparison of an AI-generated image (from StyleGAN2) versus a real photograph further underscores this progress.1
- Audio Generation & Translation: ElevenLabs, a leader in AI voice generation, has seen its global site visits grow significantly, reaching approximately 20 million by April 2025.1 Their technology is used to generate vast amounts of audio content and is adopted by a majority of Fortune 500 companies. Spotify is leveraging AI for realistic audio translation, with CEO Daniel Ek highlighting its potential to break down language barriers for creators and knowledge sharing.1
- Emerging AI Applications (Meeker's Page 49) 1: Citing Morgan Stanley (November 2024), the report lists several accelerating AI applications: Protein Folding (DeepMind's AlphaFold), Cancer Detection (Microsoft & Paige), Robotics (Google's LLM-instructed robots), Agentic AI (Amazon's task-completion tools), Universal Translation (Meta's multimodal model), and Digital Video Creation (Channel 1 AI's generative newscasts).1 These examples illustrate AI's expansion beyond text and image processing into solving complex problems across diverse scientific and industrial domains.
Subsection 1.6: AI Benefits & Risks (Meeker's Pages 50-52)
This subsection addresses the dual nature of artificial intelligence, acknowledging its immense potential benefits alongside significant inherent risks.
- Benefits & Risks Overview (Meeker's Page 51) 1: The report references Stuart Russell and Peter Norvig's seminal work, "Artificial Intelligence: A Modern Approach," to outline AI's potential upsides: freeing humanity from menial work, dramatically increasing production, and accelerating scientific research, which could lead to cures for diseases and solutions for climate change. Demis Hassabis, CEO of Google DeepMind, is quoted: "First we solve AI, then use AI to solve everything else".1 However, the same source highlights concurrent risks, including lethal autonomous weapons, surveillance and persuasion capabilities, biased decision-making, adverse impacts on employment, safety issues in critical applications, and cybersecurity threats.
The quote from Demis Hassabis, "First we solve AI, then use AI to solve everything else," encapsulates a particular strand of technological optimism. However, this perspective presents a potential paradox. "Solving AI," often interpreted as achieving Artificial General Intelligence (AGI) or at least highly capable, general-purpose AI, is a long-term endeavor. The very process of attempting to solve AI is already unleashing powerful, specialized AI tools that have significant and immediate real-world impacts and risks, as detailed by Russell and Norvig.1 Society cannot afford to wait until AGI is "solved" before addressing the ethical, societal, and safety challenges posed by current and near-term AI systems. The risks are not merely a post-AGI concern; they are concurrent with the development process itself. The pursuit of AGI inherently generates intermediate AI capabilities that demand immediate governance frameworks and mitigation strategies. The notion of "solving AI" in isolation before deploying its benefits to address other global problems is a simplification that overlooks the continuous, iterative nature of AI development and its ongoing, pervasive societal impact.
Furthermore, there's an observable asymmetry in the deployment timelines of AI's benefits versus its risks. Many of the most transformative benefits, such as discovering cures for complex diseases or fully addressing climate change, may rely on AGI or near-AGI capabilities that are still in the research or hypothetical stage. Conversely, many of the risks—such as algorithmic bias in financial or judicial systems, the misuse of facial recognition for surveillance, or the generation of sophisticated disinformation—are associated with the deployment of current-generation AI systems. This means some risks manifest much earlier in the AI development and deployment cycle than some of the most profound and widely anticipated benefits. Consequently, society must actively manage the immediate and tangible risks of AI today, even while vigorously pursuing its long-term, transformative potential. A failure to address current harms effectively could undermine public trust and support for continued AI development, potentially delaying or even derailing the achievement of those significant future benefits.
- Stephen Hawking Quote (Meeker's Page 52) 1: The section concludes with the poignant warning from Stephen Hawking: "Success in creating AI could be the biggest event in the history of our civilization. But it could also be the last – unless we learn how to avoid the risks".1 This underscores the existential importance of navigating AI development responsibly.
Section 2: AI User + Usage + CapEx Growth = Unprecedented
This section details the extraordinary and historically unmatched growth rates observed in AI user adoption, platform usage, and capital expenditure dedicated to AI development and infrastructure.
Subsection 2.1: Consumer / User AI Adoption = Unprecedented (Meeker's Pages 55-60)
The adoption of AI by consumers and general users has occurred at a pace and scale that surpasses previous technological revolutions.
- ChatGPT User Growth (Meeker's Pages 55 & 56) 1:
OpenAI's ChatGPT serves as a primary indicator of this trend. The platform's weekly active users (WAUs) grew from zero at its public launch in November 2022 to an astonishing 800 million by April 2025; measured from the 100 million WAU milestone OpenAI reported in late 2023, that is an eightfold increase in roughly seventeen months.1 This figure was confirmed by OpenAI CEO Sam Altman in an April 2025 TED Talk.1 External market analysis from sources like DemandSage and ExplodingTopics corroborates this trajectory of rapid WAU growth, with some variations in specific monthly figures but confirming the overall trend and magnitude.26 For example, ExplodingTopics reported 400 million WAUs by February 2025.26
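Taking the report's framing of an eightfold WAU increase over roughly seventeen months (about 100 million to 800 million), the implied compound monthly growth rate is easy to sketch; the exact window is an assumption for illustration:

```python
# Implied compound monthly growth for an eightfold WAU increase
# (roughly 100M -> 800M) over ~17 months. Window is an assumption.
start_waus, end_waus, months = 100e6, 800e6, 17
monthly_multiplier = (end_waus / start_waus) ** (1 / months)
print(f"~{(monthly_multiplier - 1) * 100:.0f}% compound WAU growth per month")  # ~13%
```

Sustaining roughly 13% compound growth per month for well over a year is what separates ChatGPT's ramp from even the fastest prior consumer platforms.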
The emphasis on WAUs for ChatGPT is significant. While Monthly Active Users (MAUs) are a standard metric for platform size and user stickiness, WAUs typically indicate a higher frequency of engagement, characteristic of tools that become integrated into daily or weekly routines. For a novel technology like generative AI, achieving such high WAUs so quickly 1 signals intense initial interest and widespread exploration by users. However, it is also true that early-stage WAUs can exhibit more volatility compared to MAUs. Initial hype can drive frequent usage, which might subsequently decline if the technology does not sustain its practical utility for a broad range of daily tasks for all users. Therefore, while 800 million WAUs is a remarkable figure, ongoing monitoring of the WAU/MAU ratio and user retention rates (as discussed later, see Page 85 1) will be crucial to determine if this high-frequency engagement is a sustainable trend or an initial peak driven by curiosity. The long-term value will ultimately depend on converting these active trial users into deeply embedded, regular consumers of the technology.
- AI Global Adoption (ChatGPT vs. Internet – % Users Outside North America) (Meeker's Page 57) 1:
The global dissemination of AI tools like ChatGPT, particularly outside of North America, has been markedly faster than the internet's initial international expansion. The report indicates that the internet took approximately 23 years to reach a point where 90% of its users were outside North America. In stark contrast, the ChatGPT app achieved this same 90% international user share in merely 3 years.1 This data, sourced from the UN/ITU for internet statistics 15 and Sensor Tower for ChatGPT app data 1, underscores AI's immediate worldwide impact.
However, this rapid "global" adoption figure for ChatGPT requires careful contextualization. As Meeker's report notes, the ChatGPT app was not available in major markets such as China and Russia as of May 2025.1 This exclusion significantly affects the denominator used for calculating "global" user share. China, in particular, has a burgeoning ecosystem of local AI chatbots (e.g., DeepSeek, Ernie Bot, as detailed in Section 5), which are rapidly gaining users domestically. If these local alternatives were factored into a truly comprehensive "global LLM user" metric, the proportion of users of "USA-based LLMs" outside North America might appear different, or the overall "global AI user" distribution would certainly be more diversified. Therefore, a direct comparison with the "Internet"—which eventually achieved near-universal availability, including in China—is not perfectly analogous if key AI markets are excluded from one side of the comparison. While ChatGPT's international reach is undeniably swift, the narrative of its "global adoption" must acknowledge these significant geopolitical and market access segmentations. The AI landscape is visibly fragmenting along regional lines, with distinct technological ecosystems emerging, most notably in China. This makes direct global comparisons with past, more universally accessible technologies like the open internet increasingly complex.
Sensor Tower's 2024 AI Apps Market Insights report further nuances this picture, highlighting India as the largest market for AI app downloads globally, while North America continues to dominate in terms of revenue generation.25 Data from ExplodingTopics 26 and FirstPageSage 48 on combined web and app usage for ChatGPT also list the US and India as the top two countries by user share, followed by a large, undifferentiated "Others" category, which supports the notion of widespread international use but also points to a concentration in specific large markets.
The following table provides a quantitative comparison of adoption speeds, further illustrating the accelerated pace of AI tools.
Table 3: Comparative Global Adoption Speed: Internet vs. Leading AI Chatbots and Other Tech Platforms
| Platform/Technology | Time to 100MM Users | % Users Outside N. America (Internet @ Yr 23 vs. ChatGPT @ Yr 3) | Top 3 Countries by User Share (Approx. for AI Chatbots) |
|---|---|---|---|
| ChatGPT | 0.2 Years 1 | ~90% @ Year 3 (App) 1 | 1. USA (~15-16%), 2. India (~9-16%), 3. Brazil/Kenya/Germany (~3-6%) |
| TikTok | 0.9 Years 1 | N/A | China (Douyin), USA, Indonesia (Varies by source/time) |
| Instagram | 2.5 Years 1 | N/A | India, USA, Brazil (Varies by source/time) |
| | 3.5 Years 1 | N/A | India, Brazil, Indonesia (Varies by source/time) |
| Internet | Many Years | ~90% @ Year 23 1 | Global, with early dominance in N. America |
| DeepSeek (App) | Rapid (Launched 2025) | N/A | 1. China (~34%), 2. Russia (~9%), 3. India (~7%) |
Note: N/A indicates data not directly comparable or provided in the same format in the source material. Country shares for AI chatbots are estimates based on available data and may vary. ChatGPT app is not available in China/Russia.
- Years to Reach 100MM Users (Meeker's Page 58) 1:
The report highlights ChatGPT's exceptional speed in acquiring users, reaching 100 million users in just 0.2 years. This is significantly faster than other prominent technology platforms: TikTok took 0.9 years, Instagram 2.5 years, and Netflix (streaming) 10.3 years to achieve the same milestone.1 This rapid scaling is a consistent theme and is well-documented, with ChatGPT hitting 100 million MAUs by January 2023.26
- Days to Reach 1MM Customers/Users (Meeker's Page 59) 1:
Looking at the initial 1 million user mark, ChatGPT's acceleration is even more striking. It took only 5 days to reach 1 million users, compared to 74 days for the iPhone, 1,680 days for TiVo, and approximately 2,500 days for the Ford Model T.1 The report notes that ChatGPT was launched as a free research preview, which undoubtedly lowered the barrier to trial and contributed to this unprecedented initial adoption velocity.
- Years to 50% Household Penetration in USA (Meeker's Page 60) 1:
Projecting forward, the report suggests that the "AI Era" might continue the trend of increasingly compressed adoption cycles for major technologies in US households. Referencing Morgan Stanley data, it compares the time taken to reach 50% household penetration: Second Industrial Revolution (42 years), PC Era (25 years), Desktop Internet Era (20 years), and Mobile Internet Era (12 years). The report speculatively suggests the AI Era might achieve this in as little as 3 years, extrapolating the pattern of each era's adoption cycle compressing sharply relative to the last.1 Morgan Stanley has indeed published research on AI adoption curves that support the notion of accelerated uptake.77
The "free" access model for ChatGPT's initial launch was a significant catalyst for its rapid early user acquisition.1 However, this rapid trial phase does not automatically translate into deep, sustained, or monetized adoption for all users. Furthermore, the concept of "AI adoption" for the projected 50% household penetration 1 is multifaceted and less clearly defined than for previous technologies. Does it imply 50% of households actively using a specific AI chatbot daily, interacting with AI-powered features embedded in existing products (like search engines or smart devices), or perhaps owning dedicated AI-specific hardware? Unlike "PC adoption" (owning a personal computer) or "internet adoption" (having an internet service subscription), AI is increasingly becoming an embedded layer within various technologies, rather than always being a distinct, standalone product. Therefore, while the speed metrics are undeniably impressive, the pathway to achieving 50% meaningful and sustained household engagement with AI might be more complex than simply halving the adoption cycle times of previous technologies, especially when considering long-term monetization strategies and the diverse ways in which AI will manifest in consumer lives.
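As a sanity check on the compression pattern, the durations cited above can be extrapolated with a geometric-mean ratio; the method (and therefore the projection) is our own rough assumption, not Morgan Stanley's:

```python
import math

# Years to 50% US household penetration, per the Morgan Stanley figures
# cited above; the geometric-mean extrapolation is our own rough method.
eras = [("2nd Industrial Revolution", 42), ("PC", 25),
        ("Desktop Internet", 20), ("Mobile Internet", 12)]

ratios = [later / earlier for (_, earlier), (_, later) in zip(eras, eras[1:])]
mean_ratio = math.prod(ratios) ** (1 / len(ratios))   # ~0.66 per era
projected_ai_era = eras[-1][1] * mean_ratio           # ~8 years
print(f"Average compression per era: {mean_ratio:.2f}")
print(f"Naive projection for the AI era: ~{projected_ai_era:.0f} years")
```

By this naive extrapolation the AI era would take roughly 8 years to reach 50% household penetration, which underscores how aggressive the report's speculative 3-year figure is relative to the historical trend alone.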
Subsection 2.2: Technology Ecosystem AI Adoption = Impressive (Meeker's Pages 61-62)
The growth in AI is not limited to end-user applications; the underlying technology ecosystem that supports AI development and deployment is also expanding at an impressive rate.
- NVIDIA Computing Ecosystem Growth (Meeker's Page 62) 1:
Data from NVIDIA illustrates this trend within its computing ecosystem between 2021 and 2025. The number of AI startups leveraging NVIDIA's platform grew by 3.9 times, from 7,000 to 27,000. The number of applications using NVIDIA GPUs increased by 2.4 times, from 1,700 to 4,000. Most notably, the number of developers within the NVIDIA ecosystem expanded by 2.4 times, from 2.5 million to 6 million.1 An earlier chart in Meeker's report (Slide 5) also highlights this 6 million developer figure by 2025.1 This rapid expansion indicates robust activity in building and utilizing AI tools. NVIDIA consistently highlights the growth of its developer program and AI startup engagements in its financial communications, underscoring its central role in providing the hardware and software stack essential for AI development.22
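The growth multiples above are easy to recompute from the raw counts the report provides; a quick sketch:

```python
# NVIDIA ecosystem counts, 2021 -> 2025, as cited in the report.
metrics = {
    "AI startups": (7_000, 27_000),
    "GPU applications": (1_700, 4_000),
    "developers": (2_500_000, 6_000_000),
}
for name, (start, end) in metrics.items():
    print(f"{name}: {end / start:.1f}x")
# AI startups: 3.9x, GPU applications: 2.4x, developers: 2.4x
```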
This growth in developers, AI-focused startups, and GPU-accelerated applications signifies a maturing "supply side" for AI solutions. While consumer adoption represents the demand side, the expansion of ecosystems like NVIDIA's (and similarly, Google's Gemini developer community, as noted on Meeker's page 40 1) demonstrates that the tools, talent pool, and commercial entities required to build and deploy sophisticated AI solutions are also scaling rapidly. This creates a virtuous cycle: an increase in available tools and skilled talent leads to the development of more innovative AI applications, which, in turn, can drive further consumer and enterprise adoption. This robust supply-side infrastructure is critical for translating the theoretical potential of AI into tangible real-world products and services.
Subsection 2.3: Tech Incumbent AI Adoption = Top Priority (Meeker's Pages 63-67)
Major established technology companies are not only participating in the AI revolution but are making AI adoption and leadership a paramount strategic priority.
- 'AI' Mentions in Earnings Transcripts (Meeker's Page 64) 1:
An analysis of corporate earnings call transcripts from Q1 2020 to Q1 2024 by Uptrends shows a significant volume of "AI" mentions by leading technology companies. NVIDIA (1,061 mentions), C3.ai (1,057), Baidu (995), Google (568), and Meta (475) are among those most frequently discussing AI, indicating its high strategic importance.1 This use of keyword tracking in financial communications is a common proxy for gauging strategic focus areas.
- CEO Quotes on AI Focus (Meeker's Pages 65-67) 1:
The report compiles statements from prominent tech CEOs, all underscoring AI's transformative impact and central role in their company strategies:
- Amazon CEO Andy Jassy (April 2025): Envisions generative AI reinventing customer experiences and transforming numerous sectors, from coding and search to healthcare and robotics.1
- Google CEO Sundar Pichai (April 2025): Describes AI as the most important way to advance Google's mission and views the opportunity with AI as immense.1
- Duolingo Co-Founder & CEO Luis von Ahn (May 2025): Highlights GenAI's role in data creation, new feature development, and company-wide efficiencies, citing an AI-driven curriculum development for Duolingo Chess.1
- xAI Founder & CEO Elon Musk (May 2025): Stresses the importance of programming AI with "truth-seeking values" for AI safety and aims for a "maximally truth-seeking AI" with Grok.1
- Roblox Co-Founder & CEO David Baszucki (May 2025): Views AI as a "human acceleration tool" where the combination of people and the AI they use will define overall output.1
- NVIDIA Co-Founder & CEO Jensen Huang (May 2025): Predicts AI integration into "everything" within a decade, describing AI data centers as "AI factories" producing valuable intelligence.1
These quotes, verifiable against public company disclosures, consistently reflect the strategic pivot towards AI across the tech industry.
The intense focus on AI by these incumbents reflects both offensive and defensive strategic imperatives. Offensively, AI offers avenues to create novel products, enter new market segments, and significantly enhance existing services. Defensively, robust AI adoption is crucial to avoid disruption by agile, AI-native startups or by competitors who more effectively leverage AI's capabilities. The high frequency of "AI" mentions in earnings calls 1 and emphatic CEO pronouncements 1 serve as strong signals to investors and the broader market that these companies are committed to leading, or at least not being left behind, in the AI transformation. This environment creates an "AI mandate" within these organizations, compelling significant resource allocation, influencing M&A strategies, and fueling intense competition for AI talent. Such an "AI arms race" extends beyond mere technology development to encompass strategic messaging and talent acquisition, potentially driving up operational costs and fostering a high-stakes atmosphere where falling behind is perceived as an existential threat.
Subsection 2.4: 'Traditional' Enterprise AI Adoption = Rising Priority (Meeker's Pages 68-75)
The strategic importance of AI is extending beyond the technology sector, with traditional enterprises increasingly recognizing AI as a rising priority for their operations and growth.
- S&P 500 AI Mentions in Earnings Calls (Meeker's Page 69) 1:
Data from Goldman Sachs Research indicates that in Q4 2024, approximately 50% of S&P 500 companies mentioned "AI" in their quarterly earnings calls.1 This figure represents a significant increase from previous years, signaling growing mainstream business attention and investment in AI capabilities.
- GenAI Improvements Targeted by Global Enterprises (Meeker's Page 70) 1:
A 2024 Morgan Stanley survey of global enterprises revealed that their primary targets for Generative AI improvements over the next two years are largely revenue-focused. Key areas include enhancing Production/Output, Customer Service, and Sales Productivity, rather than solely focusing on cost reduction.1 This suggests that enterprises view AI strategically as a tool for growth and operational efficiency in core business functions. Morgan Stanley's AlphaWise surveys consistently provide such insights into enterprise AI adoption drivers.30
- Global CMO GenAI Adoption Survey (Meeker's Page 71) 1:
Marketing departments are among the early adopters of AI in traditional enterprises. A 2024 Morgan Stanley survey of global Chief Marketing Officers (CMOs) found that approximately 75% of CMOs were already using or actively testing GenAI tools for marketing activities.1
- Enterprise Case Studies (Meeker's Pages 72-75) 1:
The report provides several case studies illustrating tangible AI adoption and impact in diverse non-tech sectors:
- Bank of America – Erica Virtual Assistant (Launched June 2018) 1: By February 2025, Erica had handled 2.5 billion client interactions, providing real-time financial insights and assistance.
- JP Morgan – End-to-End AI Modernization (Initiated 2020) 1: The company is leveraging AI/ML to drive value through both revenue generation and improvements in cost and risk efficiencies.
- Kaiser Permanente – Multimodal Ambient AI Scribe (Launched October 2023) 1: By December 2024, approximately 7,000 physicians were using the AI scribe, which had facilitated over 2.5 million patient visit documentations, reducing administrative burden.
- Yum! Brands – Byte by Yum! (Launched February 2025) 1: Within its first month, 25,000 Yum! restaurants (including KFC, Taco Bell, Pizza Hut) were using at least one product from the Byte by Yum! AI-driven restaurant technology platform, designed to optimize store operations.
The focus of traditional enterprises on GenAI improvements for tangible outcomes like production enhancement, customer service optimization, and sales productivity gains 1 signals a pragmatic approach. Unlike some technology companies that might invest heavily in foundational AI research, traditional enterprises are typically more concentrated on applying existing AI technologies to solve specific business problems and achieve a measurable return on investment (ROI). The case studies presented 1 consistently highlight applications that deliver clear efficiency improvements or enhance customer experiences. The pronounced emphasis on "revenue-focused" improvements over purely "cost-focused" ones 1 indicates a strategic perception of AI as a growth engine, albeit one that must rigorously prove its value. Consequently, for AI solution providers targeting these traditional enterprise sectors, the ability to demonstrate unambiguous ROI and offer use cases that align directly with core business objectives—such as productivity, customer engagement, and revenue growth—is of paramount importance. The adoption cycle in these industries will likely be governed more by proven, quantifiable value rather than by technological novelty alone.
Subsection 2.5: Education / Government / Research AI Adoption = Rising Priority (Meeker's Pages 76-80)
Public sector entities, including educational institutions, government agencies, and research organizations, are increasingly prioritizing the adoption and integration of AI.
- Education & Government AI Integrations (Meeker's Page 77) 1:
Several initiatives highlight this trend: Arizona State University's 'AI Acceleration' program (August 2023), a five-year AI research and literacy partnership involving Oxford University and OpenAI (March 2025), the University of Michigan's similar partnership with OpenAI (March 2025), the launch of ChatGPT Gov tailored for USA federal agencies (January 2025), and AI-focused partnerships involving USA National Laboratories for nuclear, cybersecurity, and scientific breakthroughs (January 2025).1 These examples demonstrate a commitment to leveraging AI for research advancement, educational enhancement, and operational improvements within the public and academic spheres.
- Sovereign AI Policies & NVIDIA Partners (Meeker's Page 78) 1:
Nations are increasingly developing "Sovereign AI" policies, aiming to build domestic AI capabilities and infrastructure. The report highlights NVIDIA's role in partnering with various countries—including France (Scaleway), Spain (Barcelona Supercomputing Center), Ecuador (Telconet), Switzerland (Swisscom Group), Japan (AIST), Vietnam (FPT Smart Cloud), and Singapore (Singtel)—to support these national AI ambitions.1 NVIDIA CEO Jensen Huang is quoted likening national investments in AI infrastructure to historical investments in electricity and the internet.
The rise of "Sovereign AI" initiatives reflects a new dimension of geopolitical competition and a potential trend towards technological decoupling. Driven by the strategic risks of technological dependence, as underscored by the US-China tech rivalry 1, nations increasingly view AI as critical national infrastructure. These initiatives aim to ensure national control over AI development, data governance, and deployment, motivated by economic competitiveness, national security imperatives, and data privacy concerns. This global push for AI sovereignty could, however, lead to a fragmentation of the global AI landscape, potentially creating a "splinternet" effect for AI, with different national or regional AI ecosystems developing under varying standards, regulations, and technological priorities. While this might hinder seamless international collaboration, it also presents significant opportunities for technology providers like NVIDIA that can supply the tools and expertise necessary for nations to build their own AI capabilities.1
- FDA-Approved AI Medical Devices & NIH AI Budget (Meeker's Page 79) 1:
The healthcare sector, despite being heavily regulated, is witnessing a significant ramp-up in AI adoption. The number of new AI-enabled medical devices approved by the USA Food & Drug Administration (FDA) grew from single digits annually in the early 2000s to 223 approvals in 2023.1 This trend is supported by government funding; the Federal USA AI Budget for FY21-FY25 is $14.7 billion, with the National Institutes of Health (NIH) requesting 34% of this for FY25. Furthermore, a new FDA AI Policy announced in May 2025 aims to scale the internal use of AI across all FDA centers.1
- AI-Driven Drug Discovery (Reduced R&D Timelines) (Meeker's Page 80) 1:
AI is demonstrating its potential to significantly accelerate research and development in critical fields like medicine. For instance, AI-driven drug discovery platforms from companies like Insilico Medicine and Cradle are reportedly reducing the time to reach pre-clinical candidate status by 30% to 80% compared to traditional approaches.1 Traditional methods can take 2.5 to 4 years, while AI-assisted methods have shortened this to as little as 10-18 months for specific therapeutic targets.
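The two sets of figures cited here are mutually consistent; converting everything to months makes that easy to see (a rough check, using the report's ranges):

```python
# Reported timelines to pre-clinical candidate status.
traditional = (2.5 * 12, 4 * 12)  # 30 to 48 months
ai_assisted = (10, 18)            # months
# Largest reduction: fastest AI timeline vs slowest traditional one;
# smallest reduction: slowest AI timeline vs fastest traditional one.
best = 1 - ai_assisted[0] / traditional[1]
worst = 1 - ai_assisted[1] / traditional[0]
print(f"{worst:.0%} to {best:.0%} reduction")  # 40% to 79%, near the cited 30-80%
```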
The rapid advancements and adoption of AI in highly regulated and high-stakes fields like healthcare and defense highlight a critical "pacing problem" for regulatory bodies. While AI offers immense potential benefits, the speed of its development often outstrips the capacity of existing regulatory frameworks to establish and enforce appropriate safety, efficacy, and ethical guidelines. The FDA's increasing approval rate for AI medical devices and its own initiative to adopt AI internally 1 are positive steps. However, ensuring that these powerful tools are safe, unbiased, and effective in the long term remains a complex and ongoing challenge, particularly given the "black box" nature of some AI models, which can make validation and oversight difficult. There is a pressing need for agile, adaptive regulatory frameworks that can keep pace with AI innovation in these sensitive domains. A failure to develop such frameworks could either stifle beneficial innovation or lead to the premature deployment of unsafe or inequitable AI systems. The FDA's new AI policy is an acknowledgment of this challenge, but the issue is global and continuous.
Subsection 2.6: AI Usage (Focus on Usage Metrics) (Meeker's Pages 81-88)
This subsection delves into specific metrics that illustrate the depth and breadth of AI tool usage, particularly focusing on ChatGPT as a proxy for broader AI engagement.
- ChatGPT Usage Across Age Groups (USA) (Meeker's Page 82) 1:
Surveys by Pew Research (July 2023) and Elon University (January 2025) show increasing ChatGPT usage across all adult age groups in the USA. In January 2025, 44% of all US adults reported having used ChatGPT. Adoption was highest among those aged 18-29 (55%), followed by ages 30-49 (44%), ages 50-64 (21%), and ages 65+ (13%).1 OpenAI CEO Sam Altman noted that older users tend to use ChatGPT as a Google replacement, while younger demographics use it more like a life advisor.1
- ChatGPT App Engagement (USA) (Meeker's Pages 83 & 84) 1:
Data from Sensor Tower indicates a significant rise in user engagement with the ChatGPT mobile app in the USA.
- Daily Time Spent: From July 2023 to April 2025, the average daily minutes spent by active US users on the ChatGPT app increased by +202%.1
- Sessions and Duration: Over the same period, average daily sessions per user grew by +106%, and the average session duration increased by +47%.1
These metrics suggest that users are not merely trying the app but are integrating it more deeply into their routines, spending more time and engaging more frequently.
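The three figures hang together arithmetically: daily time spent is roughly sessions per day times minutes per session, so the two component growth rates should compound to approximately the headline time-spent growth. A quick check:

```python
# Sensor Tower growth figures, Jul 2023 -> Apr 2025, as cited in the report.
sessions_growth = 1.06  # +106% average daily sessions per user
duration_growth = 0.47  # +47% average session duration
implied = (1 + sessions_growth) * (1 + duration_growth) - 1
print(f"+{implied:.0%}")  # +203%, in line with the reported +202% daily time spent
```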
- ChatGPT vs. Google Search Desktop User Retention (Meeker's Page 85) 1:
According to YipitData, global desktop user retention rates for consumer ChatGPT (80%) were notably higher than for Google Search (58%) for the period January 2023 to April 2025.1 This higher retention for ChatGPT suggests that, for certain tasks or user segments on desktop, it is providing compelling ongoing value. User retention is a strong indicator of a product's ability to meet user needs consistently. For ChatGPT to outperform a deeply entrenched utility like Google Search in this specific metric implies it offers a superior or more engaging experience for particular use cases, possibly due to its conversational interface, information synthesis capabilities, or utility in content generation tasks that extend beyond traditional search functionalities. This high retention, when combined with the increasing engagement metrics 1, signals that AI chatbots are transitioning from novelty items to integrated tools for a substantial user base, posing a genuine long-term challenge to traditional information retrieval paradigms.
- AI Chatbots @ Work Helpfulness (USA) (Meeker's Page 86) 1:
A Pew Research survey from October 2024 found that over 72% of employed US adults using AI chatbots found them helpful for increasing speed and/or improving the quality of their work.1 Specifically, 37% found them "Extremely/Very" helpful for speed, and 21% "Extremely/Very" helpful for quality. This indicates that AI chatbots are being recognized as valuable productivity tools in professional settings.
- ChatGPT Usage Survey – USA Students (18-24) (Meeker's Page 87) 1:
An OpenAI survey conducted between December 2024 and January 2025 revealed that US students aged 18-24 are heavily leveraging ChatGPT for educational purposes. Top uses included academic research (45%), exploring topics (40%), and starting papers/projects (37%).1
- AI Usage Expansion – Deep Research Capabilities (Meeker's Page 88) 1:
The report notes that AI tools are evolving beyond simple question-answering to perform complex, multi-step research tasks, thereby automating specialized knowledge work. Examples include the "Deep Research" capabilities being developed or announced for Google Gemini, OpenAI ChatGPT, and xAI Grok.1
This evolution towards "Deep Research" signifies a potential paradigm shift from information retrieval to knowledge generation. Traditional search engines excel at finding existing documents and web pages. In contrast, these advanced AI features aim to synthesize information from numerous sources, reason about the collated data, and generate novel summaries or reports. This effectively means AI could perform the initial, often laborious, stages of analysis and synthesis, moving up the value chain of knowledge work. Such a development could profoundly impact professions that rely heavily on information synthesis and analysis, such as researchers, analysts, consultants, and journalists. While it could democratize access to complex research capabilities, it also brings to the forefront concerns about the reliability of AI-generated synthesis, the potential for perpetuating biases embedded in training data, and the risk of de-skilling human analytical capabilities.
Subsection 2.7: AI Agent Evolution = Chat Responses -> Doing Work (Meeker's Pages 89-92)
This subsection explores the progression of AI from simple conversational interfaces to more sophisticated "agents" capable of autonomous task execution.
- Text Explaining AI Agent Evolution (Meeker's Page 90) 1:
AI is evolving from reactive chatbots, which respond to user prompts within narrow flows, to proactive AI agents. These agents are described as intelligent, long-running processes that can reason, act, and complete multi-step tasks on a user's behalf. Examples include booking meetings, submitting reports, or orchestrating workflows across platforms, often using natural language commands. This shift is likened to the web's evolution from static pages to dynamic applications (e.g., Gmail, Google Maps). Enterprises are reportedly leading the deployment of these agents.1
- Google Searches for 'AI Agent' (Meeker's Page 91) 1:
Global Google search interest for the term "AI Agent" surged by +1,088% between January 2024 and May 2025.1 A notable spike in searches occurred around March 11, 2025, when OpenAI introduced developer tools for AI agents, indicating heightened public and developer interest in this emerging technology. This trend can be verified using Google Trends data.93
- AI Incumbent Agent Launches (Meeker's Page 92) 1:
Major technology companies are actively developing and releasing AI agent capabilities. Examples include:
- Salesforce Agentforce (General Release October 2024): For automated customer support, case resolution, and lead qualification.
- Anthropic Claude 3.5 Computer Use (Research Preview October 2024): Enables direct computer screen control for tasks like data extraction from websites.
- OpenAI Operator (Research Preview January 2025): Similar capabilities for direct computer control and task execution.
- Amazon Nova Act (Research Preview March 2025): Focused on home automation, information collection, purchasing, and scheduling.
These product announcements, verifiable through company communications, demonstrate the industry's push towards more autonomous AI systems.
The development of AI agents can be seen as the emergence of an "application layer" for underlying foundation models. LLMs provide the core intelligence (reasoning, language understanding), and chatbots were the initial widely adopted interface to this intelligence. AI agents represent a more sophisticated interface that allows this core intelligence to interact with other software, APIs, and digital systems to perform complex tasks. This is analogous to how operating systems provided a platform for software applications in previous computing eras. The successful development and widespread adoption of robust AI agents could unlock a vast new range of AI-powered services and automations. The companies that manage to build and control these agentic platforms could effectively become the new "operating systems" of the AI era, mediating interactions between users, AI models, and the broader digital ecosystem. This represents a significant area for future value creation and intense competition.
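The chatbot-to-agent distinction this subsection describes can be sketched in a few lines. The sketch below is purely illustrative: the "model" is a hard-coded stub standing in for an LLM, and the tool names (`check`, `book`) and the calendar workflow are hypothetical, not any vendor's actual API.

```python
def chatbot(model, prompt):
    """One prompt in, one reply out: the reactive pattern."""
    return model(prompt)

def agent(model, tools, goal, max_steps=5):
    """Loop: reason about the last observation, act via a tool, repeat."""
    log, observation = [], goal
    for _ in range(max_steps):
        action, arg = model(observation)   # "reason": choose the next step
        if action == "done":
            break
        observation = tools[action](arg)   # "act": execute it
        log.append(f"{action}({arg}) -> {observation}")
    return log

# Toy workflow: book a meeting slot by checking a calendar, then writing to it.
calendar = {"mon 10:00": "free"}
tools = {
    "check": lambda slot: calendar.get(slot, "unknown"),
    "book": lambda slot: (calendar.update({slot: "booked"}), "booked")[1],
}

def stub_model(observation):
    # A real agent would call an LLM here; this stub encodes one fixed plan.
    if observation == "book mon 10:00":
        return ("check", "mon 10:00")
    if observation == "free":
        return ("book", "mon 10:00")
    return ("done", None)

log = agent(stub_model, tools, "book mon 10:00")
print(log)  # ['check(mon 10:00) -> free', 'book(mon 10:00) -> booked']
```

The chatbot stops after one exchange; the agent keeps a loop open and acts on external state until its goal is met, which is the structural difference behind the "long-running process" framing above.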
Subsection 2.8: Next Frontier For AI = Artificial General Intelligence (Meeker's Pages 93-94)
This subsection discusses Artificial General Intelligence (AGI) as the aspirational, albeit still uncertain, future of AI development.
- Text Explaining AGI (Meeker's Page 94) 1:
AGI is defined as AI systems capable of performing the full range of human intellectual tasks, including reasoning, planning, learning from limited data, and generalizing knowledge across diverse domains—unlike current AI, which excels within specific boundaries. Timelines for AGI are uncertain, but expert expectations have advanced. OpenAI CEO Sam Altman is quoted as saying in January 2025, "We are now confident we know how to build AGI as we have traditionally understood it".1 The report suggests AGI would redefine software, enabling systems to understand goals, generate plans, and self-correct. While the potential productivity upside is significant, the document also acknowledges the profound geopolitical, ethical, and economic consequences that warrant a measured view.1
Sam Altman's statement expressing confidence in knowing how to build AGI is significant. However, there is a crucial distinction between having a theoretical pathway or blueprint and having actually constructed, rigorously tested, fully understood all emergent properties of, and ensured the safety and controllability of such a system. The current trajectory towards AGI appears to involve scaling existing architectures, like Transformers, with exponentially more data and compute. This approach may indeed lead to systems that exhibit AGI-like capabilities across a wide array of tasks. Nevertheless, whether these massively scaled systems will possess genuine understanding, consciousness akin to human experience, or be inherently and reliably aligned with human values are profound questions that remain largely open and are subjects of intense debate within the scientific community. Altman's confidence may reflect engineering progress on a potential path to AGI, but the fundamental scientific and philosophical challenges concerning the nature of AGI, its capacity for unpredictable emergent behaviors, and the methods to ensure its safety and alignment are far from resolved. The "Next Frontier" is therefore not solely about achieving new capabilities but also about ensuring comprehension and control over what is created.
Subsection 2.9: AI User + Usage + CapEx Growth = Unprecedented (Focus on CapEx) (Meeker's Pages 95-129)
This extensive subsection details the unprecedented capital expenditure (CapEx) being poured into AI, driven by the immense computational demands of training and deploying AI models, and the concurrent buildout of specialized data center infrastructure.
- Evolution of Tech CapEx (Meeker's Page 96) 1:
Tech CapEx has evolved its focus over the decades: from an initial emphasis on storage and access, to distribution and scale, and now, decisively, to computation and intelligence, primarily for AI. Hyperscalers are leading this charge, investing heavily in specialized chips (GPUs, TPUs, AI accelerators) and advanced data center technologies like liquid cooling. Microsoft's Vice Chair and President Brad Smith is quoted, likening AI and data centers to "the next stage of industrialization," similar in impact to electricity.1 This sentiment is echoed in Microsoft's official reports on AI governance.72
- Big Six Tech CapEx vs. Global Data Generation (Meeker's Page 98) 1:
A chart illustrates that the CapEx of the "Big Six" US tech companies (Apple, NVIDIA, Microsoft, Alphabet, Amazon AWS, Meta) grew +21% annually between 2014 and 2024, while global data generation grew +28% annually.1 This correlation is logical: more data necessitates more infrastructure for storage, processing, and analysis. Data is sourced from Capital IQ and the Hinrich Foundation.
- Global Hyperscaler Cloud Revenue (Meeker's Page 99) 1:
The revenue for global hyperscale cloud providers (including Amazon AWS, Microsoft Intelligent Cloud, Google Cloud, Oracle Cloud, IBM Cloud, Alibaba Cloud, and Tencent Cloud) has grown at +37% annually from 2014 to 2024.1 This revenue growth reflects the surging demand for computing resources, which are foundational to AI development and deployment. Data is based on company disclosures and Morgan Stanley estimates.30
- AI Model Training Dataset Size (Tokens) (Meeker's Page 101) 1:
The size of datasets used to train AI models (measured in tokens) has been growing at +250% annually from June 2010 to May 2025, according to Epoch AI data.1 Larger and more diverse datasets are critical for training more capable AI models, and this directly fuels the need for more extensive compute and infrastructure. Epoch AI's research consistently supports this trend of rapidly expanding training data volumes.79
- Big Six CapEx vs. ChatGPT WAUs (Meeker's Page 102) 1:
The recent acceleration in AI adoption, proxied by ChatGPT's WAUs growth (+200% between 2023-2024), is correlated with an accelerated increase in CapEx by the Big Six tech companies (+63% in the same period).1 This suggests a direct link between AI application success and infrastructure investment. Sources are Capital IQ for CapEx 28 and OpenAI disclosures for WAUs.26
- Big Six CapEx as % of Revenue (Meeker's Page 103) 1:
Tech giants are allocating an increasing share of their revenues to capital investments. CapEx as a percentage of revenue for the Big Six rose from approximately 8% in 2014 to 15% in 2024.1 This trend, based on Capital IQ and Morgan Stanley data 30, underscores the strategic importance and capital intensity of AI.
- Amazon AWS CapEx as % of Revenue (Cloud vs. AI Patterns) (Meeker's Page 105) 1:
Amazon AWS provides a case study of these investment cycles. Its CapEx as a percentage of revenue was high during the initial cloud buildout (27% in 2013), then declined as that phase matured (4% in 2018). However, with the rise of AI/ML, AWS has initiated a new cycle of massive infrastructure investment, pushing this metric to 49% in 2024.1 This pattern is analyzed in Morgan Stanley research.30
- NVIDIA GPU Performance Improvements (Meeker's Page 107) 1:
Advancements in GPU technology are a critical enabler of the AI boom. For a theoretical $1 billion data center, NVIDIA GPU improvements between 2016 (Pascal) and 2024 (Blackwell) have led to a +225x increase in AI FLOPS performance, a +27,500x increase in annual inference token capacity (implying +30,000x higher theoretical token revenue), all while data center power use decreased by 43%, resulting in a +50,000x improvement in energy efficiency (tokens per MW-year).1 These figures are based on NVIDIA's own performance claims 22 and highlight how hardware innovation makes AI compute more powerful and efficient, which in turn fuels greater demand and broader application, a dynamic somewhat reminiscent of Moore's Law's historical impact on computing.95
- Global Stock of NVIDIA GPU Computing Power (Meeker's Page 108) 1:
The total installed AI computing power from NVIDIA GPUs globally has been growing at +130% per year from Q1 2019 to Q4 2024, according to Epoch AI data.1 This exponential growth in available compute is a direct consequence of the demand from AI workloads.
- NVIDIA's Share of Data Center CapEx (Meeker's Page 110) 1:
NVIDIA has become a primary beneficiary of this AI-driven CapEx surge. Its share of global data center CapEx (based on NVIDIA's data center revenue) rose from around 10% in 2022 to 25% in 2024.1 Data is sourced from Dell'Oro Research and NVIDIA.
- Big Six R&D Spend (Meeker's Page 112) 1:
Alongside CapEx, R&D spending by the Big Six tech companies is also on the rise, increasing from 9% of revenue in 2014 to 13% in 2024, with absolute R&D spending growing +20% annually.1 This reflects sustained investment in AI innovation itself, beyond just infrastructure. Data is from Capital IQ.
- Big Six Cash Reserves (Meeker's Pages 114 & 115) 1:
These technology giants possess enormous financial resources to fuel their AI ambitions. Their collective free cash flow grew +263% over ten years to $389 billion by 2024.1 Similarly, their combined cash on balance sheets (including equivalents and marketable securities) grew +103% to $443 billion by 2024.1 Data is from Capital IQ.
- Compute Spend to Train & Run Models (Text) (Meeker's Pages 116-117) 1:
The core driver for this massive CapEx is the immense and escalating need for computational power to both train and deploy AI models. Training costs for frontier LLMs are exceptionally high and continue to rise, with figures often exceeding $100 million per model and projected to reach into the billions.1 Dario Amodei, CEO of Anthropic, noted in mid-2024 that training $10 billion models could commence in 2025.1 While training is episodic, inference (running models in real-time) is continuous and is becoming the dominant portion of AI compute costs, as stated by Amazon CEO Andy Jassy and NVIDIA CEO Jensen Huang.1 Although per-unit inference costs (cost-per-token) are falling dramatically due to hardware and algorithmic efficiencies 36, the sheer explosion in AI usage means total inference compute demand is soaring. This creates a dynamic where lower unit costs fuel higher overall spending, putting pressure on cloud providers, chipmakers, and enterprise IT budgets alike. The economics of AI currently remain characterized by heavy capital intensity and a race to serve exponentially expanding usage.1
- Data Centers as Key Beneficiary (Meeker's Pages 118-124) 1:
The surge in AI-driven demand has propelled data center spending to historic highs. Global IT company data center CapEx reached $455 billion in 2024 and continues to accelerate.1 Hyperscalers and AI-first companies are investing billions in compute-ready capacity designed for real-time inference and model training. NVIDIA's Jensen Huang refers to these AI data centers as "AI factories".1
Construction timelines are also accelerating. The xAI Colossus facility in Memphis, Tennessee, a 750,000 sq. ft. data center, was built in just 122 days—half the time it typically takes to construct an average US home.1 This rapid deployment is becoming more common due to innovations like prefabricated modules and streamlined permitting.1
The annualized value of private data center construction in the USA saw +49% annual growth between 2022 and 2024, a significant acceleration from the +28% annual growth seen from 2014 to 2022.1 In terms of capacity in primary US markets, new data center capacity (pre-leased or under construction) grew +16x from 2020 to 2024, while newly-filled existing capacity grew +5x.1 The xAI Colossus facility, as a case in point, scaled from 0 to 200,000 GPUs in just seven months (April to November 2024), with plans for 1 million GPUs.1
However, this rapid build-out faces challenges. CapEx is driven by land, power, chips, and cooling, while OpEx is dominated by energy and maintenance. Power availability is increasingly a gating factor, as components like transformers and turbines are not commodities that can be manufactured overnight.1
- Data Centers = Electricity Guzzlers (Meeker's Pages 125-129) 1:
The escalating computational demands of AI are placing significant strain on energy resources. Citing a 2025 International Energy Agency (IEA) report, "Energy and AI," the document notes that AI-focused data centers are becoming major electricity consumers.1 In 2024, data centers accounted for about 1.5% of global electricity consumption, having grown at roughly 12% per year since 2017—more than four times faster than total electricity consumption growth.1 The US accounts for 45% of this global data center electricity consumption, followed by China (25%) and Europe (15%).1
While AI itself can unlock energy efficiencies, its current power demands are immense and growing.1 The IEA report highlights that this trajectory will necessitate substantial investment in energy generation and grid infrastructure. Global data center electricity consumption has tripled over nineteen years (2005-2024).1 Regionally, the USA leads in data center electricity consumption.1 Ultimately, for model builders to sustain this level of compute and energy consumption, they will need to achieve profitability.1
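The growth figures above can be cross-checked with simple compound-growth arithmetic. A minimal sketch in Python (the `cagr` helper is ours, not from the report):

```python
def cagr(multiple, years):
    """Compound annual growth rate implied by growing `multiple`-fold over `years`."""
    return multiple ** (1 / years) - 1

# Global data center electricity use tripled over 2005-2024 (19 years):
long_run = cagr(3, 19)
print(f"{long_run:.1%} per year")  # ~6.0% per year

# The ~12%/yr pace since 2017 is roughly double that long-run average,
# consistent with the AI-era acceleration the IEA describes.
```

This makes the acceleration concrete: the long-run average was about 6% per year, while the post-2017, AI-driven period runs at roughly twice that rate.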
Section 3: AI Model Compute Costs High / Rising + Inference Costs Per Token Falling = Performance Converging + Developer Usage Rising (Meeker's Pages 129-152)
This section examines the complex economics of AI models, characterized by soaring training costs for frontier models alongside rapidly declining costs for utilizing these models (inference), leading to converging performance among top models and a surge in developer adoption.
The development of the most powerful Large Language Models (LLMs) has become an extraordinarily capital-intensive endeavor. As the push for higher performance drives models towards ever-larger parameter counts and more intricate architectures, the costs associated with training these models are escalating, now frequently running into the hundreds of millions and even billions of dollars per model.1 This intense race to create the most capable general-purpose models may, paradoxically, be contributing to a commoditization effect and diminishing returns, as the output quality among leading models begins to converge, making sustained differentiation increasingly challenging.
Simultaneously, the cost associated with applying and using these trained models—a process known as inference—is decreasing at a remarkable pace. This decline is fueled by continuous improvements in hardware efficiency. For instance, NVIDIA's 2024 Blackwell GPU is cited as consuming 105,000 times less energy per token generated compared to its 2014 Kepler GPU predecessor.1 When coupled with breakthroughs in the algorithmic efficiency of the models themselves, the unit cost of inference is plummeting. This creates a new cost curve for AI utilization that is trending sharply downwards, in stark contrast to the upward trajectory of training costs.
As inference becomes cheaper and more efficient, the competitive landscape among LLM providers is shifting. The focus is no longer solely on model accuracy but increasingly on factors like latency (speed of response), uptime (reliability), and the cost-per-token. Consequently, tasks that might have cost dollars to perform with AI can now be done for pennies, and those costing pennies may soon be available for fractions of a cent.
These evolving economic dynamics have significant implications. For end-users and developers, this trend is highly beneficial, offering dramatically lower unit costs to access powerful AI capabilities. As these costs fall, the creation of new AI-driven products and services is flourishing, leading to rising user adoption and broader engagement. However, for the providers of these foundational models, this situation presents considerable challenges regarding monetization and long-term profitability. The high upfront investment in training is difficult to recoup when the cost of serving the models is rapidly declining and pricing power is eroding. This has put the prevailing business models for LLMs in a state of flux. Furthermore, new questions are emerging about the universal applicability of the "one-size-fits-all" LLM approach, as smaller, more cost-effective models specifically trained for custom use cases (such as OpenEvidence, mentioned as an example) begin to gain traction. The fundamental question of whether AI providers will successfully build broad horizontal platforms or will need to focus on specialized, high-value applications remains open. In the immediate term, the economics of general-purpose LLMs often resemble those of commodity businesses, characterized by high operational costs and significant capital burn.
Cost-per-token is a key metric in this context, representing the expense incurred for processing or generating a single token (which can be a word, sub-word, or character) during the operation of a language model. It is crucial for evaluating the computational efficiency and cost-effectiveness of deploying AI models, especially in natural language processing applications.
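To make the metric concrete, here is a minimal sketch of how a per-request bill falls out of cost-per-token pricing. The function and the $3/$15 rates are illustrative assumptions, not any provider's actual prices:

```python
def request_cost(input_tokens, output_tokens, price_in_per_mtok, price_out_per_mtok):
    """Dollar cost of a single model call, given prices per 1M tokens.

    Providers typically price input (prompt) and output (completion)
    tokens separately; the rates used below are placeholders.
    """
    return (input_tokens * price_in_per_mtok
            + output_tokens * price_out_per_mtok) / 1_000_000

# A hypothetical call: 1,500 prompt tokens and 500 completion tokens
# at $3 / $15 per million tokens (illustrative figures only).
cost = request_cost(1500, 500, 3.0, 15.0)
print(f"${cost:.4f}")  # $0.0120
```

At these hypothetical rates a single call costs about a penny, which is why per-token prices, rather than per-call prices, are the unit the industry watches.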
Subsection 3.1: AI Model Training Compute Costs = High / Rising (Meeker's Page 132)
The financial and computational resources required to train state-of-the-art AI models have seen a dramatic escalation.
- Chart: Estimated Training Cost of Frontier AI Models – 2016-2024, per Epoch AI & Stanford: This chart, using a logarithmic scale for training cost in USD, shows an approximate 2,400x growth in the estimated training cost of frontier AI models between 2016 and 2024. Costs have risen from relatively modest figures to well over $100 million for some recent models.
- Quote from Dario Amodei (Anthropic CEO, June 2024): Amodei stated, "Right now, [AI model training costs] $100 million. There are models in training today that are more like a billion… I think that the training of…$10 billion models, yeah, could start sometime in 2025." This quote underscores the rapidly increasing capital investment required to stay at the cutting edge of AI model development.
- Verification/Augmentation: Epoch AI and the Stanford HAI AI Index are key sources for tracking AI model training costs and compute.18 The trend of escalating costs for frontier models is well-documented, driven by the pursuit of larger models trained on more extensive datasets to achieve higher performance.
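As a back-of-the-envelope check on the chart's headline number, a ~2,400x rise over the eight years from 2016 to 2024 implies training costs multiplying by roughly 2.6x every year:

```python
# 2,400x growth over 2016-2024 (8 years) -> implied annual multiple.
annual_multiple = 2400 ** (1 / 8)
print(f"~{annual_multiple:.2f}x per year")  # ~2.65x per year
```

That steady ~2.6x annual multiple is also consistent with Amodei's $100 million → $1 billion → $10 billion progression, which describes roughly order-of-magnitude jumps every couple of years.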
Subsection 3.2: Inference Costs Per Token Falling (Meeker's Pages 134-138)
While training costs soar, the cost to actually use or run these AI models for specific tasks (inference) is on a steep downward trajectory. This is a critical dynamic shaping the accessibility and widespread application of AI.
The observation by former Microsoft CTO Nathan Myhrvold in 1997 that "Software is a gas…it expands to fill its container" appears particularly apt for AI today. As AI models become more capable, their usage increases across various applications. This increased usage—more queries, more models being run, more tokens processed per task—naturally drives up the demand for computational resources. The appetite for AI compute is not diminishing; rather, it is expanding to consume all available resources, much like software did during the desktop and cloud computing eras.
However, the infrastructure supporting AI is not static; it is advancing at an unprecedented rate. As highlighted on Meeker's page 136, NVIDIA's 2024 Blackwell GPU consumes 105,000 times less energy to generate a language model token compared to its 2014 Kepler predecessor. This staggering improvement in hardware efficiency is a testament to architectural and materials science innovations that are fundamentally reshaping the possibilities at the hardware level. These efficiency gains are crucial for mitigating the strain that escalating AI and internet usage places on electrical grids.
Yet, these efficiency improvements often lead to increased overall consumption, a phenomenon consistent with Jevons Paradox, first proposed in 1865. This paradox observes that technological advancements improving resource efficiency can actually lead to increased overall usage of those resources due to lower effective costs and new applications becoming viable. This is now driving a renewed focus on expanding energy production capacity and raising new questions about the grid's ability to manage the burgeoning demand from AI data centers. Once again, a perpetual pattern in technology is evident: costs fall, performance rises, and usage grows, all in tandem. This cycle is clearly repeating itself with AI.
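The Jevons dynamic can be sketched numerically. Purely for illustration, assume per-token prices fall 10x per year (Altman's oft-quoted rule of thumb) while token volume grows 50x per year; both rates and the dollar baseline below are assumptions, not report data:

```python
# Illustrative only (not report data): if unit prices fall 10x per year
# but token volume grows 50x per year, total spend still rises 5x.
def total_spend(price_per_mtok, tokens, years, price_drop=10, usage_growth=50):
    """Annual inference spend in dollars after `years` of compounding."""
    price = price_per_mtok / (price_drop ** years)
    volume = tokens * (usage_growth ** years)
    return price * volume / 1_000_000

# Hypothetical baseline: $20 per 1M tokens, 1 trillion tokens per year.
year0 = total_spend(20.0, 1_000_000_000_000, 0)
year1 = total_spend(20.0, 1_000_000_000_000, 1)
print(f"${year0:,.0f} -> ${year1:,.0f}")  # spend grows 5x despite 10x cheaper tokens
```

Whenever usage growth outpaces the unit-price decline, aggregate spend rises, which is exactly the paradox at work in AI data centers today.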
- AI Inference 'Currency' = Tokens (Meeker's Page 135): The report clarifies that "tokens" (words, sub-words, or characters) are the basic units for measuring AI inference. For context, 1 million tokens equate to roughly 750,000 words or about 5,000 typical ChatGPT responses (assuming 200 tokens per interaction). This unit is central to understanding inference pricing.
- AI Inference Costs – NVIDIA GPUs (Meeker's Page 136): A chart based on NVIDIA data shows a 105,000x decline in the energy (Joules) required to generate an LLM token using NVIDIA GPUs between 2014 (Kepler) and 2024 (Blackwell). This dramatic improvement in energy efficiency per token is a key driver of falling inference costs.
- AI Inference Costs – Serving Models (Meeker's Page 137): Data from Stanford HAI indicates that the price for customers per 1 million tokens for AI inference fell by 99.7% between November 2022 and December 2024. This rapid price drop makes AI capabilities accessible to a much broader range of users and developers. Epoch AI research also confirms this trend of rapidly falling LLM inference prices, with declines ranging from 9x to 900x per year for achieving specific performance milestones, and a median decline of 50x per year across various benchmarks and performance levels.36 OpenAI CEO Sam Altman has also noted that the cost to use a given level of AI falls about 10x every 12 months, citing the price drop from GPT-4 to GPT-4o as an example.37
- AI Cost Efficiency Gains vs. Prior Technologies (Meeker's Page 138): A comparative chart shows that the cost of AI inference (specifically, a 75-word ChatGPT response) has fallen much more rapidly relative to its launch year than the costs of electric power or computer memory did in their respective historical adoption phases. Data for electricity costs is based on Richard Hirsh's work 32, and computer memory storage costs are from John C. McCallum's research.34 This highlights the unique speed of AI's economic evolution.
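The figures on these pages can be loosely reconciled with a bit of arithmetic: a 99.7% price drop over the ~25 months from November 2022 to December 2024 corresponds to roughly a 16x decline per year, sitting plausibly between Altman's ~10x/year rule of thumb and Epoch AI's 50x/year median. A quick sketch (the helper is ours):

```python
def annual_decline_factor(total_drop_fraction, months):
    """How many times cheaper per year, given a total fractional price drop."""
    remaining = 1 - total_drop_fraction          # e.g. 0.003 after a 99.7% drop
    return (1 / remaining) ** (12 / months)      # annualize the overall multiple

# 99.7% drop over Nov 2022 - Dec 2024 (~25 months):
print(f"~{annual_decline_factor(0.997, 25):.0f}x cheaper per year")
```

The spread between the estimates (10x, ~16x, 50x) mostly reflects which tasks and performance levels are being measured, not disagreement about the direction or speed of the trend.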
Subsection 3.3: Tech's Perpetual A-Ha = Declining Costs + Improving Performance → Rising Adoption… (Meeker's Pages 139-140)
This subsection connects the falling costs and improving performance of AI to the well-established technological pattern where such dynamics lead to increased adoption.
- USA Internet Users vs. Relative IT Cost (Meeker's Page 139): A chart using data from FRED (Federal Reserve Economic Data) and the ITU shows that as the relative cost of Information Technology (hardware and services, indexed to 1/1/89 = 100) declined significantly from 1989 to 2023, the number of US internet users grew substantially. This illustrates the historical relationship between falling IT costs and rising internet adoption.
- AI Model Training Compute vs. Relative IT Cost (Meeker's Page 140): This chart overlays the growth in AI model training compute (FLOPs, per Epoch AI, showing +360%/year growth) against the declining US Consumer Price Index for IT. While IT costs have generally fallen, the compute allocated to training AI models has skyrocketed. This suggests that even as the unit cost of compute falls and algorithmic efficiency improves, the total investment poured into pushing AI capabilities (i.e., training compute) is increasing massively, driven by the perceived value and potential of more powerful models. The falling cost of underlying IT components enables more ambitious AI projects, which in turn consume more total compute.
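For intuition, "+360%/year" means training compute multiplies by about 4.6x annually, which compounds dramatically (a quick sketch):

```python
# "+360% per year" growth means compute multiplies by 4.6x annually.
annual_multiple = 1 + 3.60
for years in (1, 3, 5):
    print(f"{years} yr: {annual_multiple ** years:,.1f}x total growth")
# e.g. after 5 years: ~2,059.6x total growth
```

Three to four years of this pace alone accounts for the thousand-fold scale-ups separating successive model generations.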
Subsection 3.4: AI Model Performance = Converging Rapidly (Meeker's Page 142)
Despite the varying approaches and investments, the performance of top AI models is showing signs of convergence.
- Chart: Performance of Top AI Models on LMSYS Chatbot Arena – 1/24-2/25, per Stanford HAI: This chart displays the LMSYS Arena Score (a measure based on human preference in head-to-head chatbot comparisons) for the highest-scoring model in any given month. The trend line from January 2024 to February 2025 shows a general upward trajectory in overall model quality, but also a narrowing of the performance gap between different leading models as multiple models achieve very high scores. This suggests that while the frontier of AI capability continues to advance, it is becoming harder for any single model to maintain a decisive and lasting performance lead.
- Verification/Augmentation: The Stanford HAI AI Index Report 18 is a primary source for tracking benchmark performance, including LMSYS Chatbot Arena results. The convergence implies that access to cutting-edge AI performance is becoming less dependent on a single provider, which has implications for competition and commoditization.
Subsection 3.5: Developer Usage Rising (Meeker's Pages 144-151)
The combination of declining inference costs and converging model performance is fueling an explosion in AI adoption and experimentation among developers.
The significant drop in inference costs—a 99.7% decline between 2022 and 2024 for running language models—has democratized access to powerful AI tools. What was once prohibitively expensive for all but the largest corporations is now within reach of individual developers, small startups, academic researchers, and even employees in small businesses. This cost collapse has made experimentation cheaper, iteration cycles faster, and the productization of AI ideas feasible for a much wider audience.
Simultaneously, the narrowing performance gap between top-tier frontier models and smaller, more efficient alternatives is changing how developers select models. For many common use cases—such as text summarization, content classification, data extraction, or routing queries—the real-world performance difference between the absolute best model and a highly capable but less expensive alternative can be negligible. Developers are increasingly discovering that they no longer need to pay a premium for a flagship model to achieve reliable outputs. Instead, they can opt for cheaper models run locally or accessed via lower-cost API providers, and still obtain functionally similar results, especially when these models are fine-tuned on task-specific data. This trend is weakening the pricing leverage of incumbent model providers and leveling the playing field for AI development.
At the platform level, a proliferation of foundation models has created unprecedented flexibility. Developers can now choose from dozens of models—OpenAI's GPT series, Meta's Llama family, Mistral's Mixtral, Anthropic's Claude, Google's Gemini, Microsoft's Phi, and many others—each with varying strengths, such as reasoning, speed, or code generation. This abundance of choice is fostering a move away from vendor lock-in. Instead of consolidating under a single provider who could potentially gate access or dictate prices, developers are distributing their efforts across multiple ecosystems. This plurality of options empowers builders to select the best-fit model based on their specific technical requirements or financial constraints.
What is emerging is a powerful flywheel of developer-led infrastructure growth. As more developers build AI-native applications, they also contribute to the ecosystem by creating tools, wrappers, and libraries that make it easier for others to follow. New front-end frameworks, embedding pipelines, model routers, vector databases, and serving layers are multiplying at an accelerating rate. Each wave of developer activity reduces the friction for the next, compressing the time from idea to prototype and from prototype to product. In this process, the barrier to building with AI is collapsing—not just in terms of cost, but also in terms of complexity. This is not merely a platform shift; it's an explosion of creativity and accessibility. As observed throughout technology history, exemplified by former Microsoft President Steve Ballmer's famous chant, "Developers! Developers! Developers!," the platform that consistently attracts and sustains developer momentum—and can effectively scale and steadily improve—often emerges as the winner.
- AI Tool Adoption by Developers (Meeker's Page 147): A Stack Overflow Developer Survey shows that the share of professional developers using AI in their development processes increased from 44% in 2023 to 63% in 2024. For those learning to code, the adoption rate jumped from 25% to 50% in the same period. This indicates a rapid integration of AI tools into software development workflows.
- AI Developer Repositories – GitHub (Meeker's Page 148): The number of AI developer repositories (with 500+ stars) on GitHub increased by approximately 175% over sixteen months, from November 2022 to March 2024, according to data from Chip Huyen. This growth spans infrastructure tools, model development frameworks, and AI-powered applications, signifying a vibrant open-source and collaborative AI development community.
- AI Developer Ecosystem – Google (Meeker's Page 149): Google reported a 50x year-over-year increase in monthly tokens processed across its products and APIs, from 9.7 trillion in May 2024 to over 480 trillion in May 2025. The number of developers building with Gemini also grew 5x to 7 million in the same period. This demonstrates massive scaling in the usage of Google's AI platforms by developers.
- AI Developer Ecosystem – Microsoft Azure AI Foundry (Meeker's Page 150): Microsoft's Azure AI Foundry saw a 5x year-over-year increase in quarterly tokens processed, reaching 100 trillion tokens in Q1 2025 (with 50 trillion in the last month of that quarter alone). Over 70,000 enterprises and digital natives are using the platform to design, customize, and manage AI apps and agents.
- AI Developer Use Cases (Meeker's Page 151): A graphic from IBM (2024) illustrates the broad and varied use cases for AI in software development. These include code generation, bug detection and fixing, testing automation, project/workflow management, documentation, refactoring and optimization, security enhancement, DevOps and CI/CD pipelines, user experience design, and architecture design. This breadth shows AI's pervasive impact across the entire software development lifecycle.
Subsection 3.6: …(Likely) Long Way to Profitability (Meeker's Page 152)
Despite the rapid growth in AI adoption, usage, and developer activity, and the falling costs of inference, the path to sustained profitability for many AI model providers remains challenging. The high costs of training frontier models, intense competition, and the commoditizing effects of performance convergence and open-source alternatives create a complex financial landscape. While AI is undoubtedly transforming industries and creating immense value, capturing that value in the form of consistent profits for the foundational model builders themselves is an ongoing endeavor that will likely take considerable time and further business model innovation.
Section 4: AI Usage + Cost + Loss Growth = Unprecedented (Meeker's Pages 153-247)
This section delves into the financial realities of the current AI boom, highlighting that alongside unprecedented growth in usage, there are equally unprecedented costs and, for many, significant financial losses. The path to monetization and profitability is complex and fraught with challenges, even as the technology's adoption accelerates.
The report prefaces this section by acknowledging that statements like "it's different this time," "we'll make it up on volume," and "we'll figure out how to monetize our users in the future" are often danger signals in business. However, it also concedes that in technology investing, these very notions have occasionally led to immense successes, citing examples like Amazon, Alphabet (Google), Meta (Facebook), Tesla, Tencent, Alibaba, and Palantir. The current AI cycle might indeed be different, and leading companies could eventually achieve profitability through sheer volume and future monetization strategies.
However, "different this time" also means an unprecedented level of competition. Never before have so many founder-driven or founder-assisted companies (like NVIDIA, Microsoft, Amazon, Alphabet, Meta, and Tesla, many with market capitalizations exceeding $1 trillion and gross margins often above 50%) attacked the same massive opportunity simultaneously. This is occurring in a relatively transparent global environment, further intensified by high-stakes competition between global powers, notably China and the United States.
The concept of technological tipping points, often described by Ernest Hemingway's phrase "gradually, then suddenly," is highly relevant. For personal computers, key tipping points were Apple's Macintosh (1984) and Microsoft's Windows 3.0 (1990). For the Internet, it was Netscape's IPO (1995). For the Mobile Internet, Apple's iPhone App Store launch (2008) was pivotal. For Cloud Computing, the launch of foundational AWS products (2006-2009) marked a turning point. In AI, the launch of NVIDIA's A100 GPU chip (2020) and OpenAI's public version of ChatGPT (2022) are identified as such critical junctures. The report suggests that global competition in AI significantly intensified with China's DeepSeek launch (January 2025) and Jack Ma's attendance at a symposium with Chinese President Xi Jinping (February 2025), signaling strong national backing.
The capital fueling AI's growth (and its associated losses) originates from large corporations with substantial free cash flow and robust balance sheets, as well as from wealthy and ambitious capital providers worldwide. This dynamic combination of intense competition, abundant capital, and entrepreneurial drive will undoubtedly advance AI rapidly. The central riddle, however, lies in determining which business models will ultimately prove sustainable and profitable in the long run.
Subsection 4.1: Technology Disruption Pattern Recognition (Meeker's Page 155)
The report draws parallels between the current AI boom and historical patterns of technology disruption, referencing Alasdair Nairn's book "Engines That Move Markets." These historical cycles often exhibit a recurring rhythm:
- Early Euphoria: Initial excitement and optimistic projections about the new technology's potential.
- Break-Neck Capital Formation: Rapid and often excessive investment flowing into the sector.
- Bruising Competition: Intense rivalry among numerous players, leading to price wars, high cash burn, and market consolidation.
- Clear-Cut Winners and Losers: Eventually, a few dominant players emerge, while many others fail or are acquired.
Nairn's observations, as highlighted in the report, offer cautionary insights relevant to today's AI landscape:
- Technological advances requiring huge capital expenditure often risk disappointing returns in the early years, even if ultimately successful.
- Heavy capital expenditure and long return periods make such ventures high-risk, especially without protection against competition.
- Winners are not always those with the best technology, but those who best anticipate industry and market development.
- First-mover advantage can be quickly lost without barriers to entry.
- Identifying losers in a technological shift is often simpler than picking the winners.
These historical patterns suggest that while AI holds immense promise, the path to market leadership and sustained profitability will likely involve significant challenges, capital destruction for some, and eventual market consolidation.
Subsection 4.2: AI-Related Monetization = Very Robust Ramps (Meeker's Pages 156-171)
Despite the costs and competition, various segments of the AI ecosystem are experiencing robust revenue growth, indicating strong early monetization.
The evolution of AI hardware strategy is seeing a notable shift. For years, NVIDIA, with its powerful GPUs, has been central to the AI hardware stack, becoming the default for training and inference due to its parallel computation capabilities and scale. This reliance, coupled with a sudden surge in demand, created supply constraints despite NVIDIA's impressive scale-up efforts. Hyperscalers and cloud providers are now actively working to diversify their supply chains and manage long lead times.
This dynamic is accelerating the rise of custom silicon, particularly Application-Specific Integrated Circuits (ASICs). Unlike general-purpose GPUs, ASICs are designed for specific computational tasks, offering maximum efficiency for AI workloads like matrix multiplication and inference acceleration. Google's Tensor Processing Unit (TPU) and Amazon's Trainium chips are now core to their respective AI stacks. Amazon, for example, claims its Trainium2 chips offer 30-40% better price-performance than standard GPU instances for certain workloads, enabling more affordable inference at scale. These custom chip initiatives are not peripheral projects but foundational strategic bets on performance, cost economics, and architectural control.
Custom chips also reflect a broader strategy to manage the substantial economics of AI infrastructure. As Amazon CEO Andy Jassy noted in early 2025, "AI does not have to be as expensive as it is today, and it won't be in the future". Custom silicon is a key lever in controlling these escalating expenses.
Concurrently, a new ecosystem of specialized AI infrastructure providers is emerging. CoreWeave has rapidly scaled as a cloud GPU provider, repurposing hardware supply chains from gaming and cryptocurrency mining to serve enterprise AI customers. Oracle has repositioned itself as a GPU-rich cloud platform with AI-specific offerings. Astera Labs, a less visible but critical player, develops high-speed interconnects essential for minimizing latency in data movement between GPUs and memory systems—an increasingly important performance bottleneck. These companies are not building foundation models themselves but are providing the essential infrastructure upon which these models depend. As compute demand compounds, these specialists are becoming indispensable in a market where speed, availability, and efficiency are key differentiators.
Works cited
- Trends – Artificial Intelligence (AI), May 30, 2025, Mary Meeker / Jay Simons / Daegwon Chae / Alexander Krey, BOND, Trends_Artificial_Intelligence.pdf
- Mary Meeker – Wikipedia, accessed May 31, 2025, https://en.wikipedia.org/wiki/Mary_Meeker
- The AI 'space race' could reshape the world order, Mary Meeker warns – PitchBook, accessed May 31, 2025, https://pitchbook.com/news/articles/mary-meeker-report-ai-race-2025
- Our History — Morgan Stanley, accessed May 31, 2025, https://ourhistory.morganstanley.com/stories/surviving-the-crisis/story-1995-internet-report
- Mary Meeker's Internet Trends Report Reverts To Historical Patterns Of Slide Count Growth, accessed May 31, 2025, https://news.crunchbase.com/venture/mary-meekers-internet-trends-report-reverts-to-historical-patterns-of-slide-count-growth/
- Jay Simons | Profile – Stage2 Capital, accessed May 31, 2025, https://www.stage2.capital/team/jay-simons
- Jay Simons – The Network, accessed May 31, 2025, https://www.thenetwork.com/profile/jay-simons-448f9dde
- Daegwon Chae – Partner at BOND – The Org, accessed May 31, 2025, https://theorg.com/org/bondcap/org-chart/daegwon-chae
- Daegwon Chae Profile: Contact Information & Network – PitchBook, accessed May 31, 2025, https://pitchbook.com/profiles/person/159606-73P
- Alexander Krey | BOND – Bondcap, accessed May 31, 2025, https://www.bondcap.com/team/alexander-krey/
- Alexander Krey – Facebook, LinkedIn – Clay.earth, accessed May 31, 2025, https://clay.earth/profile/alexander-krey
- 30 Best Vint Cerf Quotes With Image – Bookey, accessed May 31, 2025, https://www.bookey.app/quote-author/vint-cerf
- Vint Cerf Quotes – BrainyQuote, accessed May 31, 2025, https://www.brainyquote.com/authors/vint-cerf-quotes
- Digital 2025: Global Overview Report – DataReportal, accessed May 31, 2025, https://datareportal.com/reports/digital-2025-global-overview-report
- Facts and Figures 2024 – Internet use – ITU, accessed May 31, 2025, https://www.itu.int/itu-d/reports/statistics/2024/11/10/ff24-internet-use/
- AI Investment Trends 2025: VC Funding, IPOs, and Regulatory Chall, accessed May 31, 2025, https://natlawreview.com/article/state-funding-market-ai-companies-2024-2025-outlook
- 2025 Startup Insights – AI number one for US venture investment – Global Shares, accessed May 31, 2025, https://www.globalshares.com/insights/2025-startup-insights-ai-number-one-for-us-venture-investment/
- The 2025 AI Index Report | Stanford HAI, accessed May 31, 2025, https://hai.stanford.edu/ai-index/2025-ai-index-report
- A Costly Illusion of Control: No Winners, Many Losers in U.S.-China AI Race, accessed May 31, 2025, https://www.thecairoreview.com/essays/a-costly-illusion-of-control/
- AI Rivalries: Redefining Global Power Dynamics – TRENDS Research & Advisory, accessed May 31, 2025, https://trendsresearch.org/insight/ai-rivalries-redefining-global-power-dynamics/
- CUDA – Wikipedia, accessed May 31, 2025, https://en.wikipedia.org/wiki/CUDA
- 2025 NVIDIA Corporation Annual Review, accessed May 31, 2025, https://images.nvidia.com/pdf/Annual-NVIDIA-CEO-Letter-2025.pdf?linkId=100000365055773
- 2025 NVIDIA Corporation Annual Review, accessed May 31, 2025, https://s201.q4cdn.com/141608511/files/doc_financials/2025/annual/NVIDIA-2025-Annual-Report.pdf
- Global Internet usage – Wikipedia, accessed May 31, 2025, https://en.wikipedia.org/wiki/Global_Internet_usage
- 2024 AI Apps Market Insights – Sensor Tower, accessed May 31, 2025, https://sensortower.com/blog/state-of-ai-apps-2024
- Number of ChatGPT Users (March 2025) – Exploding Topics, accessed May 31, 2025, https://explodingtopics.com/blog/chatgpt-users
- ChatGPT Statistics 2025 – DAU & MAU Data (Worldwide) – Demand Sage, accessed May 31, 2025, https://www.demandsage.com/chatgpt-statistics/
- Big tech earnings preview: Microsoft, Meta, Amazon & Apple | S&P Global, accessed May 31, 2025, https://www.spglobal.com/market-intelligence/en/news-insights/research/big-tech-earnings-preview-microsoft-meta-amazon-n-apple
- U.S. Tech Earnings: AI Spending Keeps Surging Despite DeepSeek's Efficiency Breakthrough | S&P Global Ratings, accessed May 31, 2025, https://www.spglobal.com/ratings/en/research/articles/250212-u-s-tech-earnings-ai-spending-keeps-surging-despite-deepseek-s-efficiency-breakthrough-13414142
- This Is What AI Commitment Looks Like: $392 Billion and Rising | WisdomTree, accessed May 31, 2025, https://www.wisdomtree.com/investments/blog/2025/05/21/this-is-what-ai-commitment-looks-like-392-billion-and-rising
- Funding the Next Phase of AI Development | Morgan Stanley, accessed May 31, 2025, https://www.morganstanley.com/insights/podcasts/thoughts-on-the-market/ai-capex-funding-ai-development-lindsay-tyler-michelle-wang
- Utility monopolies still reign in the South – Southern Environmental Law Center, accessed May 31, 2025, https://www.selc.org/news/utility-monopolies-still-reign-in-the-south/
- The Rise and Fall of the American Electrical Grid, accessed May 31, 2025, https://americanaffairsjournal.org/2022/08/the-rise-and-fall-of-the-american-electrical-grid/
- The Year That the Entire Computer Industry Ran Out of Memory – VICE, accessed May 31, 2025, https://www.vice.com/en/article/the-year-that-the-entire-computer-industry-ran-out-of-memory/
- The price of computer storage has fallen exponentially since the 1950s – Our World in Data, accessed May 31, 2025, https://ourworldindata.org/data-insights/the-price-of-computer-storage-has-fallen-exponentially-since-the-1950s
- LLM inference prices have fallen rapidly but unequally across tasks – Epoch AI, accessed May 31, 2025, https://epoch.ai/data-insights/llm-inference-price-trends
- AI Usage Cost Falls 10x Every 12 Months, Says OpenAI CEO Sam Altman, accessed May 31, 2025, https://www.outlookbusiness.com/start-up/news/ai-usage-cost-falls-10x-every-12-months-says-openai-ceo-sam-altman
- OpenAI – Wikipedia, accessed May 31, 2025, https://en.wikipedia.org/wiki/OpenAI
- OpenAI Is A Systemic Risk To The Tech Industry, accessed May 31, 2025, https://www.wheresyoured.at/openai-is-a-systemic-risk-to-the-tech-industry-2/
- Comparing U.S. vs. Chinese AI Model Performance – Voronoi, accessed May 31, 2025, https://www.voronoiapp.com/technology/Comparing-US-vs-Chinese-AI-Model-Performance–4819
- Trends – Artificial Intelligence | Policy Commons, accessed May 31, 2025, https://policycommons.net/artifacts/21019554/trends_artificial_intelligence-1/21919986/
- Record of 4 Million Robots in Factories Worldwide – International Federation of Robotics, accessed May 31, 2025, https://ifr.org/ifr-press-releases/news/record-of-4-million-robots-working-in-factories-worldwide
- IFR World Robotics report says 4M robots are operating in factories globally, accessed May 31, 2025, https://www.therobotreport.com/ifr-4-million-robots-operating-globally-world-robotics-report/
- Waymo's Quickly Taking More Market Share Than I Expected – CleanTechnica, accessed May 31, 2025, https://cleantechnica.com/2025/04/13/waymos-quickly-taking-more-market-share-than-i-expected/
- Waymo Robotaxis Make Up 20% of Uber Rides in Austin, Data Shows – Reddit, accessed May 31, 2025, https://www.reddit.com/r/waymo/comments/1jwrevq/waymo_robotaxis_make_up_20_of_uber_rides_in/
- Robotaxis in 2025-2030: Global Expansion and Adoption Trends (Latest Numbers), accessed May 31, 2025, https://patentpc.com/blog/robotaxis-in-2025-2030-global-expansion-and-adoption-trends-latest-numbers
- Robotaxi Market Size & Share | Industry Growth [2032] – SkyQuest Technology, accessed May 31, 2025, https://www.skyquestt.com/report/robotaxi-market
- ChatGPT Usage Statistics: May 2025 – First Page Sage, accessed May 31, 2025, https://firstpagesage.com/seo-blog/chatgpt-usage-statistics/
- Gaming Industry Report 2025: Market Size & Trends – Udonis Blog, accessed May 31, 2025, https://www.blog.udonis.co/mobile-marketing/mobile-games/gaming-industry
- Diffusion of AI Jobs Across Economic Sectors – UMD-LinkUp AI Maps, accessed May 31, 2025, https://aimaps.ai/download/whitepaper-sheets/UMD-LinkUp-AI-Maps-AI-JOBS-CREATION_SECTOR-LEVEL-ANALYSIS-(Q1-2018-through-Q4-2024)-January-27-2025.pdf
- AI Job Growth includes ChatGPT-Fueled Surge Amid Overall Employment Slowdowns – PR Newswire, accessed May 31, 2025, https://www.prnewswire.com/news-releases/ai-job-growth-includes-chatgpt-fueled-surge-amid-overall-employment-slowdowns-302366586.html
- UMD-LinkUp AI Maps_FROM WEST TO THE REST (White Paper #1) – FINAL, accessed May 31, 2025, https://cdn2.assets-servd.host/link-up/production/images/Research/UMD-LinkUp-AI-Maps_FROM-WEST-TO-THE-REST-White-Paper-1-FINAL.pdf?dm=1705503751
- Mapping the Spread of Artificial Intelligence Jobs | LinkUp, accessed May 31, 2025, https://www.linkup.com/insights/blog/mapping-the-spread-of-ai-jobs-part1
- AI Skills Demand Vs Degree Requirements: 2025 Statistics and Data – Software Oasis, accessed May 31, 2025, https://softwareoasis.com/ai-skills-demand/
- 50 NEW Artificial Intelligence Statistics (May 2025) – Exploding Topics, accessed May 31, 2025, https://explodingtopics.com/blog/ai-statistics
- about.google, accessed May 31, 2025, https://about.google/company-info/our-story/#:~:text=The%20name%20was%20a%20play,it%20universally%20accessible%20and%20useful.%E2%80%9D
- Our Approach – How Google Search Works, accessed May 31, 2025, https://www.google.com/intl/en_us/search/howsearchworks/our-approach
- FAQs – Alibaba Group, accessed May 31, 2025, https://www.alibabagroup.com/faqs-corporate-information
- Mission Statement, Vision, & Core Values of Alibaba Group Holding Limited (BABA), accessed May 31, 2025, https://dcfmodeling.com/blogs/vision/baba-mission-vision
- Facebook Mission and Vision Statement Analysis | EdrawMind, accessed May 31, 2025, https://www.edrawmind.com/article/facebook-mission-and-vision-statement-analysis.html
- A brief history of Facebook – ResearchGate, accessed May 31, 2025, https://www.researchgate.net/publication/283986172_A_brief_history_of_Facebook
- Meta's CTO on AI Development: A New Space Race – News and Statistics – IndexBox, accessed May 31, 2025, https://www.indexbox.io/blog/metas-cto-compares-ai-development-to-the-space-race/
- What Comes After Mobile? Meta's Andrew Bosworth on AI and Consumer Tech, accessed May 31, 2025, https://a16z.com/after-mobile-consumer-tech-andrew-bosworth/
- AN EXCLUSIVE WITH T. ROWE PRICE'S BRIAN ROGERS ON THE LESSONS LEARNED FOR SUCCESSFUL INVESTING – WealthTrack, accessed May 31, 2025, https://wealthtrack.com/exclusive-t-rowe-prices-brian-rogers-lessons-learned-successful-investing/
- T. Rowe Price Veteran Shares Lessons in Investment Management – Fordham Now, accessed May 31, 2025, https://now.fordham.edu/business-and-economics/former-t-rowe-price-chairman-shares-lessons-investment-management/
- Superintelligence Strategy, accessed May 31, 2025, https://www.nationalsecurity.ai/
- Mutual Assured AI Malfunction: A New Cold War Strategy for AI Superpowers – Maginative, accessed May 31, 2025, https://www.maginative.com/article/mutual-assured-ai-malfunction-a-new-cold-war-strategy-for-ai-superpowers/
- Historical Economic Growth and Income Dataset – Kaggle, accessed May 31, 2025, https://www.kaggle.com/datasets/mozturkmen/historical-economic-growth-and-income-dataset
- Maddison Project – Wikipedia, accessed May 31, 2025, https://en.wikipedia.org/wiki/Maddison_Project
- Our World in Data, accessed May 31, 2025, https://ourworldindata.org/
- Economic Growth – Our World in Data, accessed May 31, 2025, https://ourworldindata.org/economic-growth
- Governing AI – Microsoft, accessed May 31, 2025, https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/final/en-us/microsoft-brand/documents/RW1hAdp.pdf
- Governing AI: A Blueprint for the Future – Microsoft, accessed May 31, 2025, https://www.microsoft.com/cms/api/am/binary/RW14Gtw
- Consumption of fixed capital | Australian Bureau of Statistics, accessed May 31, 2025, https://www.abs.gov.au/book/export/26750/print
- General-purpose technology – Wikipedia, accessed May 31, 2025, https://en.wikipedia.org/wiki/General-purpose_technology
- Understanding General Purpose Technology(GPT) and Its Impact – EMB Global, accessed May 31, 2025, https://blog.emb.global/general-purpose-technology-gpt/
- Wealth Management Perspectives – Morgan Stanley, accessed May 31, 2025, https://advisor.morganstanley.com/john.howard/documents/field/j/jo/john-howard/Artificial_Intelligence_deep_dive_2025.pdf
- Thematics: Uncovering Alpha in AI's Rate of Change – Morgan Stanley, accessed May 31, 2025, https://www.morganstanley.com/content/dam/msdotcom/what-we-do/wealth-management-images/uit/AI-Enablers-Adopters-research-report.pdf
- Data on Notable AI Models – Epoch AI, accessed May 31, 2025, https://epoch.ai/data/notable-ai-models
- Data on Large-Scale AI Models – Epoch AI, accessed May 31, 2025, https://epoch.ai/data/large-scale-ai-models
- How Many AI Models Will Exceed Compute Thresholds? – Epoch AI, accessed May 31, 2025, https://epoch.ai/blog/model-counts-compute-thresholds
- At least 20 AI models have been trained at the scale of GPT-4, accessed May 31, 2025, https://epoch.ai/data-insights/models-over-1e25-flop
- Machine Learning Trends – Epoch AI, accessed May 31, 2025, https://epoch.ai/trends
- Algorithmic progress likely spurs more spending on compute, not less | Epoch AI, accessed May 31, 2025, https://epoch.ai/gradient-updates/algorithmic-progress-likely-spurs-more-spending-on-compute-not-less
- AI Supercomputers May Run Into Power Constraints by 2030 – PYMNTS.com, accessed May 31, 2025, https://www.pymnts.com/artificial-intelligence-2/2025/ai-supercomputers-may-run-into-power-constraints-by-2030/
- The computational performance of leading AI supercomputers has doubled every nine months | Epoch AI, accessed May 31, 2025, https://epoch.ai/data-insights/ai-supercomputers-performance-trend
- Tracking Large-Scale AI Models | 81 models across 18 countries – Epoch AI, accessed May 31, 2025, https://epoch.ai/blog/tracking-large-scale-ai-models
- ChatGPT Paid Users Surge to 20 Million, Driving 30% Annual Revenue Growth – AIbase, accessed May 31, 2025, https://www.aibase.com/news/16802
- ChatGPT's Skyrocketing Subscriber Base Drives Major Revenue Surge | Brand Vision, accessed May 31, 2025, https://www.brandvm.com/post/chatgpts-skyrocketing-subscriber-base
- From SEO to GEO: How AI Tools Like ChatGPT Are Disrupting Google Search and Redefining Digital Visibility – Outlook Business, accessed May 31, 2025, https://www.outlookbusiness.com/artificial-intelligence/from-seo-to-geo-how-ai-tools-like-chatgpt-are-disrupting-google-search-and-redefining-digital-visibility
- Google Search is 373x bigger than ChatGPT search – Search Engine Land, accessed May 31, 2025, https://searchengineland.com/google-search-bigger-chatgpt-search-453142
- Statistics – ITU, accessed May 31, 2025, https://www.itu.int/en/itu-d/statistics/pages/stat/default.aspx
- Google Trends, accessed May 31, 2025, https://trends.google.com/trends/
- Trending Now – Google Trends, accessed May 31, 2025, https://trends.google.com/trending
- newsroom.intel.com, accessed May 31, 2025, https://newsroom.intel.com/tech101/understanding-moores-law#:~:text=Moore's%20Law%20is%20the%20prediction,industry%20since%20its%201965%20publication.
- What Is Moore's Law and Is It Still True? – Investopedia, accessed May 31, 2025, https://www.investopedia.com/terms/m/mooreslaw.asp