
The Volcano Under the AI Boom: Why a $1.5 Trillion Lithium Discovery Is Really an AI Adoption Story

Eric Fraser · February 18, 2026 · 8 min read

There is a dormant supervolcano straddling the Nevada-Oregon border called the McDermitt Caldera. Geologists recently confirmed that the lithium deposits beneath it could contain between 20 and 40 million metric tons, potentially the largest concentration ever identified on Earth. The headline number being thrown around is $1.5 trillion.

That is a big number. But the key question is not whether the lithium is there. The key question is whether anyone can actually get it out, and on what timeline. And the answer right now is: not fast. No commercial-scale claystone lithium mine currently exists in the United States, and the economic feasibility of large-scale extraction at McDermitt is not yet proven. The process is more energy-intensive and water-intensive than conventional brine extraction, and may generate large volumes of tailings. Mining operations are tentatively discussed for 2026, but this is exploration-phase work. Production at meaningful scale is years out. Maybe many years.

So why is a CTO writing about a lithium deposit?

Because the real story underneath the AI story right now is not about models or tokens or agents. It is about electricity.

The Grid Constraint Nobody Is Talking About

Electricity demand in the United States is rising faster than the power grid, much of it built decades ago, was designed to handle. Global power consumption by data centers is projected to grow from around 55 gigawatts to 84 gigawatts within the next two years, according to Goldman Sachs research. PJM Interconnection, the largest grid operator in the United States, serving over 65 million people across 13 states, projects it will fall a full six gigawatts short of its reliability requirements by 2027.

That is not a theoretical problem. That is a constraint that is already shaping where data centers get built, what they cost to operate, and what AI compute costs downstream of those facilities. The heavy-machinery companies building data centers are facing years-long backlogs for critical components like natural gas turbines. Getting new power generation equipment connected to the U.S. grid can take more than a decade.

Let that timeline sink in. The AI industry is scaling compute demand on a curve measured in months, against an energy infrastructure that responds in years and decades.
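A rough back-of-the-envelope check makes the mismatch concrete. Using only the figures cited above (the 55-to-84-gigawatt two-year projection and the decade-plus interconnection queue), a few lines of Python show the implied growth rate. These numbers are illustrative, not a forecast:

```python
import math

# Figures cited in the text (Goldman Sachs projection); illustrative only.
start_gw = 55.0   # projected global data center load today (GW)
end_gw = 84.0     # projected load two years out (GW)
years = 2

# Compound annual growth rate implied by that two-year jump
cagr = (end_gw / start_gw) ** (1 / years) - 1
print(f"Implied demand growth: {cagr:.1%} per year")

# At that rate, how long until demand doubles?
doubling = math.log(2) / math.log(1 + cagr)
print(f"Demand doubling time: {doubling:.1f} years")

# Compare against a 10+ year grid interconnection timeline
interconnection_years = 10
growth_over_queue = (1 + cagr) ** interconnection_years
print(f"Demand multiple over one interconnection cycle: {growth_over_queue:.1f}x")
```

At roughly 24% annual growth, demand doubles in about three years, while a single interconnection cycle can take ten. That gap is the whole story.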

The Dependency Chain

Follow the logic one layer deeper and the connection to enterprise AI becomes unavoidable. Each link in this chain constrains the next.

Lithium is the bottleneck for energy storage. Energy storage is the bottleneck for grid stability. Grid stability is the bottleneck for AI compute scaling. AI compute scaling is the bottleneck for the cost structure of every AI deployment your organization is planning for 2027, 2028, and beyond.

[Illustration: a stepped stone aqueduct whose narrowing channels, labeled LITHIUM, STORAGE, GRID, COMPUTE, and COST, each constrain the flow to the next.]

If McDermitt delivers at scale, it could meaningfully change the cost structure of battery storage infrastructure, which could ease the grid constraint, which could make AI compute cheaper and more available over time. But "could" and "over time" are doing enormous amounts of work in that sentence. We are talking about a five-to-ten-year horizon before any of that materializes in a way that changes the economics of enterprise AI.

The lithium story is a "maybe this gets better eventually" story. It is not a "this changes things now" story.

What This Actually Means for Enterprise AI Adoption

Right now, venture capital money is subsidizing compute costs. Enterprises can experiment with expensive AI workflows because the token costs are artificially low. The major AI providers are spending billions to capture market share, and they are passing those savings down to enterprise buyers in the form of pricing that does not reflect the true cost of the underlying infrastructure.

That window is not going to stay open forever. And the grid constraint suggests it may close from the supply side, not the demand side.

The current environment where AI experimentation is affordable is a temporary condition propped up by VC subsidies on one side and underpriced energy infrastructure on the other. Both of those supports have expiration dates.

This is the part where most analysts stop. They note the constraint, flag it as a risk, and move on. But for enterprises making AI adoption decisions today, the implication is concrete and urgent.

Organizations that do not move now, that do not build internal AI capability and organizational readiness while compute is relatively cheap and available, are going to find themselves in a fundamentally more expensive environment in 2028, 2029, and 2030. The cost of running AI workloads is not guaranteed to keep falling. It may, in fact, start climbing as demand outstrips constrained supply. And the organizations that waited, that treated the current cost environment as permanent rather than as a window, will face the worst of both worlds: higher costs and lower organizational readiness.

The Question Your AI Roadmap Is Not Asking

Every enterprise AI readiness assessment I have seen evaluates the same dimensions: data maturity, talent, governance, organizational culture. Those matter. But nearly all of them share a blind spot that is about to become very expensive.

None of them ask whether the compute infrastructure will exist at a price point that makes your use cases viable in three years.

Think about that. A Fortune 500 company commissions a $200,000 AI readiness assessment, builds a three-year transformation roadmap around it, and nobody in the room asks: "What happens to our unit economics when the VC subsidies dry up and the grid cannot keep pace with demand?" That is like planning a cross-country road trip without checking whether gas stations will exist along the route.

The data to answer this question is publicly available. PJM publishes capacity projections. The EIA tracks grid reliability. Every major hyperscaler discloses their capital expenditure plans in quarterly earnings. Energy futures markets price in supply constraints years ahead. The information is sitting there. Almost nobody in the enterprise AI advisory world is connecting it to adoption strategy.

That gap is going to cost some companies a lot of money.

The Worst of Both Worlds

Here is the scenario that should keep enterprise technology leaders up at night.

It is 2029. Your organization spent the last three years "monitoring AI developments" and "waiting for the technology to mature." The board was cautious. Leadership wanted to see proven ROI from other companies before committing. That felt responsible at the time.

Now compute costs are 40% higher than they were in 2026 because demand has outrun constrained supply. The organizations that moved early have spent three years building internal capability, training their people, iterating on processes, and compounding their operational advantages. Your competitors are running AI workloads at scale on institutional knowledge they started building when tokens were cheap.

Your organization is starting from zero, in a more expensive environment, against competitors who already have a three-year head start. That is the worst of both worlds: higher costs and lower readiness, at the same time.
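The arithmetic behind that scenario is easy to check. A short sketch, using the hypothetical 40% increase from the scenario above and an assumed $1M annual compute budget (both illustrative, not forecasts):

```python
# Illustrative arithmetic for the 2029 scenario: compute costs 40% above
# 2026 levels. What annual escalation does that imply, and what does the
# same workload cost at 2029 rates? All numbers are hypothetical.

increase = 0.40   # scenario: +40% from 2026 to 2029
years = 3

# Annual cost escalation implied by the three-year increase
annual = (1 + increase) ** (1 / years) - 1
print(f"Implied annual cost escalation: {annual:.1%}")

# Assumed baseline: $1M/year of compute at 2026 prices
baseline_2026 = 1_000_000
cost_2029 = baseline_2026 * (1 + increase)
print(f"Same workload at 2029 rates: ${cost_2029:,.0f}")
```

A 40% increase over three years is only about 12% per year, well within what a supply-constrained market can produce, and it compounds on top of whatever growth in usage your roadmap already assumes.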

This is not speculation. This is what the grid constraint math points toward. And a $1.5 trillion lithium deposit under a dormant volcano, no matter how massive, is not going to solve the problem in time.

The volcano is not the story. The clock is the story.

[Illustration: a stone harbor at low tide; one ship sits beached on a slipway marked COST beside a 2026 stone marker, while a second ship under full sail passes a HEAD START buoy in the distance.]

Eric Fraser

CTO & Co-Founder

Eric bridges the gap between AI capability and business outcomes. From red-teaming Anthropic releases to training Harvard think tanks, he brings rare firsthand experience at the frontier of human-AI partnership.

AI Technology, Training