We Must Prepare For an AI Bubble Now


In 2008, when the housing bubble burst and the global economy crashed, policymakers were caught flat-footed. Despite months of worry about a housing bubble, signs of financial institutions on the road to crisis, and the possibility of a major economic downturn, they scrambled to put together an emergency response package. They struggled to understand the interconnectedness of the mortgage markets and the different parts of the financial system. And they had few "off the shelf" ideas for how to fix the system, leading to months of inaction before the sprawling, complicated Dodd-Frank Act was passed in 2010.

More than 15 years later, we are still feeling the consequences of these policy failures. The "too big to fail" banks are even bigger now than they were before 2008. Legislation in 2018 exempted some large, though not giant, banks from stricter oversight, and when Silicon Valley Bank failed in 2023, the federal government stepped in to guarantee even its uninsured deposits. More broadly, anger over bank bailouts and minimal accountability in the post-2008 period has contributed to the rise of economic populism on the political left and right. The Great Recession and the bailout of bankers, but not homeowners, also had lasting effects: the country now exists in what some have called a K-shaped economy. The wealthiest are doing great and spending freely, while everyone else is suffering and tightening their belts.

In recent months, technology industry leaders and investors, along with policymakers and pundits, have become increasingly worried about the possibility of an "AI bubble" bursting. As investments in AI rise, so do concerns about a potential bubble. It's time to start asking not whether there will be an AI crash, but what we should do today so that we are best prepared to respond to one tomorrow.

The AI business bubble 

At the core of the AI bubble is a basic math problem. There is a fundamental mismatch between the trillions being invested in the infrastructure to develop AI and the billions people and companies are spending to use AI. Specifically, J.P. Morgan Chase analysts anticipate $5 trillion of spending on AI infrastructure between now and 2030. This year alone, four tech companies—Amazon, Alphabet, Meta, and Microsoft—plan to invest $670 billion in AI infrastructure. Measured as a percentage of U.S. GDP, this is more than the Apollo space program, the U.S. interstate highway system, the railroads, and every other major capital expenditure in U.S. history except the Louisiana Purchase, according to the Wall Street Journal. Yet OpenAI and Anthropic have annualized revenues of only about $25 billion and $19 billion, respectively. Unless AI revenues grow by orders of magnitude soon, there's a Grand Canyon-sized gap that will be hard to cross.

Notably, this over-investment is being funded by the public. Big Tech companies are spending their cash, using capital from equity investments, issuing record levels of corporate bonds, and leveraging private credit, junk bonds, structured finance, asset-backed securities, and more. The size of the required investment means that nearly every financial market is involved.

This is the kind of financial jargon that makes most people’s eyes glaze over. But here’s what you need to know: if you use banks, have a retirement account, or depend on the financial system in any way, you too are bearing some of the risk. Your 401(k), life insurance plan, pension plan, and bank provide much of the money that turns into loans or investments in each of those financial mechanisms. 

Worse still, we’re seeing a rise in specific forms of financial engineering—circular financing, “off books” special purpose vehicles, huge private credit loans, and significant volumes of credit default swaps and asset-backed securities—which obscure a full understanding of the systemic risks. And, again, those mechanisms are funded, in large part, by retirees, small businesses, and others with savings, and all of us who depend on the financial system.

Even if AI is a technology that is widely adopted and transformative for society, its mismatch of investment and revenues, coupled with all these financial interconnections, could crash the economy.

Comparing crashes 

One scenario is that the AI bubble is like the late-1990s dot-com bubble, in which overinvestment led to the failures of hundreds of companies and a brief recession, but had the benefit of pushing the tech sector forward through investment in internet infrastructure. 

Another possibility is a version of the 2008 crash, in which the bursting bubble takes down the global economy. This is not an unreasonable worry, since the “Magnificent Seven” tech companies—Alphabet, Amazon, Apple, Meta, Microsoft, Nvidia, and Tesla—were responsible for a significant portion of America’s economic growth last year. Those companies are entangled through investments in each other and rival AI companies, and are enmeshed in financial engineering.

Whatever scenario plays out, policymakers need to prepare now by developing plans and proposals for what to do if—and indeed, when—there is an AI crash. In order to do so, they should learn three lessons from previous crises. 

First, in 2008, one of the central features of the crisis response was a commitment to bailing out capital, but little emphasis on helping ordinary people. The Troubled Asset Relief Program (TARP), for example, worked to shore up the banks, not homeowners. Indeed, when the Obama administration did develop foreclosure programs, Secretary of the Treasury Tim Geithner said they were meant to “foam the runway”—meaning the focus was to help the banks avoid a crash.

Second, policymakers spent far too little time thinking about and trying to fix the underlying structural problems that caused the crises in the first place. Despite the common trope that one should never "let a crisis go to waste," policymakers failed to think differently about affirmative policies in the sector that could help ordinary people and transform the economy. Dodd-Frank, for example, kept much of the financial system intact and simply overlaid an additional technocratic layer of long, complicated regulations on top of a problematic market and regulatory structure. The boldest proposals, for a modern version of postal banking and for public banking, only emerged years after Dodd-Frank passed, well after the window for new ideas had closed. Indeed, the one big, fresh idea that had been proposed before 2008—Elizabeth Warren's idea for a consumer financial protection bureau—made it into the legislation and was extraordinarily successful, though the second Trump administration has been working hard to shutter it.

Compare that to the banking reforms of the New Deal, a reaction to the Great Crash of 1929. That system attacked the core structural problems in the financial markets: creating deposit insurance, prohibiting financial conglomerates, and imposing simple structural limits on bank activities. It worked, without another major crisis, until it was watered down in the 1980s and then abandoned in the decades that followed.

Third, during the Great Recession, there was little accountability for the people who caused the crisis directly or for those whose wrong policy choices enabled and exacerbated it. While one obscure, mid-level banker was imprisoned, none of the heads of the major banks were prosecuted for crimes or went to jail. In comparison, in the 1930s, the head of the New York Stock Exchange, Richard Whitney, went to prison. More than a thousand bankers went to jail after the savings and loan scandal in the 1980s. And after the Enron-era accounting scandals, the top corporate bosses were prosecuted. As ProPublica's Jesse Eisinger has observed in his book on the lack of accountability for corporate crimes, the Justice Department today prosecutes disproportionately few high-level white-collar criminals.

What is less discussed is that policymakers facilitated these crises. They pushed policies—like financial deregulation—that enabled the behaviors that led to the crises. Larry Summers, for example, was a critical figure in pushing for deregulation of the financial sector in the 1990s. But Summers was not ostracized from the policy community—or even sharply questioned—for getting one of the most consequential decisions in U.S. economic history wrong. Instead, he was chosen to lead the response to the crisis from the National Economic Council and then spent the following decade as one of the most influential figures in economic policy, before recently stepping back amid evidence of his close relationship with Jeffrey Epstein.

When elite policymakers fail spectacularly because their entire worldview was wrong, but then are rewarded with more power, prestige, and influence, what does that say to people about government? Why would you ever trust policymakers or elites if they never have to account for their mistakes? They should not be entrusted with new policy roles or influence.

Preparing for a crash 

"Plan beats no plan," former Treasury Secretary Timothy Geithner once said. It is a true but tragic remark, given how unprepared policymakers were in 2008. For this time to be different, policymakers need to have plans ready. And that means getting prepared now, before a possible catastrophe strikes.

Policymakers should start developing and debating proposals now to address the underlying structural problems in the AI sector that are likely to be drivers of a future crash. They need to understand, and start developing reforms to address, circular financing, opaque debt financing, massive (and often non-transparent) subsidies by state and local governments, and the integration and interconnections that have led to sprawling conglomerates. They need to start proposing new, imaginative ways to develop the AI sector so that it works for ordinary people and small businesses—like public cloud computing services and worker protections—rather than leaving the fate of American society to the AI oligarchs. They should commit in advance to robustly prosecuting fraud under federal and state criminal laws, to signal now that illegal behavior will not be tolerated. And they need to commit to helping ordinary people, not bailing out the AI companies.

No one knows how or when this anticipated crash will occur. So no specific proposal, including one we recently published, will map perfectly onto what actually happens. But as Dwight D. Eisenhower, both as NATO Supreme Commander and as President, remarked, it is the act of planning, not the specific plans, that is indispensable. This is why the time for proposals, hearings, and debates is now.

If there is an AI-induced crisis and crash, the window of opportunity to address structural problems in the sector, hold bad actors accountable, and build a different kind of future—one with less inequality and more opportunity—will close quickly.

Unless policymakers get prepared now, they will miss it. And people, once again, will be furious that our leaders failed us.
