Why Do Americans Fear The Advent Of AI More Than People In Other Countries?

Elon Musk and Donald Trump in the Oval Office on May 30, 2025
You can’t turn around without bumping into an opinion about AI and its risk to jobs. The tech magnates assure us it will replace unprecedented numbers of white-collar workers, predicting double-digit unemployment. The economists, myself included, point out that, at least thus far, there are at best only weak correlations between AI workplace penetration and soft hiring or higher-than-average unemployment.
For the lay of this land, you won’t do better than Ezra Klein’s latest op-ed summarizing the debate. Though I’ll summarize the argument, one I’ve made often up here, the point of this post is not to rehearse that part of the debate. It’s to noodle over why U.S. citizens are so much more negative about AI than those in other advanced economies.
AI and Jobs: Will This Time Be Different?
First, we should be clear that AI is the thing we talk about the most and know the least about. With that caveat out of the way, allow me to add to the noise.
The underlying fact that should always guide one in this discussion is this: productivity—output per hour, ergo a metric of technological progress in economic production1—trends up over time, and so do jobs and hours worked. For all our technological gains, the unemployment rate, outside of recessions, tends to stay pretty low. (Yes, I’ve argued there’s often too much slack in the labor market, but I’m talking about unemployment at 5.5 percent instead of 3.5 percent, while the AI doomers are talking about massive joblessness.)
Thus, there must be an intervening variable, which is demand. Technology replaces some functions in the workplace and introduces new ones.
Then there’s the complementary aspect of technology, i.e., the fact that AI makes incumbent workers more productive. Ezra and others are discussing this under the rubric of the “Jevons paradox,” the idea that when a resource becomes cheaper, we use more of it. Jevons, a British economist writing in the mid-1800s, noted the paradox regarding the steam engine: newer engines used half as much coal to generate the same amount of power as existing ones. Instead of demand for coal tanking, it soared, as did the UK’s industrial production.
In the AI context, rather than being replaced, software engineers, e.g., can do a lot more with AI’s help. As Ezra points out, “Claude Code is a marvel, yet demand for software engineers is booming.”
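The Jevons logic turns on how responsive demand is to the effective price drop that an efficiency gain delivers. A minimal sketch, with toy numbers and a constant-elasticity demand curve of my own choosing (none of this is from the post): when the price elasticity of demand exceeds 1, total resource use rises even though each unit of output needs less of the resource.

```python
def coal_demand(efficiency_gain, elasticity, base_power=100.0, coal_per_unit=1.0):
    """Total coal used after an efficiency gain, under constant-elasticity demand.

    efficiency_gain: factor by which coal-per-unit-of-power falls (2.0 = half the coal)
    elasticity: price elasticity of demand for power (a positive number)
    """
    # The effective price of power falls in proportion to the efficiency gain.
    price_ratio = 1.0 / efficiency_gain
    # Constant-elasticity demand: quantity scales as price ** (-elasticity).
    power_demanded = base_power * price_ratio ** (-elasticity)
    # Coal needed per unit of power is lower, but more power is demanded.
    return power_demanded * (coal_per_unit / efficiency_gain)

baseline = coal_demand(1.0, 1.5)   # 100 units of coal before any efficiency gain
inelastic = coal_demand(2.0, 0.5)  # elasticity < 1: coal use falls despite cheaper power
elastic = coal_demand(2.0, 1.5)    # elasticity > 1: coal use rises -- the Jevons paradox
print(baseline, inelastic, elastic)
```

The software-engineer story maps onto the elastic case: if cheaper code raises the demand for software more than proportionally, total demand for engineering labor can rise even as each engineer needs less time per feature.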
I don’t want to get too far out over my skis here. This time might be different, and surely many workers will be displaced. More on that in a moment. But the point here is that I’d listen more closely to the economists on this one, at least so far.
AI Less Popular Than ICE!?
So why then, in a recent poll, is AI less popular than those masked ICE bandits?
For one, we mere humans are risk averse, and if someone tells us that there’s a technology coming that can replace us, of course we’re going to be fearful. That’s universal.
But I maintain that there’s a unique U.S. version of these worries. Part of this may stem from adoption differences.
But a bigger part, I submit, is distrust that the gov’t will implement the guardrails needed to give the workforce a better chance to exploit Jevons-style workplace complementarities versus getting replaced.
In their tracking of international sentiment re AI, researchers at Stanford University report:
The United States reported the lowest trust in its own government to regulate AI responsibly of any country surveyed, at 31 percent. The global average was 54 percent, with Southeast Asian countries leading (Singapore 81 percent, Indonesia 76 percent).
Globally, the EU is trusted more than the United States or China to regulate AI effectively. Across 25 countries in Pew's 2025 survey, a median of 53 percent said they trust the EU, compared to 37 percent for the United States and 27 percent for China.
At least five factors combine to generate this result.
First, there’s more of a “what’s bad for Main Street is good for Wall Street” vibe over here. When CEOs on U.S. earnings calls talk about layoffs, their share prices go up. Though we’re probably getting closer to each other, there’s still less social solidarity here than in most other advanced economies.
Second, there’s much greater discomfort here with regulatory guardrails and safety nets. Research has shown that if people are confident that social policy will catch their fall if an entrepreneurial risk goes south, they’re more likely to take such risks. If you believe your gov’t is likely to shield you from most of the downsides from a new technology, you’re prone to be less worried about it. Relative to most other advanced economies, workers here operate without a net.
Third, AI firms have very deep pockets and have long been purchasing political protection, both against regulation and against candidates who tap into the American public’s deep concerns about AI’s downside risks. No other advanced economy comes close to us in terms of buying political influence, which in this context reasonably puts fear in the hearts of working Americans.
Fourth, as I’ve endlessly underscored up here, people are already deeply stressed about affordability. The fact that in too many cases, their paycheck isn’t covering their needs makes them a bit touchy re the prospect of losing that paycheck to an LLM.
Fifth, nobody can trust the grift operation known as the Trump administration to have their back on this. Even putting that freakshow Musk aside, Trump has literally had the tech bros in his office giving him gold. That does not bode well for any protections from their excesses.
Yet Another Opening for Democrats
You know my methods, Watson. Hope for the best, prepare for the worst. Ezra and the rest of us suggesting this time might not be so different might be wrong. Which means there’s a huge opening here for Democrats to present a robust AI insurance program that’s responsive to points 1-5 above. Yes, it should bolster existing safety net programs, like unemployment insurance, but while that’s essential for interim job displacement, over the long term people want the dignity of a job, and even more so, they want their kids to have the opportunities to build successful careers.
This requires education and training programs that boost complementarity and dampen displacement probabilities. It means looking at wage insurance ideas and perhaps even job guarantees—public jobs programs—should extensive, lasting displacement actually occur. Keynes knows there’s a ton of work to do in this economy—I’m thinking health care, human services, child care, personal-touch stuff, not to mention music, literature, and other jobs—that no AI agent can realistically perform (don’t tell me AI writes great books—I’ve seen such work and it sucks).
This shouldn’t be hard, Democrats. Even if the historical odds suggest we should be okay, as greater demand will more than soak up the extra supply, Americans are justly concerned about the risks of AI to their and their children's livelihoods, risks which loom a lot larger here than in other economies.
The time is thus nigh to craft this policy agenda and to tell the people about it. Happy to help, but let’s get to it!
(1. I’m thinking of total factor productivity, meaning output net of hours, capital investment, and other inputs, so what’s left is considered a proxy for tech gains in production.)
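The footnote’s notion of “output net of inputs” is conventionally computed as a Solow residual in growth accounting. A minimal sketch, with an assumed capital share of 0.35 and made-up growth rates (illustrative only, not figures from the post):

```python
def solow_residual(output_growth, capital_growth, labor_growth, capital_share=0.35):
    """Total factor productivity growth as the Solow residual.

    Growth rates are log changes (e.g., 0.03 for 3 percent). Whatever output
    growth is not explained by capital and labor inputs, weighted by their
    income shares, is attributed to TFP -- the proxy for tech gains.
    """
    labor_share = 1.0 - capital_share
    return output_growth - capital_share * capital_growth - labor_share * labor_growth

# Example: 3 percent output growth, 4 percent capital growth, 1 percent labor growth
# leaves a residual attributed to technological progress.
print(solow_residual(0.03, 0.04, 0.01))
```

If all inputs grow at exactly the rate of output, the residual is zero: by this accounting, no technological progress occurred.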
Jared Bernstein is a former chair of the White House Council of Economic Advisers under President Joe Biden. He is a senior fellow at the Center on Budget and Policy Priorities. Please consider subscribing to his Substack.