Authored by Roger Bate via the Brownstone Institute,
Before the pandemic, I would have considered myself a technological optimist. Historically, new technologies have often been met with exaggerated fears and concerns. For example, railways were once believed to cause mental breakdowns, bicycles were thought to make women infertile or insane, and early electricity was blamed for a variety of societal issues. However, over time, these fears subsided, societies adapted, and living standards improved. This familiar pattern led me to believe that artificial intelligence would follow a similar trajectory: disruptive at first, potentially misused, but ultimately manageable.
The Covid crisis, however, shattered my confidence in technology. It wasn’t the technology itself that failed, but rather the institutions tasked with managing it.
During the pandemic, governments and expert bodies around the world responded to uncertainty with drastic social and biomedical interventions based on worst-case scenarios and enforced with unwavering certainty. Dissenting opinions were often dismissed rather than debated, emergency measures became long-term policies, and mistakes were rarely admitted, let alone anyone held accountable. This experience revealed a fundamental flaw in modern institutions: their inability to handle uncertainty without overstepping their boundaries.
This hard-learned lesson now plays a significant role in discussions about the regulation of artificial intelligence.
The AI Risk Divide
Concerns about advanced AI can generally be divided into two camps. One group, led by thinkers like Eliezer Yudkowsky and Nate Soares, believes that advanced AI inherently poses catastrophic risks. They argue that once AI systems reach a certain level of sophistication, they become uncontrollable and potentially dangerous, regardless of the intentions behind their creation.
Another camp, which includes figures such as Stuart Russell, Nick Bostrom, and Max Tegmark, also acknowledges the risks associated with AI but remains optimistic that proper alignment, governance, and gradual implementation can keep these systems under human control.
Despite their differing perspectives, both camps agree on one point: unrestricted development of AI is risky, and some form of oversight or regulation is necessary. Their disagreement lies in the feasibility and urgency of such measures. However, what often goes unexamined is whether the institutions responsible for providing this oversight are actually capable of fulfilling that role.
The experiences during the Covid crisis raise doubts in this regard.
Covid wasn’t just a health emergency; it was an experiment in expert-driven governance under uncertainty. Faced with incomplete data, authorities repeatedly opted for extreme interventions based on speculative threats. Dissent was often suppressed, policies were justified without transparent analysis, and fear of potential futures overshadowed rational decision-making.
What this pattern reveals is how modern institutions tend to behave when confronted with existential threats. Decisiveness, narrative control, and moral certainty take precedence, while admitting mistakes becomes challenging and precaution turns into dogma.
The takeaway here isn’t that experts are inherently flawed, but rather that institutions tend to reward overconfidence over humility, especially when political interests, funding, and public fear align. Once exceptional powers are claimed in the name of safety, they are rarely relinquished willingly.
These dynamics are now evident in debates surrounding AI oversight.
The “What if” Machine
A common justification for extensive state intervention is the “what if” scenario involving potential bad actors. This line of thinking often leads to preemptive, large-scale, and sometimes secretive government actions to avert potential catastrophes.
During the pandemic, similar reasoning was used to justify broad biomedical research initiatives, emergency authorizations, and social restrictions. The logic was circular: the state had to take drastic action against speculative threats in order to prevent dangers that were themselves speculative.
Now, AI governance is being framed in a similar manner. The concern isn’t just about the unpredictability of AI systems but also about the fear of such unpredictability justifying ongoing emergency governance—centralized control over computing, research, and information dissemination—under the guise of necessity.
Private Risk, Public Risk
An important distinction often overlooked in these discussions is the difference between risks posed by private entities and those posed by state authority. Private companies are subject to constraints—albeit imperfect ones—like liability, competition, reputation, and market forces. These constraints don’t eliminate risks entirely, but they create feedback mechanisms.
On the other hand, governments operate differently. When states act in the name of preventing catastrophes, feedback mechanisms weaken, failures are rationalized, costs are shifted, secrecy is justified, and potential future threats become tools for present-day policies.
Some AI experts have hinted at this issue. Bostrom, for instance, has warned about “lock-in” effects arising not just from AI systems but from governance structures established during moments of crisis. Anthony Aguirre’s call for global restraint relies on international bodies with a poor track record of humility and error correction. Even more moderate proposals assume that regulatory bodies can resist politicization and scope creep.
The experiences during Covid provide little reason to have confidence in these assumptions.
The Oversight Paradox
This dilemma lies at the core of the AI debate. If one believes that advanced AI must be regulated, slowed down, or stopped, it is most likely governments and international institutions that will have the authority to do so. However, recent behaviors of these entities do not inspire confidence in their ability to exercise that power judiciously and reversibly.
Emergency situations create lasting impacts. Powers acquired to address hypothetical risks tend to persist and expand. Institutions rarely downgrade their own significance. In the context of AI, this raises the concern that responding to AI risks could entrench rigid, politicized control systems that are harder to dismantle than any individual technological advancement.
The real danger, therefore, is not just the potential of AI surpassing human control but the fear of such a scenario justifying forms of authority that are challenging to live under or escape.
Rethinking the Real Risk
This is not an argument for complacency regarding AI or a denial of the potential harm powerful technologies can cause. Instead, it’s a call to broaden our perspective. Institutional failure is as critical a component of existential risk as any other. A system built on the assumption of benevolent, self-correcting governance is no safer than one built on the assumption of benevolent, aligned superintelligence.
Before the pandemic, skepticism of technology was often dismissed as human negativity bias—our tendency to believe that our era faces uniquely insurmountable challenges. Post-pandemic, however, skepticism appears more rooted in experience than bias.
Therefore, the central question in the AI discourse isn’t just about aligning machines with human values but also about whether modern institutions can navigate uncertainty without exacerbating it. If trust in these institutions has diminished, as the Covid crisis suggests it has, then calls for extensive AI oversight need to be scrutinized as rigorously as claims of inevitable technological progress.
The greatest risk may not be that AI becomes too powerful but that the fear of such power justifies controls that are harder to accept or break free from.
Roger Bate is a Brownstone Fellow, Senior Fellow at the International Center for Law and Economics (January 2023-present), Board member of Africa Fighting Malaria (September 2000-present), and Fellow at the Institute of Economic Affairs (January 2000-present).
