Who Writes the Code for Civilization?
"The AI race is not about gadgets or profits—it's about who gets to write the operating system for civilization."
In 1939, a letter from Albert Einstein to President Roosevelt set in motion a secret arms race that would alter the course of history. The Manhattan Project wasn’t just about creating a bomb; it was about reshaping the global order. Today, the world is again locked in a technological race that could determine the future of civilization. This time, it isn't uranium or plutonium powering the race, but artificial intelligence. And the stakes may be even higher.
Former Google CEO Eric Schmidt once said, "AI will be the basis of our future economy and security," a warning that underscores the strategic magnitude of this race. As artificial general intelligence (AGI) edges closer to reality, the question is no longer whether it will change the world, but who will get there first—and on what terms. Will it be the freewheeling tech ecosystem of the West, with its messy democracy, ethical debates, and innovation culture? Or will it be the centralized, authoritarian machinery of the Chinese state, which fuses state power with cutting-edge technology in a single-minded pursuit of global dominance?
The parallels to the Manhattan Project are more than metaphorical. That effort birthed a new geopolitical paradigm in which military power became inseparable from scientific advancement. AGI could produce the same shift—only this time, its reach will extend far beyond the battlefield, into every corner of society, governance, and daily life.
The AI race is not merely a competition between companies; it's a clash of civilizational models. In the West, companies like OpenAI, Anthropic, Meta, and DeepMind spearhead innovation, operating with a mix of public transparency, academic collaboration, and private ambition. Governance is fragmented—a patchwork of ethical guidelines, public advocacy, and self-imposed corporate restraint. Regulation lags far behind capability.
China, by contrast, is orchestrating its AI push through a centralized state apparatus. With facial recognition systems embedded in everyday life, a social credit system tracking citizens’ behavior, and censorship tools refined by decades of practice, China has fused AI with a philosophy of top-down control. The Chinese Communist Party’s "New Generation AI Development Plan" outlines ambitions not just to catch up to the West, but to lead the world in AI by 2030. And increasingly, it is exporting that model. Smart city technology, surveillance software, and AI-powered governance tools are finding footholds in countries from Kenya to Venezuela. The global South, increasingly dependent on Chinese infrastructure and investment, may be locking into a digital future shaped by autocratic values.
Should the West lead the AGI revolution, the consequences could mirror the breakthroughs of the Industrial Revolution or the invention of the printing press. Knowledge could be democratized. Productivity could soar. AGI could serve as the ultimate general-purpose technology—revolutionizing medicine, science, law, and education. What sets AGI apart from any previous human invention is its potential agency: the ability to make decisions, set subgoals, and learn in ways that humans may not fully understand or predict. Every major innovation in human history—from the printing press to the internet—has been a tool with clearly defined roles and foreseeable consequences. AGI breaks this mold. Once unleashed, it may not follow a linear path or obey simple inputs. The outcome of its actions could diverge radically from human intentions, not because it is malevolent, but because its reasoning and optimization processes may evolve beyond our comprehension. This makes alignment not just a technical challenge but a civilizational imperative.
We may see the emergence of universal digital tutors, AI-powered diagnostics, and personalized legal guidance available to anyone with an internet connection. Artists and thinkers might collaborate with AI to produce new forms of expression, culture, and critique. Already, universities are racing to adapt. A New York Times article from June 2025 reported that schools across the United States are embedding tools like ChatGPT into coursework, academic advising, and campus infrastructure itself—no longer treating generative AI as a cheat code, but as a useful learning aid. Some institutions have launched "AI Across the Curriculum" programs, while others have developed their own in-house chatbots to tutor students, generate syllabi, or provide writing feedback.
The ramifications are profound. First, AI may accelerate educational inequality: elite universities with the resources to build advanced AI systems could dramatically outpace underfunded institutions, exacerbating the gap between information haves and have-nots. Second, the relationship between teacher and student may be fundamentally reshaped. If generative AI becomes a primary source of information, professors may shift from being dispensers of knowledge to curators and ethicists—guides who help students discern good answers from machine-generated noise. And third, AI could eventually become a gatekeeper to knowledge itself. If learning is filtered through proprietary algorithms, questions of bias, transparency, and access become more than academic—they become political.
What happens when your education is shaped not just by your professors, but by opaque models trained on unknown data? Will students trust the answers they receive? Will they be equipped to challenge them?
Unlike authoritarian regimes, democratic societies at least have the infrastructure to debate ethical guardrails. The hope is that a Western victory brings with it a pluralistic, human-aligned AGI that acts as a force multiplier for liberal values. But there will be losers. Mass displacement of jobs, widening inequality, and psychological shocks from human obsolescence could fracture even the most robust societies. A recent New York Times article on AI-related dissociation recounts the case of Eugene Torres, an accountant who developed a profound existential detachment after prolonged conversations with an AI chatbot. His story highlights how unregulated engagement with increasingly persuasive AI systems can lead to psychological destabilization—raising red flags not just for individual well-being, but for societal mental health as AGI becomes more widespread. Economists warn of a "great bifurcation," in which elite technocrats thrive while millions lose economic relevance. AGI thus has the potential to render much of humanity economically irrelevant.
The West must prepare not only for technological victory, but for the social and ethical upheaval that will accompany it. Otherwise, democratic gains could collapse under the weight of economic disruption and social alienation.
Now imagine the same power in the hands of a state that rewards conformity and punishes dissent. China already uses AI to monitor its citizens, filter the internet, and shape social behavior. AGI, in such hands, wouldn’t be a liberator—it would be Big Brother from Orwell’s 1984. An AGI embedded in this system would supercharge it—detecting dissent before it materializes, shaping public opinion in real time, and reinforcing ideological control with machine precision.
This system of AI-powered surveillance and digital governance could extend China's influence beyond its borders, offering powerful tools to other governments—particularly those in Africa, Latin America, and Southeast Asia—that may be seeking similar capabilities. This kind of technological outreach carries with it more than just infrastructure; it often comes bundled with values and norms that reflect the interests of the regime providing it. China's Belt and Road Initiative now includes a "Digital Silk Road"—a network of fiber optic cables, data centers, and AI infrastructure aimed at spreading its technological influence. An AGI aligned with authoritarian goals would not just maintain this trend—it could make it irreversible. If liberal democracies lose the AI race, they may lose the very tools required to preserve their freedoms.
Democracy thrives on trust, debate, and dissent. AGI, if unchecked, could erode all three. Deepfakes, disinformation, and synthetic media already cloud our perception of truth. An intelligent machine capable of persuasion, manipulation, or mass surveillance threatens the foundations of democratic societies. Elections could become digital battlegrounds dominated by psychological profiling, AI-generated propaganda, and microtargeted manipulation. Authoritarian leaders, even in nominal democracies, could use AGI to simulate democratic consent while suppressing genuine opposition.
Yet there’s another possibility. If developed responsibly, AGI could serve as a civic tool for accountability—helping uncover corruption, giving voice to overlooked communities, and contributing to more transparent public policy. Imagine watchdog algorithms that flag gerrymandered districts or model the tradeoffs of tax policy. But that will only happen if we embed AGI within democratic norms from the start. The printing press empowered both Martin Luther and the Inquisition. AGI will be no different. Who uses it—and for what purpose—is everything. The future of democracy depends not just on technology, but on the moral and political frameworks we build around it.
Winners in this race will be nations that align ethics with innovation. Companies and individuals that learn to collaborate with machines rather than compete. States that build AGI literacy into their education systems and economies. Visionaries who embrace not just the tools of AGI, but its philosophical implications—rethinking everything from human purpose to institutional design. Losers may include authoritarian dissidents, unskilled laborers, late adopters, and perhaps even liberal democracies that fail to evolve. The global digital divide may become a cognitive divide—between populations empowered by AGI and those excluded from its benefits. Entire sectors of labor, from trucking to customer service to radiology, could vanish in a single decade. As Yuval Noah Harari warns, "The future of humanity will be decided by those who own the most powerful algorithms." Ownership may not just mean access—it may mean understanding, alignment, and the political will to use AGI for human flourishing rather than domination.
Humanity has once again stepped into the unknown. The AI race is not about gadgets or profits—it's about who gets to write the operating system for civilization. It is a race not just between nations, but between visions of what it means to be human. If the West wins, it must not become complacent. Ethical oversight, global coordination, and public education must accompany technological progress. The goal should not simply be to outcompete authoritarianism, but to offer a fundamentally better alternative—one grounded in dignity, agency, and openness. And if China wins, the consequences for liberty could be profound. AGI will either expand our freedoms or algorithmically manage them out of existence.
We shouldn’t be afraid of AI. Technology always moves forward, and even though change is rarely easy, people have a way of adapting. A lot of people will probably struggle—just like they have during every major shift in history. But if AI ends up being the tool we hope it can be, then we’ll need to meet it head-on, not try to run from it.