AI isn’t just being developed in the relatively free societies of the West, it is also being developed by the Communist Party of the People’s Republic of China…. The single greatest risk of AI is that China wins global AI dominance and we — the United States and the West — do not.
So writes Marc Andreessen in a June 2023 blog post. Andreessen — who has a vested interest in undercutting Chinese firms as one of the largest investors in rival U.S. tech companies — argues that China’s AI is fundamentally undemocratic and therefore evil, while AI that is made in America “will save the world” by giving CEOs, government officials, and military commanders infinitely knowledgeable AI assistants.
Andreessen’s view is shared widely across the tech industry and, increasingly, among U.S. government officials responsible for foreign policy, who argue that only U.S. technology leadership can stop the world from descending into a dystopia of AI-powered authoritarianism orchestrated by China. America’s alternative is to promote what U.S. National Security Advisor Jake Sullivan calls “democracy-affirming” AI alongside select allies.
In recent years, the U.S. has circumvented international institutions and detracted from existing multilateral work on responsible AI by creating parallel, U.S.-led AI policy processes that attempt to promote homegrown “democracy-affirming” AI. Washington and Silicon Valley have been among the primary sources of friction in international responsible AI initiatives. The U.S. has resisted calls to decrease the power of multinational tech companies, obstructed processes to place limits on autonomous weapons, and formed political blocs in opposition to China’s AI development. As a result, little progress has been made to limit the power of U.S. tech companies, even as they use increased investment in AI to strengthen their influence over governments and the global economy.
U.S. efforts to advance “democracy-affirming” AI have downsides in addition to frustrating international AI regulation. Digital trade agreements that defend the interests of tech companies will disempower working people by restricting their ability to protect their data. Misguided U.S. policies that aim to hollow out China’s AI sector on the basis that it facilitates human rights violations are likely to backfire. And making tech companies a de facto arm of the U.S. national security state is more likely to boost their profits and undermine accountability than to uphold “shared democratic values.”
The international community has been working for years to develop rules to govern AI. From UNESCO’s “normative instrument” on AI ethics to consultations like the Internet Governance Forum to UN Secretary-General António Guterres’ proposed Global Digital Compact to “make transparency, fairness and accountability the core of AI governance,” there is no shortage of democratic mechanisms for governing AI that include every country. In 2018, the UN’s High-level Panel on Digital Cooperation convened global experts from industry, academia, civil society and governments to devise recommendations for how to regulate AI and other emerging technologies.1 Those recommendations, which cover everything from algorithmic auditing to military uses of AI, have since been taken up by the UN and a large number of governments. Nevertheless, the U.S. often behaves as though it alone should have the right to determine how AI is governed.
Only global cooperation can pave the way for safe and trustworthy AI, but the U.S. has been a thorn in the side of these responsible AI efforts. American companies have pushed for international endorsement of self-regulation — company-led oversight of AI governance that insulates firms from government scrutiny and public accountability — and for prohibitions on profit-sharing schemes like digital services taxes.2
Meanwhile, government officials have proposed that the U.S. should partner with Britain, Germany, France, and Japan to “promote the development of AI consistent with liberal democratic values.” This process has started to take shape in the form of the U.S.-EU Trade and Technology Council and the Quadrilateral Security Dialogue (“the Quad”) between the U.S., Japan, India, and Australia.3 Through these U.S.-led alliances, Washington is working to write global rules for AI with each of the seven next-largest economies, except for China.4 These alliances exclude smaller economies whose populations are likely to bear the brunt of harms stemming from AI systems, whether in terms of algorithmic bias, nonconsensual data extraction, or unregulated testing of risky AI use cases.
The U.S.-EU Trade and Technology Council and the Quad have both committed to ensuring that AI provides benefits in line with “shared democratic values,” but their approach is counterproductive. The Trade and Technology Council favors a risk-based approach to AI governance, meaning that AI models would likely be regulated based on pre-deployment risk assessments. It is no coincidence that U.S. tech companies favor this approach, as it imposes front-end compliance costs rather than back-end liability for harm.5 The Quad has made less progress on regulatory harmonization, but it has been an effective vehicle for escalating the tech war with China. For instance, Japan may follow America’s lead and further restrict outbound investment in China’s AI sector, while India is positioning itself as a safe alternative for investment, further balkanizing AI development and potentially limiting international cooperation that would reduce risks.6
In addition to gumming up global cooperation on AI governance, these U.S.-led blocs reinforce American efforts to water down AI regulation in Europe and Asia. The U.S. has succeeded in barring civil society groups from participating in drafting the Council of Europe’s AI treaty, which will help limit the treaty’s scope to governments and exclude companies from regulation.7 President Biden’s Indo-Pacific Economic Framework has been described as a “digital trade sneak attack on AI oversight and regulation” as it may preemptively ban countries from passing rigorous privacy rules or other legislation that disproportionately impacts the tech industry.8 For their part, U.S. companies are the world’s leading lobbyists against responsible AI legislation, with entire teams dedicated to weakening rules like the EU’s AI Act.
It is reasonable for the U.S. to develop its own framework for governing AI and to consult with other countries in doing so. AI governance strategies differ significantly from region to region, and there is no reason why the African Union’s AI strategy needs to be the same as that of the UN or the EU. However, the U.S. has established technology alliances with countries under its security umbrella that directly challenge the legitimacy of global AI regulatory processes. American companies profess their commitment to responsible AI even as they sabotage comprehensive regulation. U.S. control over global AI governance appears to be what “democracy-affirming” AI actually means to Washington.
The U.S. frequently claims that only American leadership can ensure that AI will promote the common good. Jake Sullivan, the U.S. National Security Advisor, has said the U.S. is leading a coalition of democracies around the world to “promote democracy-affirming and privacy preserving technologies.”
But policymakers have little ground to stand on when they seek to export the American model of AI governance.9 Unlike other countries, the U.S. lacks national data protection legislation that would regulate the inputs to AI systems. The White House’s flagship initiative for making AI companies accountable to the public, the AI Bill of Rights blueprint, is in its own words “non-binding and does not constitute U.S. government policy.” The AI Risk Management Framework from the U.S. National Institute of Standards and Technology is “intended for voluntary use” and is backed by several major tech companies as an alternative to a new regulatory agency for AI. Though President Biden will significantly strengthen both of these policies with an executive order in the coming weeks, this step to regulate federal AI procurement will fall far short of legislative initiatives in other countries. And even if the U.S. were an exemplar of AI governance at home, it is unlikely this would translate into a coherent approach to advancing responsible AI via U.S. foreign policy.
A prime example of the “democracy-affirming” technologies that the U.S. government supports is AI for military applications. As Pentagon Chief Information Officer John Sherman said recently, “One thing we take pride in… is to be responsible in how we apply AI and develop it. Not in ways that you see in China and Russia and elsewhere... We can do this, and create decision advantage for our warfighters, correctly with our democratic values.”
Lieutenant General Richard Moore, a senior Air Force official, subsequently said the quiet part out loud: the U.S. is best positioned to build ethical AI because “there are societies that have a very different foundation than ours… our society is a Judeo-Christian society and we have a moral compass. Not everybody does.”
The Pentagon has asserted its authority to use AI in a wide variety of systems in war zones around the world. Its 2023 policy on autonomy in weapons systems deemphasizes the need for human control over autonomous weapons and approves the sale and transfer of such weapons. Moreover, any limits on the use of autonomous weapons can be waived “in cases of urgent military need.” The U.S. already sells more arms than the next four leading countries combined — bankrolling new, AI-enabled product lines for defense contractors lays bare the shallowness of Washington’s commitment to “democracy-affirming” technologies.
On the world stage, American democratic values are often cast aside to advance military priorities. The U.S. government has, along with Russia, been a leading opponent of international constraints on the development and use of autonomous weapons systems.10 Though almost 100 countries support a ban on autonomous weapons, the U.S., Russia, Israel and Britain have long undermined UN treaty negotiations.11 Over the last decade, the U.S. has opposed nearly all international proposals to restrict autonomous weapons, causing talks to grind to a halt. Another notch in the belts of America’s generals, another loss for international efforts to govern AI.
The U.S. expressed a more hawkish vision for “democracy-affirming” AI at the 2021 Summit for Democracy, a U.S.-led gathering of “like-minded democracies” that focused in part on how to align democracy with digital technologies. In the wake of the summit, the White House announced it would enact export controls “to ensure that critical and emerging technologies work for, and not against, democratic societies.”
This proposal came to fruition in October 2022, when the U.S. Commerce Department restricted the sale of advanced semiconductors and production equipment to Chinese companies, slowing China’s effort to train large AI models using leading-edge chips.12 As the text of the restrictions states, part of the legal justification for export controls is that
advanced AI surveillance tools, enabled by efficient processing of huge amounts of data, are being used by [China] without regard for basic human rights to monitor, track, and surveil citizens… [there is a] risk of these items being used contrary to the national security or foreign policy interests of the United States, including the foreign policy interest of promoting the observance of human rights throughout the world.13
Congressman Mike Gallagher, Chairman of the House Select Committee on the Strategic Competition Between the U.S. and the Chinese Communist Party, outlined the motivation behind such restrictions in more hyperbolic terms: “The Chinese Communist Party wants to use AI to perfect its model of total techno-totalitarian control… we are the good guys, and there are bad guys in the world that want to use this technology for bad purposes.”
Although promoting human rights is one of the core legal justifications for U.S. constraints on China’s semiconductor industry, there is no evidence that export controls will improve human rights.15 China already has sophisticated surveillance infrastructure that will not be undone by slowing its semiconductor industry’s growth. CCTV cameras, iris scanners, and WiFi sniffers that cull data from nearby devices do not rely on advanced chips. Human beings make up the backbone of China’s surveillance state, not microelectronics; China has two million police officers — more than any other country — who collect, label, and analyze surveillance data. China’s human rights abuses cannot be wished away by sanctioning Chinese semiconductor equipment manufacturers.
Unilateral semiconductor export controls, which purport to advance democracy, have the ironic effect of undercutting semi-democratic international institutions. The U.S. acted alone because it had no hope of getting such severe restrictions through the Wassenaar Arrangement, a multilateral institution that coordinates export control policy among 42 countries.16 But the U.S. still wanted to leverage its control over global supply chains, so it used a powerful economic weapon known as the Foreign Direct Product Rule to extend the reach of export controls overseas.17
The main extraterritorial targets of U.S. export controls were firms in the Netherlands and Japan that manufacture advanced lithography machines required to produce AI chips. After immense economic and political pressure from the U.S., both the Netherlands and Japan adopted their own rules to comply with U.S. export controls.18
China has filed a complaint at the World Trade Organization (WTO) alleging that these export controls restrict trade in one thousand products in contravention of WTO rules. However, U.S. Trade Representative spokesman Adam Hodge made clear in 2022 that the U.S. would not respect such rules:
The United States has held the clear and unequivocal position, for over 70 years, that issues of national security cannot be reviewed in WTO dispute settlement and the WTO has no authority to second-guess the ability of a WTO member to respond to a wide range of threats to its security.… The United States will not cede decision-making over its essential security to WTO panels.
While the WTO and the Wassenaar Arrangement are far from flawless bastions of democracy, America’s refusal to recognize the authority of the multilateral governance schemes it itself established after the Cold War shows that export controls are much more about maintaining U.S. dominance than upholding democratic values.
Beijing has retaliated against semiconductor export controls by limiting exports of raw materials for chip production, blocking mergers led by U.S. companies, and restricting procurement of memory chips from the U.S. company Micron. Tit-for-tat economic warfare has led to increasingly extreme calls for restrictions on China’s technology sector.19 The U.S. is considering every possible step to undermine China’s AI ecosystem and is likely to block China’s access to a wider variety of chips as well as U.S. cloud computing services in the coming months.
Export controls have helped worsen the relationship between the U.S. and China. Beijing’s ambassador to the U.S., Xie Feng, said of U.S. export controls: “This is like restricting the other side to wear outdated swimwear in a swimming competition while you yourself are wearing a Speedo Fastskin. So this is not fair.”20 Wang Yi, China’s top diplomat, told Secretary of State Antony Blinken during his recent visit to Beijing that such measures are a serious impediment to improving U.S.-China relations. Wang demanded that the U.S. “lift illegal unilateral sanctions against China, stop suppressing China’s scientific and technological advances, and not wantonly interfere in China’s internal affairs.”21
This is not just rhetoric — the tech war has meaningfully increased the likelihood of military conflict between the U.S. and China. As the U.S. and China continue to expand export controls, détente between Washington and Beijing becomes even less likely. Defense contractors are salivating over the potential of high-tech war with China, which could boost sales for their weapons much as the wars in Iraq and Afghanistan did. Taiwan, already the likeliest flash point, has drawn Washington’s attention due to its position at the center of the global semiconductor supply chain, prompting former U.S. officials to suggest the U.S. should consider bombing Taiwan’s semiconductor factories to deter China from seizing them.22
As China’s military capabilities grow, it has demonstrated an increased willingness to respond to perceived challenges to its authority over Taiwan with force.23 China’s leadership views reunification with Taiwan as among its very highest political priorities, leading President Xi Jinping to call for the military to improve its ability to “fight and win” local wars. Export controls do not advance democracy, but they do seem likely to reduce the chance of long-term peace in the Pacific.
**We must ensure that AI advances international competitiveness and national security.** While we may wish it were otherwise, we need to acknowledge that we live in a fragmented world where technological superiority is core to international competitiveness and national security. AI is the next frontier of that competition. With the combination of OpenAI and Microsoft, and DeepMind within Google, the United States is well placed to maintain technological leadership. Others are already investing, and we should look to expand that footing among other nations committed to democratic values. But it’s also important to recognize that the third leading player in this next wave of AI is the Beijing Academy of Artificial Intelligence…. The United States and democratic societies more broadly will need multiple and strong technology leaders to help advance AI, with broader public policy leadership on topics including data, AI supercomputing infrastructure and talent.
Microsoft Vice Chair Brad Smith’s assertions about the threat posed by AI in China are overstated. Large AI models built by U.S. companies like Anthropic and Meta outperform any Chinese competitor on standard performance benchmarks. Since 2021, the U.S. has had more venture capital investment in tech than the next 30 countries combined. And while Chinese VCs publicly claim China is just one year behind the U.S. in developing large AI models, in private they admit the gap is closer to three years.
It is in U.S. tech companies’ interest to make bogeymen out of their Chinese competitors. Both incumbents and new power players in AI realize that, as with the fight to regulate social media, the best way to dodge government scrutiny is to allege that regulation will advantage China and that U.S. national champions in tech ought to be uplifted, not constrained. As a result, many U.S. companies have grown closer to the military, partnering with the Pentagon even as they lobby against stringent AI regulation.
Neither trillion-dollar companies nor the U.S. national security state will empower ordinary people using artificial intelligence — their priorities lie in accumulating money and power by hyping the China threat. In the words of Meredith Whittaker, President of Signal, “If you define democracy as a representative government where we get to vote for Google or Amazon or Facebook or Microsoft, then yeah you can democratize AI.… There is a contradiction inherent in that framework. How do you democratize a technology that itself… is a product of concentrated power?”
In searching for ways to “democratize” AI, a better approach is to look to proposals made by stakeholders that Washington has blocked from participating in international AI negotiations. For example, there is a growing push among international civil society for AI governance to advance economic democracy. Few governments have proposed any solution to the fundamental, fatal flaw in the tech industry: all the power is held by a handful of billionaire CEOs. One proposal to help rectify this power imbalance is to use the framework of solidarity — the equal and just sharing of prosperity and burdens — to more equitably allocate the gains from AI. In practice, this might look like adequately compensating anyone whose actions provide data to train an AI model and redistributing the wealth AI generates for tech companies and their executives.
Another step that could make AI systems less likely to undermine democratic values would be preventing companies from harvesting limitless data without consent. The burdens of this extraction fall disproportionately on poor countries, which are less likely to have data protection laws and adequate state capacity to block incursions by massive tech companies. Sabelo Mhlambi, a natural language processing researcher and former fellow at the Berkman Klein Center, has argued that such “data colonialism” should be rejected and that social protections for communities should be prioritized over speedy rollouts of new AI systems.24
These proposals for altering the political economy of AI development show a way forward for making AI more compatible with democracy that bears no resemblance to proposals from the U.S. government or multinationals. Data sovereignty, group rights, and economic justice are nowhere to be found in American proposals for how to make AI comport with “shared democratic values” — indeed, these values are not shared by U.S. elites.
The geopolitical tussle over who gets to build the biggest, baddest AI first does not affirm democracy. Economic and diplomatic warfare with China over AI does not promote human rights. Building autonomous weapons and watering down regulations does not further freedom. We need a truly global vision for how to make AI benefit people, not corporations; communities, not defense contractors.
High-level panels are consultative UN bodies that are established by the UN Secretary-General, usually in response to requests by UN Member States. They are tasked with issuing concrete recommendations to Member States on a specific issue area such as internal displacement or water management. The High-level Panel on Digital Cooperation issued its recommendations in 2019 and they were officially presented to the UN General Assembly in 2020. ↩
In the words of Volker Türk, the UN High Commissioner for Human Rights, “Two schools of thought are shaping the current development of AI regulation. The first one is risk-based only, focusing largely on self-regulation and self-assessment by AI developers. Instead of relying on detailed rules, risk-based regulation emphasizes identifying and mitigating risks to achieve outcomes. This approach transfers a lot of responsibility to the private sector. Some would say too much – we hear that from the private sector itself. It also results in clear gaps in regulation. The other approach embeds human rights in AI’s entire lifecycle. From beginning to end, human rights principles are included in the collection and selection of data; as well as the design, development, deployment and use of the resulting models, tools and services.… we need to resist the temptation to let the AI industry itself assert that self-regulation is sufficient, or to claim that it should be for them to define the applicable legal framework.” ↩
The U.S.-EU Trade and Technology Council was established in June 2021 and has a broad remit that includes bilateral meetings between top officials to resolve trade disputes and jointly develop standards for AI, green tech, and governance of digital platforms. The Quad was created in 2007 but was inactive for nine years until it was re-formed in 2017. In addition to annual joint naval exercises, the Quad has coordinated policy related to climate, public health, and emerging technologies. ↩
China has recently taken a somewhat similar tack. At the BRICS Summit in August, President Xi Jinping announced the formation of an AI Study Group with Brazil, Russia, India, and South Africa, saying “we need to jointly fend off risks, and develop AI governance frameworks and standards with broad-based consensus, so as to make AI technologies more secure, reliable, controllable and equitable.” This initiative has not yet begun, so the degree to which it will negatively impact global responsible AI efforts remains to be seen. ↩
A March 2023 joint statement of the U.S.-EU Trade and Technology Council reads “the United States and the European Union reaffirm their commitment to a risk-based approach to AI to advance trustworthy and responsible AI technologies. Cooperating on our approaches is key to promoting responsible AI innovation that respects rights and safety and ensures that AI provides benefits in line with our shared democratic values.” The September 2021 Quad Principles on Technology Design, Development, Governance and Use begin by saying “the Quad countries (Australia, India, Japan, and the United States of America) affirm that the ways in which technology is designed, developed, governed, and used should be shaped by our shared democratic values and respect for universal human rights.” ↩
After the G7 summit in May 2023, the leaders of Japan, the U.S., the EU and several European countries agreed to establish a G7 working group on generative AI called the Hiroshima AI Process. Secretary of State Antony Blinken and Commerce Secretary Gina Raimondo recently wrote “we will continue to work with the G7 through the Japan-led Hiroshima Process…We want AI governance to be guided by democratic values and those who embrace them, and G7-led action could inform an international code of conduct for private actors and governments, as well as common regulatory principles for states.” Relatedly, the communiqué from the summit states “we should counter unjustified obstacles to the free flow of data,” advancing a top priority for multinational tech companies. In September, G7 digital ministers issued a joint statement acknowledging the need to use trustworthy AI and foundation models “in furtherance of democracy, human rights, the rule of law, and our shared democratic values and interests.” ↩
Though the U.S. is merely an observer to the Council of Europe, it holds significant sway and has “been pushing to curb the scope of the treaty as of day one.” In June, a leaked draft of the U.S. negotiating position revealed that it has advocated for each country to be allowed to decide whether the AI treaty applies to its companies. ↩
The Indo-Pacific Economic Framework (IPEF) is being modeled on previous, more expansive trade agreements such as the Trans-Pacific Partnership and the U.S.-Mexico-Canada Agreement, both of which included “non-discrimination” provisions forbidding governments from adopting domestic regulations that may have a disproportionate effect on the tech industry. IPEF is also on track to prohibit countries from enacting legislation that requires companies to store their data locally or share their source code with the government, preventing regulators in the Indo-Pacific from adequately auditing U.S.-made AI systems. As with other trade deals, the draft text of IPEF is kept secret, though lobbyists for multinationals have preferential access. According to advocates familiar with the draft text, major tech companies “are promoting a form of international preemption. Their goal is to use closed-door ‘trade’ negotiations to secure binding international ‘digital trade’ rules that limit, if not outright forbid, governments from enacting or enforcing domestic policies to counter Big Tech privacy abuses and online surveillance.” This has led to pushback from lawmakers such as Senator Elizabeth Warren and small- and medium-sized tech companies like Yelp. ↩
It goes without saying that Chinese AI governance is not a global model. China’s laws regulating data collection do not prevent its security state from tracking the movements, biometrics, and online behavior of hundreds of millions of people. China has 500 million surveillance cameras, more than half of the world’s total, which feed facial recognition algorithms with valuable data. Internationally, China seeks to reshape internet protocols to facilitate surveillance, gain leverage over other countries by controlling telecommunications networks, and export AI-enabled surveillance infrastructure. Countries around the world should have the space to develop their own models for AI governance and not be constrained by the preferences of great powers. ↩
China invests heavily in autonomous weapons and has allegedly exported some such systems. But in terms of international regulation of autonomous weapons, it has staked out a somewhat more mixed position. In 2015, China suggested there should be a preemptive international ban on “certain evil weapons.” In 2018, China called for the international community to “negotiate and conclude a succinct protocol on the prohibition to ban the use of fully autonomous lethal weapons.” China has also recognized that fully autonomous weapons should be subject to international humanitarian law and that they likely do not comply with the Geneva Convention. At the same time, China does not support restrictions on the development of autonomous weapons and has helped water down some international proposals for regulation. ↩
Negotiations at the UN have principally taken place via the Convention on Conventional Weapons (CCW), where countries have been discussing autonomous weapons since 2014; the CCW has 126 states parties, the vast majority of which support adding to the CCW a new, legally binding instrument prohibiting the use of autonomous weapons. There is precedent for such a ban — in 1995, states adopted an additional protocol to the CCW that banned the use of blinding laser weapons, which were still under development. However, the CCW works by consensus, meaning opposition from a single state is enough to torpedo any analogous protocol on autonomous weapons. ↩
A detailed description of the 130-page text of semiconductor export controls is beyond the scope of this article. The major pillars of these export controls include bans on (i) the sale of leading-edge graphics processing units and manufacturing equipment for advanced chips to China, (ii) the sale of any U.S.-origin technology for the development of semiconductor manufacturing equipment to China, (iii) the sale of any foreign-made item produced using U.S. technology to 28 entities in China that provide high-performance compute, and (iv) the provision of services by U.S. persons to Chinese companies engaged in advanced chipmaking. ↩
U.S. semiconductor export controls are also justified by claims that advanced chips help China design better weapons of mass destruction and contribute to the modernization of its military. In sum, export controls seek to degrade China’s ability “to produce advanced military systems including weapons of mass destruction; improve the speed and accuracy of its military decision making, planning, and logistics, as well as of its autonomous military systems; and commit human rights abuses.” ↩
There is good reason to suspect that U.S. restrictions on China’s semiconductor industry will be counterproductive in the medium- to long-term. A wide variety of analysts have argued that export controls will reduce demand for U.S. technologies, accelerate innovation in China, fracture global supply chains, and prematurely exhaust U.S. leverage. Even on their own terms, there is no consensus that export controls will help ensure that democratic countries “win the AI race.” ↩
The Wassenaar Arrangement is a consensus-based organization that requires agreement among all members in order to adopt new export controls. The Biden administration initially proposed broad multilateral restrictions on exports of lithography equipment, but U.S. allies rejected the idea. ↩
The Foreign Direct Product Rule allows the Commerce Department’s Bureau of Industry and Security to regulate the transfer of foreign-made items if they are a “direct product” of U.S. technology, software, components, or intellectual property. Its scope has dramatically expanded in recent years with the addition of new Foreign Direct Product Rules that apply specifically to Huawei, Russia, and Belarus. The Commerce Department’s October 2022 semiconductor export controls introduced two new Foreign Direct Product Rules: the Advanced Computing Foreign Direct Product Rule, which bans the sale of foreign-made advanced logic chips or semiconductor production equipment to any individual or entity in China, and the Supercomputer Foreign Direct Product Rule, which bans the sale to China of foreign-made items used in the design, development, production, operation, installation, or maintenance of supercomputers and their components. ↩
Former Dutch Prime Minister Mark Rutte insisted that he was negotiating “from a position of sovereignty… [not] under pressure,” but the Belgian Prime Minister pointed out that the Dutch fell victim to U.S. “bullying.” ↩
On the other hand, tech executives have been remarkably vocal in their opposition to restrictions on China’s semiconductor sector. While this stems from their commercial interest in selling semiconductor products into China, their criticisms are notable nonetheless. Peter Wennink, CEO of the Dutch company ASML, the most valuable tech company in Europe and the leading producer of lithography equipment, has said that “If [China] cannot get those machines, they will develop them themselves.… That will take time, but ultimately they will get there.” He later added that export controls may amount to “compelling China to be innovative.” Jensen Huang, CEO of the U.S. chip designer Nvidia, the world’s most valuable semiconductor company, said “If [China] can’t buy from… the United States, they’ll just build it themselves… If we are deprived of the Chinese market, we don’t have a contingency for that…. We can theoretically build chips outside of Taiwan, it’s possible [but] the China market cannot be replaced. That’s impossible.” Morris Chang, former CEO of Taiwan Semiconductor Manufacturing Co., which produces 90% of the world’s advanced logic chips, has said that “in the chip sector, globalization is dead” and “free trade is almost dead,” which will result in substantially more expensive chips. ↩
The Speedo Fastskin is a line of racing swimsuits introduced in the 2000s that used technological breakthroughs to increase swimmers’ buoyancy and reduce drag. At the 2008 Beijing Olympics, 98% of medals were won by swimmers wearing Fastskin swimsuits and 25 world records were broken, more than in any year since goggles were introduced in 1976. In 2009, the International Swimming Federation banned the most popular Fastskin from competition along with other swimsuits that increase speed and buoyancy. ↩
On a follow-up visit to Beijing in August, Commerce Secretary Raimondo expressed some openness to collaboration with China on reducing AI risk, stating “There are other areas of global concern, such as climate change, artificial intelligence, the fentanyl crisis, where we want to work with you as two global powers to do what’s right for all of humanity.” She added that guardrails for AI are an area where “the world expects our two countries to work together.” Nonetheless, the fundamental issue of semiconductor export controls was not addressed; Raimondo said “Their asks were to reduce export controls on technology… Of course, I said no… We don’t negotiate on matters of national security.” ↩
The theory that China could make use of Taiwan’s semiconductor factories in the wake of annexing the island is a myth. Semiconductor factories require components, equipment, and software from around the world to stay operational, and foreign firms would no longer supply these essential technologies after an invasion. ↩
For example, in response to House Speaker Nancy Pelosi’s visit to Taiwan in August 2022, Beijing took unprecedented steps such as firing missiles over Taiwan and simulating a quarantine of the island. China also normalized incursions by its aircraft in Taiwan’s Air Defense Identification Zone (ADIZ), with more violations of Taiwan’s ADIZ in 2022 than in the previous three years combined. ↩
Mhlambi writes “When global power is centralized amongst a few societies, especially so in the North, it ensures a select few actors will dominate the process of creating the technology used globally. The increase of power among a few actors increases the potential and impact of harm by those actors. Powerful multinational companies whose core incentives are not public good are in a position to create technology with devastating impacts. As technology is value-laden and its creation reflects an outlook on how society ought to be, the imposition of technology on societies without their representation in its creation is also an imposition of values. This is a reproduction of historical colonial relationships and, just as in the past, is likely to create structural exploitation and inequality.” Relatedly, Abeba Birhane, a Senior Fellow at the Mozilla Foundation, argues that “the mathematization and formalization of social issues… [means that] unjust and harmful outcomes… are treated as side effects that can be treated with technical solutions such as ‘debiasing’ datasets rather than problems that have deep roots in the mathematization of ambiguous and contingent issues, historical inequalities, and asymmetrical power hierarchies or unexamined problematic assumptions that infiltrate data practices.” ↩
Kevin Klyman is a researcher at Harvard’s Kennedy School focused on AI and geopolitics. He has written for Foreign Policy, TechCrunch, and The American Prospect, and his research has been published by Human Rights Watch and Harvard’s Belfer Center.
Elisabeth Siegel is a PhD candidate at the University of Oxford, focusing on the politics of emerging technology (especially AI) and the co-constitutive relationship between AI development and interstate relations.