Categories: Policy

Microsoft and OpenAI partner in a bid to reshape U.S. export laws for the sale of “emerging technologies”

Microsoft and OpenAI – the artificial intelligence research lab co-founded by Elon Musk – are partnering on a plan that could reshape U.S. export laws, potentially making it easier for a range of technologies to be sold around the world. Proposals have already been submitted to the U.S. government, and a blog post published by Microsoft outlines the two companies’ thinking.

“Microsoft and OpenAI share [The Department of] Commerce’s goal that any controls enhance rather than undermine national security,” the post states. “We, along with many others, however, highlighted the substantial downsides with restrictions that are promulgated via traditional export control approaches alone.”

What this means is that there is a balancing act. On the one hand, there is a need to ensure “emerging and foundational technologies” can be sold with as little friction as possible; on the other, ‘bad actors’ must be prevented from using or adapting technologies in ‘harmful’ ways. The needs of economic growth have to be balanced against national security concerns.

The news is significant because it underlines the alignment of private tech companies with the government. But beyond that, it also demonstrates the way in which the spectre of malign ‘foreign actors’ is deployed to reinforce the status of U.S. tech companies as ‘the good guys.’

How did the Microsoft and OpenAI partnership come about?

It’s not entirely clear how this specific partnership came about. However, both Microsoft and OpenAI appear to have been part of a coalition of technology companies involved in a consultation process designed to address the challenge of selling software on a global market while protecting national security.

The challenge of selling artificial intelligence around the world

The technologies being discussed aren’t explicitly named. Instead, the writer opts for euphemism: technologies are described either as “emerging and foundational” or “sensitive”. However, the examples given indicate that what’s really being talked about is artificial intelligence.

In Microsoft and OpenAI’s view, artificial intelligence poses particular challenges because it has no intrinsic ethics:

“Restrictions based only on the performance criteria of these technologies themselves, for example, would ignore that technologies containing the same performance criteria are used for both beneficial uses (e.g. developing powerful new medications or more efficient fertilizers) and nefarious ones (e.g. developing WMD, carrying out human rights abuses).”

Mentioning facial recognition technology to emphasise the point, the author writes that “the same digital biometrics technology, software and hardware capture and analyze information to identify people, whether for the purpose of finding a terrorist or a missing child versus finding an oppressed dissident or minority.”

What Microsoft and OpenAI propose

To tackle the commercial challenges that ‘traditional’ or ‘legacy’ export controls pose, the proposal makes several suggestions. These include adding features to software that “enable real-time controls against prohibited uses and users,” hardware authentication or verification that would hand some element of control back to the creators, and anti-tamper features so that malign users can’t adapt products and tools for their own oppressive or exploitative ends.
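To make the first of these suggestions concrete, a “real-time control against prohibited uses and users” would amount to a software gate that checks each request before a sensitive capability runs. The following is a minimal, purely illustrative sketch of that idea; the names used here (`check_request`, `DENIED_USERS`, `PROHIBITED_USES`) are assumptions for the purpose of the example and do not describe any actual Microsoft or OpenAI system.

```python
# Hypothetical sketch of a software-level "real-time control": each
# request is checked against deny lists of users and declared uses
# before a sensitive capability is allowed to run.

DENIED_USERS = {"sanctioned-entity-123"}          # illustrative deny list
PROHIBITED_USES = {"biometric-tracking", "weapons-design"}

def check_request(user_id: str, declared_use: str) -> bool:
    """Return True if the request may proceed, False if it is blocked."""
    if user_id in DENIED_USERS:
        return False
    if declared_use in PROHIBITED_USES:
        return False
    return True

print(check_request("research-lab-42", "drug-discovery"))        # True
print(check_request("sanctioned-entity-123", "drug-discovery"))  # False
```

The point of the sketch is only to show where such a control would sit: in the vendor’s software rather than in a customs regime, which is precisely what distinguishes the proposal from ‘traditional’ export controls.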

Why this might be bad

As mentioned at the start, this story underlines the close relationship between large technology companies and the U.S. government. That isn’t necessarily surprising, but it should be concerning. A more insidious aspect of all this is that it diverts attention from the harm that artificial intelligence and other technologies can do in a more prosaic way. Everyday algorithmic discrimination is probably a far greater threat to people than malign individuals or nation states.

This post was published on November 10, 2020 6:46 pm

Richard Gall

Founder and Editor in Chief of The Cookie. Interested in the intersection of technology, politics, and society.

