Microsoft and OpenAI – the artificial intelligence research lab co-founded by Elon Musk – are partnering on a plan that could reshape U.S. export laws, potentially making it easier for a range of technologies to be sold around the world. Proposals have already been submitted to the U.S. government, and a blog post published by Microsoft outlines the two companies’ thinking.
“Microsoft and OpenAI share [The Department of] Commerce’s goal that any controls enhance rather than undermine national security,” the post states. “We, along with many others, however, highlighted the substantial downsides with restrictions that are promulgated via traditional export control approaches alone.”
In other words, it’s a balancing act. On the one hand, “emerging and foundational technologies” need to be sold with as little friction as possible; on the other, ‘bad actors’ must be prevented from using or adapting those technologies in ‘harmful’ ways. The demands of economic growth have to be weighed against national security concerns.
The news is significant because it underlines the alignment of private tech companies with the government. But beyond that, it also demonstrates the way in which the spectre of malign ‘foreign actors’ is deployed to reinforce the status of U.S. tech companies as ‘the good guys.’
How did the Microsoft and OpenAI partnership come about?
It’s not entirely clear how this specific partnership came about. However, it appears that both Microsoft and OpenAI belonged to a coalition of technology companies involved in a consultation process on how to sell software in a global market while protecting national security.
The challenge of selling artificial intelligence around the world
The technologies being discussed aren’t explicitly named. Instead, the post opts for euphemism: technologies are described either as “emerging and foundational” or “sensitive”. However, the examples given indicate that what’s really being talked about is artificial intelligence.
In Microsoft and OpenAI’s view, artificial intelligence poses particular challenges because it has no intrinsic ethics:
“Restrictions based only on the performance criteria of these technologies themselves, for example, would ignore that technologies containing the same performance criteria are used for both beneficial uses (e.g. developing powerful new medications or more efficient fertilizers) and nefarious ones (e.g. developing WMD, carrying out human rights abuses).”
Mentioning facial recognition technology to emphasise the point, the author writes that “the same digital biometrics technology, software and hardware capture and analyze information to identify people, whether for the purpose of finding a terrorist or a missing child versus finding an oppressed dissident or minority.”
What Microsoft and OpenAI propose
To tackle the commercial challenges that ‘traditional’ export controls pose, the proposal suggests several measures: adding features to software that “enable real-time controls against prohibited uses and users”; hardware authentication or verification that would hand some level of control back to the creators; and anti-tamper features, so that malign users can’t adapt products and tools for their own oppressive or exploitative ends.
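Neither company has published an implementation of these “real-time controls,” so it’s worth being concrete about what such a mechanism might look like. The sketch below is purely illustrative: it assumes a hypothetical gate that checks a caller and their declared use against denylists before any model call is allowed, with all names (PROHIBITED_USERS, check_request, run_model) invented for the example.

```python
# Hypothetical sketch of a "real-time control against prohibited uses and
# users". Nothing here reflects a published Microsoft or OpenAI design;
# all identifiers are illustrative assumptions.
from dataclasses import dataclass

# Illustrative denylists. In practice these would presumably come from an
# export-control policy service, not hardcoded values.
PROHIBITED_USERS = {"sanctioned-entity-123"}
PROHIBITED_USES = {"weapons-development", "mass-surveillance"}


@dataclass
class Request:
    user_id: str
    declared_use: str
    payload: str


def check_request(req: Request) -> bool:
    """Return True only if neither the user nor the declared use is prohibited."""
    if req.user_id in PROHIBITED_USERS:
        return False
    if req.declared_use in PROHIBITED_USES:
        return False
    return True


def run_model(req: Request) -> str:
    # The gate runs before any computation happens -- the "real-time" part.
    if not check_request(req):
        raise PermissionError("Request blocked by export-control policy")
    return f"model output for: {req.payload}"  # stand-in for actual inference


if __name__ == "__main__":
    ok = Request("research-lab-7", "drug-discovery", "screen candidate molecules")
    print(run_model(ok))
```

Even this toy version makes the obvious weakness visible: the check depends on a self-declared purpose, which is exactly why the proposal pairs it with hardware authentication and anti-tamper features.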
Why this might be bad
As mentioned at the start, this story underlines the close relationship between large technology companies and the U.S. government. That isn’t necessarily surprising, but it should be concerning. A more insidious aspect of all this is that it diverts attention from the harm that artificial intelligence and other technologies can do in a more prosaic way. Everyday algorithmic discrimination is probably a far greater threat to people than malign individuals or nation states.