Should Microsoft help the Pentagon ‘increase lethality’?
I’ve written recently about Microsoft taking on the role of the tech industry’s conscience. Among other stands, the company has called for the federal government to regulate face-recognition technology and warned in a blog post about the potential dangers of AI-powered technology, ranging from serious privacy invasions to suppression of free speech. Microsoft has also pushed for international pacts to limit the way in which the United States and other countries use cyberweapons.
But now Microsoft is trying to land a massive $10 billion U.S. Defense Department contract involving AI and cloud technologies — a contract so controversial that Google has declined to participate in the bidding after many of its employees voiced concern. Microsoft employees have asked the company to do the same, but the company has refused.
Is Microsoft putting its morals aside in order to try to land a lucrative contract? Or is it instead helping make the country militarily stronger and safer? It’s a tough call, but a deep look at the issue offers an answer.
First, let’s start with the contract itself. It’s for a program called JEDI (Joint Enterprise Defense Infrastructure) that will develop a cloud-based infrastructure for employing data in warfare. Ellen Lord, the undersecretary of defense for acquisition and sustainment, said it will also include the use of artificial intelligence and machine learning for warfare, and added, “JEDI Cloud is an acquisition for foundational commercial cloud technologies that will enable warfighters to better execute a mission that is increasingly dependent on the exploitation of information.” Department of Defense Chief Management Officer John H. Gibson II offered a more blunt explanation: “We need to be very clear. This program is truly about increasing the lethality of our department.”
Amazon and IBM have joined Microsoft and others in pursuing the contract. Google was notable for its absence from the field. That absence followed an uproar by its employees over a previous Defense Department contract, Project Maven, which used AI to interpret video footage and could be used to target drone strikes. Four thousand Google employees signed a petition demanding the company adopt “a clear policy stating that neither Google nor its contractors will ever build warfare technology.” As a result, Google ended its participation in Project Maven. And when it came time to bid on JEDI, Google said it wouldn’t, in part because “we couldn’t be assured that [the JEDI deal] would align with our AI Principles.”
Microsoft employees, following the lead of Google’s workers, have tried to pressure Microsoft not to pursue JEDI. In an open letter to Microsoft, the employees claimed that the contract is “shrouded in secrecy, which makes it nearly impossible to know what we as workers would be building. … Many Microsoft employees don’t believe that what we build should be used for waging war. When we decided to work at Microsoft, we were doing so in the hopes of ‘empowering every person on the planet to achieve more,’ not with the intent of ending lives and enhancing lethality.”
The employees also point to a blog post by Brad Smith, president and chief legal officer of Microsoft, and Harry Shum, executive vice president of Microsoft’s AI and Research Group, titled “The Future Computed: Artificial Intelligence and its role in society.” In it, Smith and Shum say that AI development should be guided by strong ethical principles to make sure AI is “designed and used responsibly.” Those ethical principles, their blog says, are “fairness, reliability and safety, privacy and security, inclusivity, transparency, and accountability.”
Employees say that JEDI violates several of those principles. They have also asked that Microsoft’s participation in JEDI be reviewed by the company’s AI ethics committee. The letter concludes, “Microsoft, don’t bid on JEDI.”
Microsoft brushed off its employees’ complaints and continues to work toward landing the contract. In a blog post about why the company is pursuing JEDI, Smith explained, “We readily decided this summer to pursue this project, given our longstanding support for the Defense Department. All of us who live in this country depend on its strong defense. … We want the people of this country and especially the people who serve this country to know that we at Microsoft have their backs. They will have access to the best technology that we create.”
Smith admitted, though, that “Artificial intelligence, augmented reality and other technologies are raising new and profoundly important issues, including the ability of weapons to act autonomously.” And he acknowledged that once Microsoft provides technology to the military, how it’s used will be out of the company’s hands, telling The New York Times, “We can’t control how the U.S. military uses our technology once we give it to them.” But he argued, “The military is subject to civilian control. And we believe we will have an opportunity to influence those discussions.” In his blog post, he concluded, “To withdraw from this market is to reduce our opportunity to engage in the public debate about how new technologies can best be used in a responsible way.”
So as far as Microsoft is concerned, it’s case closed. The company is bidding on the contract.
Should it, though? Or should it pay attention to employees who ask the company to pull out? What’s the ethical thing to do?
This is one of those issues in which both sides have strong moral arguments. But Microsoft falls short on one thing: The company isn’t willing to have an open discussion about whether its participation violates the ethical principles Smith himself laid out in “The Future Computed.” Nor is it willing to let its AI ethics committee weigh in. If ever there were an issue that cuts to the core of the ethical use of AI, it’s the use of AI for “increasing lethality,” which the Defense Department says is the purpose of JEDI.
So Microsoft should have a thorough debate about it, air all the issues, and have its AI ethics committee help make the decision. To do it any other way makes it hard to take Microsoft at its word that it’s being ethical and trying to get the contract to help the country, rather than to add more billions to its bottom line.