How Big Tech Manipulates Academia To Avoid Regulation
Big tech money and direction are incompatible with an honest exploration of ethics, according to Rodrigo Ochigame, a Ph.D. candidate in science, technology, and society at the Massachusetts Institute of Technology.
Ochigame is a former artificial intelligence researcher at the MIT Media Lab, where he worked as a graduate student researcher for Joichi Ito, the lab’s former director. Ochigame was in Ito’s group on AI ethics, a role he held until Aug. 15, 2019, immediately after Ito apologized for his ties to Jeffrey Epstein, a convicted sex offender and wealthy financier.
Ito is a venture capitalist and early-stage investor in Kickstarter, Twitter, Flickr and numerous other internet companies.
Ito acknowledged that he accepted money from Epstein, both for the Media Lab and for his own outside venture funds. He did not disclose that Epstein had earlier pleaded guilty to a child prostitution charge in Florida, or that he had tried to keep Epstein’s name out of official records, The New Yorker later reported.
“The irony of the ethical scandal enveloping Ito … is that he used to lead academic initiatives on ethics,” Ochigame wrote in a report for The Intercept.
Ito resigned from multiple roles at MIT, a visiting professorship at Harvard Law School, and the boards of the John D. and Catherine T. MacArthur Foundation, the John S. and James L. Knight Foundation, and the New York Times Company.
MIT President L. Rafael Reif admitted that he personally signed off on a donation to the university from Epstein, sent a thank-you letter and approved the Media Lab’s plan to conceal the source of the money.
“It is now clear that senior members of the administration were aware of gifts the Media Lab received between 2013 and 2017 from Jeffrey Epstein’s foundations,” Reif said in a statement.
At the Media Lab, Ochigame said he learned that the discourse of “ethical AI,” championed by Ito, was aligned strategically with a Silicon Valley effort to avoid legally enforceable restrictions of controversial technologies. A key group behind this effort, with the lab as a member, made policy recommendations in California that contradicted the conclusions of research conducted by Ochigame and colleagues at MIT — “research that led us to oppose the use of computer algorithms in deciding whether to jail people pending trial,” Ochigame said.
“MIT lent credibility to the idea that big tech could police its own use of artificial intelligence at a time when the industry faced increasing criticism and calls for legal regulation,” Ochigame said.
Corporations have tried to shift the discussion to focus on voluntary “ethical principles,” “responsible practices,” and technical adjustments or “safeguards” framed in terms of “bias” and “fairness” — for example, requiring or encouraging police to adopt “unbiased” or “fair” facial recognition. These corporate initiatives frequently cited academic research that Ito had supported, at least partially, through the MIT-Harvard fund, according to Ochigame.
Ochigame said there are three regulatory possibilities for a given technology: no legal regulation at all, leaving “ethical principles” and “responsible practices” voluntary; moderate legal regulation encouraging or requiring technical adjustments that do not conflict significantly with profits; or restrictive legal regulation curbing or banning the technology. “Unsurprisingly, the tech industry tends to support the first two and oppose the last,” Ochigame said.
Silicon Valley’s vigorous promotion of “ethical AI” has constituted a strategic lobbying effort, one that has enrolled academia to legitimize itself, Ochigame said. Ito played a big role in this corporate-academic fraternizing, meeting often with tech executives. Through the MIT-Harvard fund, Ito and his associates sponsored many projects, including a conference on “Fairness, Accountability, and Transparency” in computer science. The conference’s initial director was Google’s former global public policy lead for AI. Other sponsors of the conference included Google, Facebook, and Microsoft.
MIT, Harvard, and many other universities and institutes received money from the tech industry to work on AI ethics, Ochigame said. Most such organizations are also headed by current or former executives of tech firms. For example, the Stanford Institute for Human-Centered AI is co-directed by a former vice president of Google. University of California, Berkeley’s Division of Data Sciences is headed by a Microsoft veteran. And the MIT Schwarzman College of Computing is headed by a board member of Amazon.
The Partnership on AI to Benefit People and Society is a group founded by Microsoft, Google/DeepMind, Facebook, IBM, and Amazon in 2016. The MIT Media Lab is a member.
The corporate lobby’s effort to shape academic research has been extremely successful, Ochigame said. There is now a huge amount of work under the rubric of AI ethics. “To be fair, some of the research is useful and nuanced, especially in the humanities and social sciences. But the majority of well-funded work on ‘ethical AI’ is aligned with the tech lobby’s agenda: to voluntarily or moderately adjust, rather than legally restrict, the deployment of controversial technologies,” he wrote.
Ito, with no formal training, became positioned as an “expert” on AI ethics, a field that barely existed before 2017, Ochigame said. “But it is even stranger that two years later, respected scholars in established disciplines have to demonstrate their relevance to a field conjured by a corporate lobby,” he wrote.
No defensible claim to “ethics” can sidestep the urgency of legally enforceable restrictions to the deployment of technologies of mass surveillance and systemic violence, Ochigame wrote. “Until such restrictions exist, moral and political deliberation about computing will remain subsidiary to the profit-making imperative expressed by the Media Lab’s motto, ‘Deploy or Die.’”