Published on October 19, 2020

While many were impressed by Google CEO Sundar Pichai’s op-ed piece in the Financial Times, it’s unclear whether Pichai was entirely forthright in his call for increased AI regulation. More likely, Pichai recognizes that AI regulation is inevitable and wants to shape the forthcoming laws.

Quite similar to Big Tech’s collective call for a federally mandated U.S. data privacy law, Pichai’s clarion call for AI regulation is an attempt to get ahead of the situation. After all, formal legislation on AI initiatives would likely overwhelm AI start-ups while benefiting deep-pocketed players like Google.

The AI sector is in a state of hypergrowth. With $26.6 billion in VC funding across 2,200 deals in 2019, investment in AI reached an all-time high. Moreover, lobbyists for the tech industry are active on Capitol Hill, with big players like the Information Technology Industry Council (ITI) staunchly advocating for industry self-regulation.

Challenges to AI governance

Despite the influx of money pouring into AI initiatives across the globe, it’s vital that tech companies are not allowed to operate with impunity. Issues such as surveillance, digital manipulation, autonomous weapons, technological unemployment, and criminal justice bias are too real to ignore.

Deepfake Audio and Video
Within the past few years, deepfake audio has already been used to swindle a handful of companies. In one widely reported case, an energy company executive was tricked into wiring €220,000 to a Hungarian supplier because he believed the request came from his boss.

Even more problematic than voice phishing, or vishing, is the potential for deepfake audio and video to disrupt elections and sow distrust in the news media. Big Tech has readily acknowledged the threat posed by these forgeries, which are typically produced with generative adversarial networks (GANs), going so far as to join forces to address the issue. Amazon, Facebook, Microsoft, and the Partnership on AI recently solicited help from developers across the globe in creating deepfake detection technologies. Their Deepfake Detection Challenge (DFDC) offered over a million dollars in prizes.

Facial Recognition Biases
According to the recent NIST report on facial recognition biases, Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects, African Americans experience some of the highest false positive rates of any demographic group. NIST evaluated 189 algorithms from 99 developers—a majority of the industry—and found that many of them disproportionately misidentify African Americans. The fear is that such tools, if put into practice, could lead to innocent people being falsely accused.
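To make the report’s headline metric concrete, here is a minimal, purely illustrative sketch (in Python, not NIST’s methodology or code) of how one-to-one false match rates might be tallied per demographic group. The trial records and field names are hypothetical.

```python
# Illustrative sketch: computing false match rates per demographic group from
# labeled one-to-one verification trials. "same_person" is ground truth and
# "match" is the algorithm's decision at a fixed threshold.
from collections import defaultdict

def false_match_rates_by_group(trials):
    """trials: iterable of dicts with keys 'group', 'same_person', 'match'."""
    impostor_counts = defaultdict(int)   # impostor comparisons seen per group
    false_matches = defaultdict(int)     # impostor comparisons wrongly accepted
    for t in trials:
        if not t["same_person"]:             # impostor pair
            impostor_counts[t["group"]] += 1
            if t["match"]:                   # algorithm incorrectly accepted it
                false_matches[t["group"]] += 1
    return {g: false_matches[g] / impostor_counts[g] for g in impostor_counts}

# A disparity like this, measured across millions of trials, is what the
# NIST report quantifies.
trials = [
    {"group": "A", "same_person": False, "match": True},
    {"group": "A", "same_person": False, "match": False},
    {"group": "B", "same_person": False, "match": False},
    {"group": "B", "same_person": False, "match": False},
]
print(false_match_rates_by_group(trials))  # {'A': 0.5, 'B': 0.0}
```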

Surveillance Concerns
In recent months, Pichai has addressed the dangers of facial recognition technology. In fact, he has pledged that Google will not sell its proprietary facial recognition algorithms, out of concern that they could be misused. The European Union shares these worries, to the point that it is considering a five-year moratorium on the use of facial recognition in public spaces. This would give governments the time necessary to establish rules and ethical guidelines before the technology is deployed at scale.

Google’s Pichai supports such a moratorium; other Big Tech leaders do not. Microsoft President Brad Smith, for example, opposes the moratorium and has not ruled out selling facial recognition software in the future. That said, Smith does agree that more regulation is needed in general.

Potential solutions

Although powerful lobbying groups like ITI argue that any regulation of AI technologies will stifle innovation, the societal risks of leaving AI unregulated are too serious to ignore. Tesla CEO Elon Musk, who admittedly can be a bit alarmist at times, believes that AI will eventually be able to simulate consciousness and “outthink us in every way.” A strong advocate of AI regulation, Musk co-founded the artificial intelligence research company OpenAI, and he has called for an AI regulatory agency akin to the FAA for aviation or the FDA for food, drugs, and medical devices.

In addition to Pichai and Musk, other Big Tech executives have offered solutions as well. IBM’s policy leaders have proposed an AI regulatory framework that calls for a designated AI ethics official, mandatory bias testing, assessments of potential harm, and explainable AI (XAI).

Also, as AI increasingly runs on IoT devices—including smartphones, cars, and wearables—federated machine learning can help from a privacy perspective. Federated learning is the practice of training an AI model on the IoT device itself: rather than shipping raw training data to a central server or cloud, each device trains locally and sends back only model updates or summary statistics, which the server aggregates into a shared model. These transfers can be made even more secure with homomorphic encryption and differential privacy.
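As a rough illustration of the idea, the following sketch simulates federated averaging across three hypothetical devices using a toy linear model and synthetic data. A real deployment would use a purpose-built framework (for example, TensorFlow Federated) and layer on the secure aggregation, homomorphic encryption, and differential privacy mentioned above.

```python
# Minimal federated averaging sketch: each device trains on its own data and
# shares only model weights; the server averages them into a global model.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train locally on one device's data; only the resulting weights leave it."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(3):                           # three simulated IoT devices
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    devices.append((X, y))

global_w = np.zeros(2)
for _ in range(10):                          # each round: local training, then averaging
    local_ws = [local_update(global_w, X, y) for X, y in devices]
    global_w = np.mean(local_ws, axis=0)     # server only ever sees model weights

print(global_w)                              # approaches [2, -1] without pooling raw data
```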

With regard to facial recognition, it would be prudent to impose a moratorium on surveillance technologies in public spaces, and to employ additional layers of protection on devices themselves. At this point in time, such protections include “smile to unlock” and “liveness detection” technologies.
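As a purely hypothetical sketch of how such a layer might work in principle, the code below gates an unlock decision on both a face-match score and a liveness cue (here, a detected blink). The scores, thresholds, and structure are illustrative placeholders, not any vendor’s actual implementation.

```python
# Hypothetical liveness-gated unlock: require a confident face match AND
# evidence that the subject is live, so a printed photo alone is not enough.
from dataclasses import dataclass

@dataclass
class FrameAnalysis:
    match_score: float      # similarity between the live face and the enrolled template
    blink_detected: bool    # simple liveness cue extracted from the video frame

def should_unlock(frames, match_threshold=0.8, min_blinks=1):
    """Unlock only when both the match and liveness checks pass."""
    if not frames:
        return False
    best_match = max(f.match_score for f in frames)
    blinks = sum(f.blink_detected for f in frames)
    return best_match >= match_threshold and blinks >= min_blinks

# A printed photo may score a high match but never blinks, so it fails.
photo_attack = [FrameAnalysis(0.93, False), FrameAnalysis(0.91, False)]
live_user    = [FrameAnalysis(0.90, False), FrameAnalysis(0.92, True)]
print(should_unlock(photo_attack), should_unlock(live_user))  # False True
```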

Lastly, the push for AI regulation should not be left solely to industry professionals and trade associations. At present, only one major organization includes both industry and non-industry participants: the Partnership on AI. Such partnerships are a step in the right direction.

John Donegan

Enterprise Analyst, ManageEngine

John is an Enterprise Analyst at ManageEngine. He covers infosec, cybersecurity, and public policy, addressing technology-related issues and their impact on business. Over the past fifteen years, John has worked at tech start-ups as well as B2B and B2C enterprises. He has presented his research at five international conferences, and he has publications from Indiana University Press, Intellect Books, and dozens of other outlets. John holds a B.A. from New York University, an M.B.A. from Pepperdine University, an M.A. from the University of Texas at Austin, and an M.A. from Boston University.
