Listen to the article (AI powered narration)

Published on August 07, 2024

Less than a year after the launch of ChatGPT, President Biden released his executive order on AI. The longest in American history, the order built upon his administration’s AI Bill of Rights, NIST’s Risk Management Framework, as well as input from Google, OpenAI, and a dozen other generative AI companies.

In total, fifteen companies agreed to create safe and secure models; however, none of these companies’ large language models (LLMs) currently meet the executive order’s compute threshold, which would require them to conduct stress tests and report the results back to the Commerce department.

In lieu of more regulation, we just have to take them at their word.

Big Tech has a horrible safety and security track record

These companies have shown us time and again that unless they face massive monetary consequences, they will continue to put profits ahead of negative societal effects.

For example, Mark Zuckerberg and Elon Musk are notorious for placing their companies’ bottom lines ahead of safety and security measures. Under Musk’s leadership, safety and security jobs were the first on X’s chopping block, and Zuckerberg has allowed misinformation to run rampant across his platforms in the run-up to previous elections.

Even when asked to address myriad child safety harms on their platforms, tech CEOs are generally obstinate. At a recent bipartisan Congressional child safety hearing, three out of the five of the tech CEOs had to be subpoenaed.

In the case of generative AI, Big Tech companies realize regulation is inevitable, and they want to be part of the process. With a first-mover advantage, OpenAI is particularly amenable to regulation, as it would strengthen their moat.

AI raises the stakes

Due to the speed at which these generative AI tools are progressing, it’s no wonder there was a palpable sense of urgency behind Biden’s order. Published a day before the AI Safety Summit in the U.K., the order revealed a U.S. desire to influence international AI regulation.

This is a departure from historical American policy. The U.S. has been more than happy to sit on the sidelines when it comes to the regulation of social media, search, and data collection. With GDPR and the forthcoming AI Act, the E.U. is years ahead of the U.S. in this regard; however, Biden has decided—rightfully so—that AI is too important to ignore.

That said, without Big Tech’s actual buy-in, Biden’s order is toothless.

The order feels like an attempt to rectify past policy mistakes

After all, folks on both sides of the aisle are unhappy with how we have let search and social media companies operate for years with impunity.

Aside from making recommendations to agencies and Congress—e.g., passing a federal data privacy law; identifying which agencies are purchasing data from data brokers; developing standards for detecting synthetic media—the order mandates that all creators of dual-use foundation models (LLMs of a certain size) conduct red team tests (adversarial stress tests of their systems) and report the results back to the Commerce Department.

Wisely, Biden opted not to ask Congress to create a new agency to regulate AI, as this would greatly stifle innovation. That said, his order raises some questions. There is no mention of LLMs’ environmental impact, and confoundingly, it relies solely on LLMs’ size as the proxy for harm. The order’s singular focus on red teaming is also odd, seeing as that is just one AI accountability mechanism.

Importantly, despite these verbal agreements, when the time comes, I suspect some of these generative AI companies will balk at sharing their red team results, citing concerns about IP loss.

Federal AI legislation is unlikely

Given the current dysfunction in Congress, it’s highly unlikely we’ll get AI legislation passed before the November election. That said, as a patchwork of state laws forms, perhaps lobbyists, industry, and Congress will become increasingly amenable to making something happen.

The Americans for Prosperity Foundation has already filed a lawsuit against NIST, and of course, we can expect more legal action when the Justice Department begins enforcing the order.

Relying on the Korean War-era Defense Production Act (DPA), the Biden administration has indicated that the DoJ will indeed enforce the mandate. Also, it’s worth noting that both Trump and Biden invoked the DPA during COVID, and Trump issued his own executive order on AI in 2019.

The biggest issue is this reliance on tech companies’ supposed buy-in

When it comes to safety measures, Meta and X are not the only companies with a shaky track record; OpenAI’s record also leaves a lot to be desired. Although the company recently announced that it would be working with the newly formed U.S. AI Safety Institute, this announcement comes in the wake of top OpenAI executives Jan Leike and Ilya Sutskever leaving the company due to AI safety concerns. In my opinion, OpenAI CEO Sam Altman’s announcement last week feels more like PR damage control than a genuine concern for safety and societal impact.

Simply put, it is far too easy for these generative AI companies to make non-binding, verbal commitments, especially when their models are too small to be held legally liable. However, soon, their models will grow. And then we’ll discover which companies we can trust.

John Donegan

John Donegan

Enterprise Analyst, ManageEngine

John is an Enterprise Analyst at ManageEngine. He covers infosec, cybersecurity, and public policy, addressing technology-related issues and their impact on business. Over the past fifteen years, John has worked at tech start-ups, as well as B2B and B2C enterprises. He has presented his research at five international conferences, and he has publications from Indiana University Press, Intellect Books, and dozens of other outlets. John holds a B.A. from New York University, an M.B.A. from Pepperdine University, an M.A. from the University of Texas at Austin, and an M.A. from Boston University.

 Learn more about John Donegan
x Elevate productivity: Achieving the essential balance of tech and human well-being