Biden’s executive order on AI brings awareness to emerging technology but lacks ‘teeth’ to make major impact, Northeastern expert says

President Joe Biden signs an executive order on artificial intelligence in the East Room of the White House in Washington. Vice President Kamala Harris looks on at right. AP Photo/Evan Vucci

President Joe Biden issued a sweeping executive order Monday aimed at steering the development of artificial intelligence technologies. It is the first order of its kind from the federal government directly addressing regulation of the emerging technology.

The new directive provides standards and guidance on a number of focus areas, including safety and security, privacy, equity and civil rights, consumer and worker protections, research, competition and innovation, work abroad and governmental use of AI.


As part of the new order, and in accordance with the Defense Production Act, AI companies will be required to share the safety test results of new AI models with the federal government before they are released. 

Additionally, the National Institute of Standards and Technology will create new “standards, tools, and tests” for companies to use while stress-testing their AI systems for vulnerabilities and other security issues as part of a practice called “red teaming.” 

Those standards will be used by the Department of Homeland Security, which is in the process of establishing an AI Safety and Security Board as part of the order. The Department of Homeland Security will also collaborate with the Department of Energy to “address AI systems’ threats to critical infrastructure, as well as radiological, nuclear and cybersecurity risks,” according to the order. 

Usama Fayyad, executive director of Northeastern’s Institute for Experiential AI, poses for a portrait. Photo by Matthew Modoono/Northeastern University

Additionally, the order establishes the creation of a new safety program to be run by the U.S. Department of Health and Human Services designed “to receive reports of — and act to remedy — harms or unsafe healthcare practices involving AI.”

These are just a few of the highlights of the new directive, which the Biden administration says builds on conversations it’s had with 15 leading AI companies that have voluntarily pledged to “drive safe, secure, and trustworthy development of AI.” Google, Microsoft and OpenAI are among the companies that have made the pledge.

Usama Fayyad, executive director of Northeastern’s Institute for Experiential AI, spoke with Northeastern Global News about the pros and cons of the new order. This interview has been edited for brevity and clarity: 

This order covers a lot of different aspects of AI development and deployment. What specific actions in the order stand out to you? 

The standout actions are the ones that basically say, “Let’s come up with new standards for AI safety and security.” That’s not a bad thing. We’re not going to get it right on the first try, but even thinking about this, raising awareness around it, and challenging the agencies to stand up some kind of standard and accountability is a very good thing.

The section on protecting American privacy is also very good because it actually brings in the questions of when we transgress, what is OK and what is not. It makes it a valid topic of discussion that the government cannot just go about this without thinking about the consequences.

The section on advancing equity and civil rights checks the box in terms of sensitizing everyone to the fact that these algorithms could be misused.

The parts that have to do with promoting research, understanding and accessibility can be positive as well.

Where do you think the directive falls short? 

It fell short in spelling out actual numbers. Nothing could stop the White House from saying, “We want to see at least, I don’t know, some number, 5%, 10%, 20%, of resources dedicated to this area.” That becomes very meaningful. You could easily issue something that says, “I want to see at least 5% of the resources spent by every government agency go into this category,” as an example.
