The White House released comprehensive guidelines for artificial intelligence in an executive order Oct. 30, aiming to establish new standards for AI safety and security.
According to a White House press release, the executive order aims to harness AI’s promise to drive innovation, economic growth and societal progress while managing its associated risks.
The executive order supports responsible AI practices such as auditing algorithms, generating standard reports for AI models, providing credentials for complex AI systems, implementing thorough risk mitigation strategies, ensuring transparency for the public and acknowledging the negative impacts of AI predictions on people.
The executive order relies on the National Institute of Standards and Technology (NIST) AI Risk Management Framework, introduced in January 2023, to guide its directives on responsible AI practices.
The executive order directs businesses and leaders to examine weaknesses in broad AI systems known as foundation models, such as the large language model behind OpenAI’s ChatGPT and the image generator DALL-E. Companies developing AI systems that may affect national security, public health or the economy will be required to undergo rigorous testing and share the results with NIST, according to the executive order. This testing is meant to ensure accountability and transparency and to keep AI from producing harmful or inappropriate content.
The order also addresses AI-generated content, such as deepfake videos or synthetic text, which can be difficult to distinguish from authentic material and can spread misleading information.
The executive order instructs federal agencies, including the Department of Commerce, to develop watermarking standards that clearly label AI-generated content, helping curb fraud and the spread of misinformation.
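The order leaves the specific watermarking techniques to be defined later, so the sketch below is purely illustrative and not the method the directive prescribes. One naive approach hides a bit pattern in generated text using zero-width Unicode characters; the tag name and encoding here are invented for the example, and a scheme like this is trivially stripped.

```python
# Minimal sketch of a naive text watermark: hide a bit string in
# generated text using zero-width Unicode characters. Illustrative
# only -- easily removed, and not the scheme the order mandates.

ZERO = "\u200b"  # zero-width space encodes bit 0
ONE = "\u200c"   # zero-width non-joiner encodes bit 1

def embed_watermark(text: str, tag: str) -> str:
    """Append the tag's bits to the text as invisible characters."""
    bits = "".join(f"{byte:08b}" for byte in tag.encode("utf-8"))
    hidden = "".join(ONE if b == "1" else ZERO for b in bits)
    return text + hidden

def extract_watermark(text: str) -> str:
    """Recover the hidden tag, if any, from the text."""
    bits = "".join("1" if ch == ONE else "0"
                   for ch in text if ch in (ZERO, ONE))
    usable = len(bits) - len(bits) % 8
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, usable, 8))
    return data.decode("utf-8", errors="ignore")

stamped = embed_watermark("A paragraph produced by a model.", "AI-GENERATED")
print(extract_watermark(stamped))  # -> AI-GENERATED
```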
Saúl Blanco, assistant professor at IU’s Luddy School of Informatics, Computing and Engineering, said it is incredibly hard to trace the source of AI-generated content and that he was glad the order establishes some guidelines.
“Everybody is using AI for a lot of things but there is no guarantee that whatever it is doing is appropriate,” Blanco said. “Hopefully, this will force the companies to be transparent about what they are doing.”
Anjanette Raymond, professor of business law and ethics at IU’s Maurer School of Law, said the order is a bigger step in the right direction than previous administrations’ efforts. Past examples include Obama’s 2016 National AI Research and Development Strategic Plan, which outlined priorities and goals for AI research to maintain U.S. leadership in the field, and Trump’s 2019 American AI Initiative, which focused on investing in AI research, enhancing workforce skills and promoting international collaboration.
She said the government typically acts on AI with good intent but often falls short when implementing standards. If the federal government and the agencies covered by the order can begin to implement regulations, she said, they could serve as examples for other institutions and benefit the broader AI landscape.
“When you jump into the regulation of anything in technology, there have to be questions that need to be answered first because in most instances, there are no clear guidelines as to what questions to formulate,” Raymond said. “This executive order provides those guidelines and ensures that we don’t run the risk of overregulating anything without fully understanding the details.”
Despite these advances, however, the presidential directive does not address the absence of federal legislation on data protection and privacy. AI systems use the data they collect to train and improve their algorithms, but misuse of that data can lead to privacy breaches.
Although the order urges Congress to pass privacy laws, the lack of a legislative framework limits its ability to strengthen data privacy at AI companies. Bills such as the Consumer Online Privacy Rights Act (COPRA) and the American Data Dissemination (ADD) Act have been introduced but have not passed.
The order also recognizes the importance of algorithmic transparency, the principle that the factors influencing an algorithm’s decisions should be visible to the people those decisions affect.
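In software terms, transparency can be as simple as reporting how much each input contributed to a decision. Below is a minimal sketch of that idea using a hypothetical linear scoring model; the feature names, weights and threshold are made up for illustration and do not come from the order.

```python
# Minimal sketch of algorithmic transparency: a linear scoring model
# that reports how much each input factor contributed to its decision.
# Feature names, weights and the threshold are hypothetical.

WEIGHTS = {"income": 0.4, "credit_history_years": 0.35, "existing_debt": -0.5}
THRESHOLD = 1.0

def score_with_explanation(applicant: dict) -> tuple[bool, dict]:
    """Return the decision plus each factor's contribution to the score."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    total = sum(contributions.values())
    return total >= THRESHOLD, contributions

approved, factors = score_with_explanation(
    {"income": 5.0, "credit_history_years": 2.0, "existing_debt": 3.0}
)
print(approved)  # True: 2.0 + 0.7 - 1.5 = 1.2 clears the threshold
for name, value in sorted(factors.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {value:+.2f}")  # factors ranked by influence
```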
Raymond said the algorithms behind AI also affect citizens, and more people need to understand the executive order in that context. She said universities can play a major role in educating the public on responsible AI practices. Universities like IU serve as incubation hubs to develop a workforce that can address AI issues long term, she said.
Isak Asare, associate director of the Cybersecurity and Global Policy Program at IU, said the order relies on the goodwill of companies, and much of its enactment requires those businesses to play along. While the order does a good job of signaling the changes many want to see, he said, it will not change the face of the AI economy in any immediate way and will bear fruit only when its standards are applied in practice.
Asare said if the U.S. wants comprehensive privacy laws, Congress must act promptly. Privacy often gets overlooked by citizens and even students, he said, and Congress will not pass legislation unless a stronger collective voice advocates for fundamental rights such as privacy. People should be concerned about their data privacy, he said, because if data is not properly regulated, malicious parties can extract, identify and exploit sensitive personal information.
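The risk Asare describes is well documented: records stripped of names can often be re-identified by joining them with public data on quasi-identifiers such as ZIP code and birth date. Here is a toy sketch of such a linkage attack; every record is fabricated for the example.

```python
# Toy sketch of a linkage (re-identification) attack: join an
# "anonymized" dataset with a public one on quasi-identifiers
# (ZIP code, birth date, sex). All records here are made up.

anonymized_health = [
    {"zip": "47405", "dob": "1999-04-12", "sex": "F", "diagnosis": "asthma"},
    {"zip": "47401", "dob": "2001-09-30", "sex": "M", "diagnosis": "diabetes"},
]

public_voter_roll = [
    {"name": "Jane Doe", "zip": "47405", "dob": "1999-04-12", "sex": "F"},
    {"name": "John Roe", "zip": "47401", "dob": "2001-09-30", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "dob", "sex")

def reidentify(anon_rows, public_rows):
    """Match rows whose quasi-identifiers agree, restoring names."""
    index = {tuple(row[k] for k in QUASI_IDENTIFIERS): row["name"]
             for row in public_rows}
    for row in anon_rows:
        key = tuple(row[k] for k in QUASI_IDENTIFIERS)
        if key in index:
            yield index[key], row["diagnosis"]

for name, diagnosis in reidentify(anonymized_health, public_voter_roll):
    print(f"{name}: {diagnosis}")  # sensitive data re-linked to a person
```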
Asare said if the goal is to make responsible AI, people from all walks of life need to be involved in this discussion and universities need to continue to act as a cog in the process, combining different perspectives to shape responsible practices.
“It’s like putting very nice curtains on a house that has a cracked foundation,” Asare said.