The White House has issued principles for regulating the use of artificial intelligence that call for as little government interference as possible and offer only broad guidance to federal agencies. In fact, the principles might deter regulation of AI at a time when many think it is increasingly needed.
Michael Kratsios, chief technology officer of the United States, is set to announce the principles on Wednesday at CES in Las Vegas. They arrive at a critical moment for the development of AI and for America’s position as the global standard-bearer.
The guidelines have the potential to shape the development of a broad swath of valuable and critical technologies, from autonomous vehicles to new medical imaging tools. They arrive amid growing worry over the unchecked spread of AI tools, especially facial recognition, and as other nations seek to establish their own norms around the technology, for example by building autonomous weapons or pervasive surveillance infrastructure. Some experts question whether the principles are too vague to do much good.
Ahead of the announcement, Kratsios told WIRED the guidelines are meant to ensure that AI is developed safely, transparently, and in ways that reflect American principles. This is crucial, he says, when authoritarian rivals such as China and Russia are pursuing aggressive AI agendas of their own. “The values that we hold dear are baked into them,” Kratsios says.
The principles state that when drawing up regulations, “federal agencies must consider fairness, non-discrimination, openness, transparency, safety, and security.” They also call for as little regulation as possible, requiring “risk assessment and cost-benefit analyses” prior to any regulatory action. And they stipulate that any regulation must reflect “scientific evidence and feedback from the American public.” After a 90-day period for public input, agencies will have 180 days to come up with plans for implementing the principles.
The White House apparently wants America alone to define the rules of the road when it comes to AI. The US has so far rejected working with other G7 nations on a project known as the Global Partnership on AI, which seeks to establish shared principles and regulations.
Kratsios says he hopes other nations will follow America’s lead when developing their own regulations for AI. The White House has suggested that the G7 plan would stifle innovation with bureaucratic meddling. “The best way to counter authoritarian uses of AI is to make sure America and its international partners remain global hubs of innovation, advancing technology in manners consistent with our values,” Kratsios says.
But some observers question how effective the principles will be. “A lot of this is open to interpretation to each individual agency,” says Martijn Rasser, a senior fellow at the Center for a New American Security and the author of a recent report that calls for greater government investment in AI. “Anything that an agency produces could be shot down, given the vagueness,” Rasser says. “It would be easy for anyone to lobby against what’s proposed.”
Rasser also says the US may have less influence if it insists on going it alone. “It’s potentially worrying,” he says. “There’s a downside to us going down a different path to other nations.”
The principles are just the latest US effort to shape AI globally, coming just days after the Commerce Department announced new export controls on US AI software. These rules, announced on Friday, bar US companies from selling “software specially designed to automate the analysis of geospatial imagery” outside the US, except to Canada.
AI algorithms are adept at interpreting aerial imagery, and the controls are meant to limit the ability of rivals such as China to use US software to develop military drones and satellites. But regulating the export of software is notoriously difficult, especially when many key algorithms and data sets are open source.