Microsoft President Brad Smith backed calls for the U.S. government to create a new agency to license major artificial-intelligence systems, amid growing support for regulation of an industry that is moving aggressively to commercialize powerful new tools such as ChatGPT.
“We are absolutely committed to ensuring that [AI] serves people well, that it brings real benefits, that it’s kept under human control,” Smith told The Wall Street Journal ahead of a Thursday speech in Washington, where he will make the case for regulation. “But I don’t think at the end of the day, we’re best served solely by a system that says, ‘Take the word of a large company.’ ”
A new federal agency to oversee AI’s development is the “most sensible” course, Smith said in response to a question, echoing similar comments from Sam Altman, CEO of ChatGPT creator OpenAI, made to a congressional panel last week.
Smith’s comments come as Washington considers how to respond to rapid consumer adoption of ChatGPT and other so-called generative AI systems, which have humanlike abilities to converse, create media, write computer code and more.
Members of Congress are discussing bipartisan legislation to set safeguards on AI, and on Tuesday the Biden administration sought public input on a national AI strategy that could lead to new regulations. Policy makers have raised concerns about a range of potential downsides of the technology, such as the possibility that AI systems could supercharge hacking or be used to manipulate voters.
One reason policy makers are so focused on the issue is that Microsoft has placed powerful AI tools in the hands of millions of its customers, benefiting its bottom line. Microsoft in January signed a $10 billion deal with OpenAI that would allow the tech giant to own 49% of OpenAI’s for-profit arm, the Journal has reported, citing investor documents.
ChatGPT runs on Microsoft’s Azure cloud-computing platform, and Microsoft has incorporated ChatGPT and other so-called generative AI systems into an array of products including its Bing search engine, a move it hopes will peel away market share from Google.
Microsoft and OpenAI’s rapid rollout led Google, a subsidiary of Alphabet, to launch a counteroffensive. It has made its own chatbot, called Bard, widely available to consumers and earlier this month announced plans to add AI systems to dozens of products.
Meta Platforms, owner of Facebook and Instagram, is looking to cash in on its own chatbot program.
Microsoft, Google and Meta all say they are rolling out the systems with safeguards, such as by limiting the questions chatbots will answer.
Smith said Microsoft has been advocating for guardrails around AI for years, noting that it backed a Washington state law regulating the use of facial-recognition technology.
“It would be a problem if we were advocating for regulation that only we could satisfy—that is not the case,” Smith said. “We are advocating for the kinds of laws and regulations that, I would argue, anyone who wants to be serious in the world of AI can and should meet.”
The new regulatory regime should also place obligations on companies that provide apps based on powerful AI systems, Smith said in a blog post set to be published Thursday along with his speech in Washington to an audience of government officials and policy experts.
For example, companies should have a responsibility to know who their customers are in case the technology is misused, and should be required to label or mark digital content that has been created by AI rather than a human being, the company said.
Policy-making efforts in the U.S. are in early stages. Lawmakers on Capitol Hill are tied up in a high-stakes fight over the debt limit, with little time for legislation on other topics. The Biden administration can impose some checks on AI systems under existing law, but those are generally limited to after-the-fact law enforcement actions rather than pre-emptive safety rules.
Smith said governments should give priority to safeguards around AI systems involved in “critical infrastructure,” such as a power grid or city traffic system. The administration should also issue an executive order declaring that any company selling AI tools to the government should have to implement the voluntary AI risk-management framework recently published by the National Institute of Standards and Technology, he said.
In the absence of government action, Smith said Microsoft has talked with other industry players about a voluntary set of standards for AI systems, but those discussions remain informal.
“There is an opportunity for the industry to share best practices, common principles, and also even adopt a set of standards,” Smith said, but he added that the best choice is for the government to take a leading role.