On Tuesday, Google unveiled revisions to its Artificial Intelligence (AI) Principles, the document that articulates the company’s approach to AI technology. Previously, the Mountain View-based company had outlined four application areas in which it would refrain from designing or deploying AI, including weapons, surveillance tools, and systems likely to cause significant harm or violate human rights. The latest update eliminates this section entirely, raising questions about the company’s future intentions in these contentious domains.
Google Revises Its AI Principles
The AI Principles were first introduced in 2018, at a time when AI was still emerging as a widely recognized field. Since then, Google has made periodic updates to this document; however, the categories deemed too dangerous for AI development had remained unchanged until now. The recent revision, which omits the previous prohibitive section, has garnered attention.
An archived version of the page from last week confirms the now-removed section, titled “Applications we will not pursue.” It listed four areas, beginning with technologies that cause or have the potential to cause widespread harm, followed by weapons technologies designed to inflict injury on people.
The company had also pledged not to develop AI for surveillance systems that breach internationally accepted norms, or for applications that contravene international law and human rights. The removal of these commitments has sparked concern that Google may now be open to ventures in these previously restricted areas.
In a related blog post, Demis Hassabis, Co-Founder and CEO of Google DeepMind, along with James Manyika, Senior Vice President for Technology and Society, addressed the rationale behind the changes.
The executives pointed to the rapid evolution of the AI landscape, intensifying competition, and a “complex geopolitical landscape” as the factors behind the update.
“We believe democracies should steer the development of AI, underpinned by fundamental values such as freedom, equality, and respect for human rights. We also advocate for collaboration among companies, governments, and organizations that share these principles, aiming to create AI that safeguards people, fosters global development, and enhances national security,” the post noted.