Google Updates AI Principles, Ditches Commitment to Not Use AI for Weapons and Surveillance




Google updated its Artificial Intelligence (AI) Principles, the document outlining the company's vision for the technology, on Tuesday. The Mountain View-based tech giant previously listed four application areas where it would not design or deploy AI, including weapons and surveillance, as well as technologies that cause overall harm or contravene human rights. The newer version of its AI Principles, however, removes this section entirely, hinting that the tech giant might enter these previously forbidden areas in the future.

Google Updates Its AI Principles

The company first published its AI Principles in 2018, when the technology was not yet a mainstream phenomenon. Since then, Google has regularly updated the document, but the areas it considered too harmful to pursue with AI-powered technologies had remained unchanged over the years. On Tuesday, however, that section was found to have been removed from the page entirely.

An archived version of the web page on the Wayback Machine from last week still shows the section titled “Applications we will not pursue”. Under it, Google had listed four items. The first was technologies that “cause or are likely to cause overall harm,” and the second was weapons or similar technologies that directly facilitate injury to people.

Additionally, the tech giant committed to not using AI for surveillance technologies that violate international norms, or for technologies that contravene international law and human rights. The omission of these restrictions has raised concerns that Google might be considering entering these areas.

In a separate blog post, Google DeepMind Co-Founder and CEO Demis Hassabis and the company's Senior Vice President for Technology and Society, James Manyika, explained the reasoning behind the change.

The executives cited the rapid growth of the AI sector, increasing competition, and a “complex geopolitical landscape” as some of the reasons why Google updated its AI Principles.

“We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security,” the post added.
