ScienceTech - BusinessWorld

Google Lifts Ban On Using Its AI for Weapons and Surveillance


In 2018 Google published principles barring its AI technology from being used for sensitive purposes. Barely months into President Donald Trump’s second term, those guidelines are being overhauled.

Google recently announced that it is overhauling the principles governing how it uses artificial intelligence and other advanced technology.

The company removed language promising not to pursue “technologies that cause or are likely to cause overall harm,” “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” “technologies that gather or use information for surveillance violating internationally accepted norms,” and “technologies whose purpose contravenes widely accepted principles of international law and human rights.”

On February 4, two Google executives cited the increasingly widespread use of AI, evolving standards, and geopolitical battles over AI as the “backdrop” to why Google’s principles needed to be overhauled.

Google originally released the principles in 2018 in an effort to calm internal dissent over the company’s decision to develop a drone program for the US military. In response to that dissent, it declined to renew the government contract and published a set of guidelines to direct future applications of its cutting-edge technology, including artificial intelligence. Among other things, the principles stated that Google would not develop weapons, certain surveillance systems, or technologies that violate human rights.

However, Google renounced those commitments in a statement on February 4. A list of prohibited uses for Google’s AI initiatives is no longer available on the updated webpage. Instead, the revised document offers Google more room to pursue potentially sensitive use cases. It states Google will implement “appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights.” Google also now says it will work to “mitigate unintended or harmful outcomes.”

James Manyika, Google senior vice president for research, technology, and society, and Demis Hassabis, CEO of Google DeepMind, the company’s prestigious AI research lab, wrote: “We believe that democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.” They also stated that Google will still concentrate on AI projects “that align with our mission, our scientific focus, and our areas of expertise, and stay consistent with widely accepted principles of international law and human rights.”

Multiple Google employees expressed concern about the changes in conversations with WIRED. “It’s deeply concerning to see Google drop its commitment to the ethical use of AI technology without input from its employees or the broader public, despite long-standing employee sentiment that the company should not be in the business of war,” says Parul Koul, a Google software engineer and president of the Alphabet Union Workers-CWA.

Google spokesperson Alex Krasov says the changes have been in the works for much longer, though the return of US President Donald Trump last month has prompted many companies to revise policies promoting equity and other liberal ideals.

According to Google, its new objectives are to pursue AI projects that are ambitious, responsible, and collaborative. Phrases such as “be socially beneficial” and “maintain scientific excellence” are gone; in their place is a new mention of “respecting intellectual property rights.”

After releasing its AI principles in 2018, Google established two teams to assess whether initiatives throughout the organisation were living up to them.

One focused on Google’s core offerings, including search, advertising, Assistant, and Maps. The other covered Google Cloud services and offerings. Early last year, as the company rushed to create chatbots and other generative AI tools to compete with OpenAI, the team devoted to Google’s consumer business was split up.

Timnit Gebru, a former co-lead of Google’s ethical AI research team who was later fired from that position, says the company’s commitment to the principles had always been in question. “I would say that it’s better to not pretend that you have any of these principles than write them out and do the opposite,” she told WIRED.

Three former Google employees who had been involved in reviewing projects to ensure they aligned with the company’s principles say the work was challenging at times because of the varying interpretations of the principles and pressure from higher-ups to prioritize business imperatives.

Google still has language about preventing harm in its official Cloud Platform Acceptable Use Policy, which includes various AI-driven products. The policy forbids violating “the legal rights of others” and engaging in or promoting illegal activity, such as “terrorism or violence that can cause death, serious harm, or injury to individuals or groups of individuals.”

However, when pressed about how this policy squares with Project Nimbus—a cloud computing contract with the Israeli government, which has benefited the country’s military — Google has said that the agreement “is not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services.”

Google spokeswoman Anna Kowalczyk told WIRED in July that the Nimbus contract is “for workloads that are run on our commercial cloud by Israeli government ministries, who agree to comply with our Terms of Service and Acceptable Use Policy.”

Google Cloud’s Terms of Service also prohibit any applications that break the law or “lead to death or serious physical harm to an individual.” Rules for some of Google’s consumer-focused AI services similarly prohibit unlawful uses and some potentially offensive or harmful uses.

Gamuchirai Mapako
