Google drops pledge not to use AI for weapons, surveillance

Tech giant says in updated ethics policy that it will use AI in accordance with ‘international law and human rights’.
Google has dropped its pledge not to use artificial intelligence for weapons or surveillance in its updated ethics policy on the powerful technology.
In the previous version of its “AI Principles”, the California-based internet giant included a commitment not to pursue AI technologies that “cause or are likely to cause overall harm”, including surveillance that violates “internationally accepted norms”.
Google’s revised policy, announced on Tuesday, says the company will develop AI responsibly and in accordance with “widely accepted principles of international law and human rights”, but it no longer includes the earlier language about weapons or surveillance.
“We believe democracies should lead in AI development, guided by core values like freedom, equality and respect for human rights,” Demis Hassabis, the head of Google DeepMind, and James Manyika, a Google senior vice president for research and labs, said in a blog post announcing the updated policy.
“And we believe that companies, governments and organizations that share these values should work together to make AI that protects people, promotes global development, and supports national security.”
Google did not immediately respond to Al Jazeera’s request for comment.
Google first published its AI principles in 2018, after employees protested against the firm’s involvement in the US Defense Department’s Project Maven, which used AI to help the military identify targets for drone strikes.
Google opted not to renew its contract with the Pentagon after the backlash, which saw several staff members resign and thousands of others sign a petition opposing the company’s involvement in the project.
The tech giant later announced that it would not compete for the Pentagon’s $10bn cloud computing contract because it could not “be assured” that the work would align with its AI principles.
Google’s updated ethics policy comes after Sundar Pichai, the CEO of parent company Alphabet Inc, joined tech leaders including Amazon founder Jeff Bezos and Meta’s Mark Zuckerberg in attending the inauguration of US President Donald Trump on January 20.
Hours after taking office, Trump rescinded an executive order signed by former US President Joe Biden that had established some guardrails for the rapidly developing technology.
Biden’s order required companies developing AI to share the results of safety tests with the government before releasing new systems to the public.