In a blog post laying out Alphabet / Google's priorities in the emerging AI space, chief executive Sundar Pichai stated that Google AI software will not be permitted for use in weapons systems and other controversial programs. This follows recent internal protests at Google over some of the projects it was involved in, including one that used AI to identify objects in drone footage.
Responding to the understandable concern that such technology could be used to kill human beings more efficiently, Pichai made clear that the firm will not design or deploy AI for technology that is likely to cause overall harm.
"Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints," Pichai writes.
He went on to specifically rule out the use of Google AI in "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people."
In addition to addressing concerns about military uses of AI, Pichai wrote that Google's software will not be used in technologies that gather or use information for surveillance in violation of internationally accepted norms.
Read the blog post at: blog.google
Written by: James Delahunty @ 8 Jun 2018 3:51