Saturday, April 7, 2018

Experts Raise Red Flags on Google-Pentagon Joint AI for Terror Hoax Profits

RT | Apr 7, 2018

© U.S. Air Force
Hundreds of Google employees are up in arms over the company's partnership with the Pentagon in AI technology, fearing it may be used for war. Experts told RT the "questionable" alliance could result in "disaster for humanity."

Google employees wrote a letter to the company's CEO, Sundar Pichai, calling on the US tech giant to immediately pull out of a controversial program that many fear could be used for warfare.

"We believe that Google should not be in the business of war," the letter obtained by The New York Times and published earlier this week stated.

Gizmodo broke the news about Google's partnership with the US Department of Defense (DoD) last month, adding that Project Maven, whose stated mission is to "accelerate DoD's integration of big data and machine learning," was established in April 2017. The project will see Google developing AI surveillance to help the US military scrutinize video footage captured by US government drones "to detect vehicles and other objects, track their motions, and provide results to the Department of Defense."

Google claims that the technology is human-friendly and is actually designed to "save lives" and "scoped to be for non-offensive purposes." But Noel Sharkey, Emeritus professor of AI at Sheffield University, told RT that the fears of Google employees "are correct."

Comment: Google's claims are lies. The military is in the business of killing and bombing because that is how they keep their corporations in profit and maintain the terror hoax, so that the build-up of their constant war false flags keeps strengthening. This is why Russia has considered creating a separate internet, as the one we have is going to the dogs.

The Maven program "is all about bringing AI to the immediate conflict zone," he argued, adding that Google may simply be too naïve here about the real use of its technology.

"Once you start working with the military, you have no control over what they use your product for, and that's very worrying," Professor Sharkey said.

He cautioned that while drones now have human operators, who are at least "looking at the target, engaging with the target and trying to calculate its legitimacy," things can take a drastic turn.

"If Google's imagery is very good, they will stop using that operator, allow robots to go out on their own, find their own targets and kill them without human intervention. And this is a disaster for humanity."

And there is another concern here – privacy.

"Google is a global company and is working for the Pentagon now, and the Pentagon is the United States. For me, in Britain, it means it's a foreign power. How far will they slide into bed with the Pentagon?" Sharkey said.

"Google own most of our data, and I don't want the Pentagon having my data."

The US Department of Defense spent a whopping $7.4 billion on AI-related areas last year, according to the Wall Street Journal.

The experts who spoke to RT say the million-dollar question is whether "this is going to lead to saving lives, or is it going to lead to more use of the technology, more drone strikes, more countries engaging in this use of the technology?"

It's really "questionable," physicist and arms control researcher at the University of North Carolina Dr. Mark Gubrud told RT.

"It's very exciting to see a movement arise among Google employees of concern about their company's contribution to the world's drift towards autonomous weapons, killer robots."

According to the Intercept, Google is busy developing technology that will allow drone analysts to "interpret the vast image data vacuumed up from the military's fleet of 1,100 drones to better target bomb strikes against the Islamic State."

Comment: Know that when they say they are going to be bombing Islamic State, something fishy is going on. Read Washington and Riyadh's Terror Enterprise.

This April marks five years since the launch of the Campaign to Stop Killer Robots. Its supporters object to "permitting machines to determine who or what to target on the battlefield," pointing to numerous problems, including ethical and legal ones.

"Bold action is needed before technology races ahead and it's too late to preemptively ban weapons systems that would make life and death decisions on the battlefield," Steve Goose, arms division director at Human Rights Watch, and co-founder of the Campaign to Stop Killer Robots, said in a statement in November.

Comment: Strengthening the military at this point in time is insane as there are no wars that should be occurring at this stage that are not psyops that are designed to build regime changes and maintain the terror hoax for profits.
