on July 05, 2017 at 2:26 PM
In science fiction and real life alike, there are plenty of horror stories about humans trusting artificial intelligence too much. They range from letting the fictional Skynet control our nuclear weapons to letting Patriot missile batteries shoot down friendly planes or letting Tesla's Autopilot crash into a truck. At the same time, though, there's also a danger in not trusting AI enough.
As conflict on earth, in space, and in cyberspace becomes increasingly fast-paced and complex, the Pentagon’s Third Offset initiative is counting on artificial intelligence to help commanders, combatants, and analysts chart a course through chaos — what we’ve dubbed the War Algorithm (click here for the full series).
But if the software itself is too complex, too opaque, or too unpredictable for its users to understand, they’ll just turn it off and do things manually. At least, they’ll try: What worked for Luke Skywalker against the first Death Star probably won’t work in real life. Humans can’t respond to cyberattacks in microseconds or coordinate defense against a massive missile strike in real time. With Russia and China both investing in AI systems, deactivating our own AI may amount to unilateral disarmament.
Abandoning AI is not an option. Neither is abandoning human input. The challenge is to create an artificial intelligence that can earn the human's trust, an AI that seems transparent or even human.
“Clausewitz had a term called coup d’oeil,” a great commander’s intuitive grasp of opportunity and danger on the battlefield, said Robert Work, the outgoing Deputy Secretary of Defense and father of the Third Offset, at a Johns Hopkins AI conference in May. “Learning machines are going to give more and more commanders coup d’oeil.”
Conversely, AI can speak the ugly truths that human subordinates may not. “There are not many captains that are going to tell a four-star COCOM (combatant commander) ‘that idea sucks,’” Work said, “(but) the machine will say, ‘you are an idiot, there is a 99 percent probability that you are going to get your ass handed to you.’”
Before commanders will accept an AI's insights as useful, however, Work emphasized, they need to trust it and understand how it works. That requires intensive "operational test and evaluation, where you convince yourself that the machines will do exactly what you expect them to, reliably and repeatedly," he said. "This goes back to trust."
Trust is so important, in fact, that two experts we heard from said they were willing to accept some tradeoffs in performance in order to get it: a less advanced, less versatile, even less capable AI is better than a brilliant machine you can't trust.
Source of featured graphic: Music and Artificial Intelligence (1993), by Chris Dobrian
"…it's safe to conclude that AI will be a mandatory part of every new technology start-up within the next two years. It's also safe to conclude that there won't be a sector of the economy untouched by AI…"
https://medium.com/@johnrobb/how-the-ai-revolution-creates-new-work-b523986a0886 [this framework is by John Robb: a very good read]