Artificial Intelligence

AI 'Thinks' Using Deep Neural Networks, But They're Still A Mystery

It's not clear how some artificial intelligence machines arrive at their conclusions.

Artificial intelligence based on deep neural networks is fascinating, to say the least. It can converse, drive cars, beat video games, even paint pictures and detect some types of cancer.

But how these machines do the things they do is mystifying even for the scientists who created them.

Here are the basics of how a deep neural network is constructed (a bare-bones code sketch follows the list):

Layers: There's an input layer that takes in raw data (images, numbers or words), an output layer that delivers the result, and at least one hidden layer in between.

Function: Each layer performs its own kind of sorting and ordering on the data.

Analysis: Deep neural networks can take in unstructured data and reach conclusions or make predictions from it.
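
To make those three pieces concrete, here's a bare-bones sketch in Python. Everything in it is illustrative: the layer sizes, the random weights and the numpy-only setup stand in for real systems, which have millions of weights learned from data rather than drawn at random.

import numpy as np

rng = np.random.default_rng(0)

# Input layer: four raw numbers standing in for pixels, words or amounts.
x = rng.standard_normal(4)

# Hidden layer: eight units, each a weighted sum of the inputs pushed
# through a simple nonlinearity (ReLU). This is where a layer's
# "sorting and ordering" happens.
W1 = rng.standard_normal((8, 4))
hidden = np.maximum(0.0, W1 @ x)

# Output layer: squash everything down to one score between 0 and 1.
W2 = rng.standard_normal((1, 8))
score = 1.0 / (1.0 + np.exp(-(W2 @ hidden)))

print(f"network's output: {score[0]:.2f}")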

Deep neural networks "learn" from one data analysis to the next. After a network has "learned" from thousands of sample dog photos, let's say, it can identify dogs in new photos as accurately as people can.
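
What "learning" means here is surprisingly mundane: after each pass over the examples, the network nudges its internal numbers so the next round of guesses is a little less wrong. A toy sketch, with made-up two-number "photos" standing in for real dog images and a single layer standing in for a deep stack:

import numpy as np

rng = np.random.default_rng(1)

# 1,000 fake "photos," each boiled down to two numbers, labeled 1 (dog)
# or 0 (not a dog) by a hidden rule the model has to discover.
X = rng.standard_normal((1000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)  # the model's adjustable knobs, initially clueless

for step in range(201):
    guess = 1.0 / (1.0 + np.exp(-(X @ w)))   # predictions between 0 and 1
    if step % 50 == 0:
        accuracy = np.mean((guess > 0.5) == y)
        print(f"pass {step:3d}: {accuracy:.0%} correct")
    w -= 0.1 * X.T @ (guess - y) / len(y)    # nudge knobs toward fewer mistakes

A real network repeats the same kind of nudge through many layers at once (backpropagation), over millions of images rather than a thousand made-up points.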

Identifying a dog might sound really simple, but, remember, we're talking about a computer here: a machine that taught itself from thousands of examples. And, using the same approach, it could learn to recognize different types of dogs.

Multiply that analysis thousands of times over, and consider this example from The New York Times: A computer program using a deep neural network to decide whether to give you a loan would look at your income, credit history, marital status and age. Given millions of past cases, along with their outcomes, the network could figure out when, for example, to give more weight to age and less to income, until it can take in a range of situations and accurately predict how likely each loan is to default.
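
That weighting is something the model works out on its own. Here's a sketch using synthetic loan data in which we plant the answer (age mattering three times more than income) so we can check that a simple model digs it back up; real credit models are far more complex, and heavily regulated besides:

import numpy as np

rng = np.random.default_rng(2)
n = 5000

# Synthetic applicants: standardized income, credit history and age.
X = rng.standard_normal((n, 3))

# Ground truth we control: age (1.5) matters three times more than
# income (0.5) in who ends up defaulting.
true_w = np.array([0.5, 1.0, 1.5])
defaulted = (X @ true_w + 0.5 * rng.standard_normal(n) > 0).astype(float)

# Train a bare logistic model and see whether it rediscovers the weights.
w = np.zeros(3)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - defaulted) / n

for name, weight in zip(["income", "credit", "age"], w):
    print(f"{name:>7}: learned weight {weight:+.2f}")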

But here's the challenge. It's not clear how deep learning algorithms arrive at their conclusions. The lack of transparency makes it hard to root out bias or algorithmic errors.

Think of doctors. They need to know how a program arrives at a cancer diagnosis, or any diagnosis for that matter. They can't just take it on faith. That's why companies are pushing for "explainable AI" or "transparent AI." The goal is deep neural networks that can explain the way they "think."

We already know there won't be a one-size-fits-all solution, given the variety of network designs. There are, however, two basic approaches: the network can show which variables led to a decision, or users can adjust the inputs and see whether that changes the conclusion.
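
Both procedures fit in a few lines against a toy model. The weights and the applicant below are made up, and the weight-times-value readout in the first procedure only works because this toy model is linear; doing the same for a genuinely deep network is an active research area:

import numpy as np

# A stand-in loan model: made-up "learned" weights for income, credit, age.
w = np.array([0.5, 1.0, 1.5])
applicant = np.array([-0.9, 0.2, 1.4])  # one made-up applicant

def predicts_default(x):
    return 1.0 / (1.0 + np.exp(-(x @ w))) > 0.5

# Procedure 1: show which variables pushed the decision, and how hard.
for name, contribution in zip(["income", "credit", "age"], w * applicant):
    print(f"{name:>7} pushed the score by {contribution:+.2f}")

# Procedure 2: adjust an input and see whether the conclusion changes.
before = predicts_default(applicant)
tweaked = applicant.copy()
tweaked[2] = 0.0  # the same applicant, but average-aged
print(f"default predicted before: {before}; after changing age: {predicts_default(tweaked)}")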

Beyond the "wow" factor of AI, we need a high-confidence factor before giving machines the keys to defense, medicine and finance.

It'll cost a pretty penny for their thoughts. But it's invaluable as our futures become increasingly intertwined.