IBM Watson Advises Death, Amazon A.I. Criminalizes Members of U.S. Congress

What are the biggest problems with Artificial Intelligence? Humans.

There is a big problem with AI: humans are the ones who provide it with data, and our data is often incomplete and flawed. Two recent examples illustrate this growing trend.

ToolBox.com reported:

"Hopes that artificial intelligence would usher in a new era of improved health care were severely blunted this week as it was revealed that IBM Watson had frequently suggested unsafe and incorrect treatments for cancer patients."

Apparently, IBM's Watson for Oncology software advised that a lung cancer patient with severe bleeding take a medically dangerous drug closely associated with increased hemorrhaging. Following that advice could easily have ended in the patient's death. One doctor “reportedly labeled the product ‘a piece of Sh*t.’” IBM then contended that the data given to Watson had been biased by the doctors who supplied it, which led to the improper recommendation.

In another example of a significant AI blunder, the American Civil Liberties Union tested Amazon’s face-ID software, Rekognition, in an effort to persuade Congress to bar law enforcement from using it. The ACLU argued that the software had absorbed racial biases from the people who created it and was concerned that it would misidentify people as a result.

According to Mashable, the ACLU “tested the real-time face identification software (which can identify every single face in a crowd) by comparing photos of members of Congress to mugshots, and 28 members were misidentified as the people in the mugshots. Of the 28 false positives in the study, six of the images were those from the Congressional Black Caucus. Furthermore, even though people of color representatives only comprise around 20 percent of Congress, they still comprised more than 40 percent of the false positives.”
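To see why that 40-percent figure matters, here is a quick back-of-the-envelope calculation comparing the false-positive rate for each group. It is a sketch only: the member counts below are rounded from the article's own percentages, not official ACLU figures, and the real numbers may differ slightly.

```python
# Rough check of the disparity the ACLU described. The group sizes are
# ASSUMPTIONS derived by rounding the article's own percentages
# (20% of Congress, 40% of the false positives); exact ACLU counts may differ.

TOTAL_MEMBERS = 535      # House + Senate
FALSE_POSITIVES = 28     # total misidentifications reported

members_of_color = round(0.20 * TOTAL_MEMBERS)       # ~107, per the quote
fp_members_of_color = round(0.40 * FALSE_POSITIVES)  # ~11, per the quote

white_members = TOTAL_MEMBERS - members_of_color           # ~428
fp_white_members = FALSE_POSITIVES - fp_members_of_color   # ~17

# Per-group false-positive rate: the share of each group wrongly matched.
rate_of_color = fp_members_of_color / members_of_color
rate_white = fp_white_members / white_members

print(f"False-positive rate, members of color: {rate_of_color:.1%}")  # ~10.3%
print(f"False-positive rate, white members:    {rate_white:.1%}")     # ~4.0%
print(f"Disparity ratio: {rate_of_color / rate_white:.1f}x")          # ~2.6x
```

Under these rounded assumptions, members of color were misidentified at roughly two and a half times the rate of their white colleagues, which is exactly the kind of disproportion the ACLU was warning about.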

What these cases show is that unless technologists find ways to work through their own biases, their technology will likely only propagate those shortcomings.

Reality Changing Observations:

Q1. What do you think can be done to create less biased technological creations?

Q2. When a machine radically misdiagnoses a patient, where do you think the blame should lie?

Q3. What do you think technology companies can do to better prevent racial bias?

Comments
Johannes

Move away from highly specialized AI and further down the path toward human-like AI: a system with the ability to question everything, one that works with probabilities, has congeniality and sympathy for the patient, and can research scientific publications on its own, something in the pattern of the robot Sophia's intelligence. The question also arises whether such a system must be granted some human rights.
