There is a fundamental problem with AI: humans are the ones who supply its data, and our data is often incomplete and flawed. Two recent examples illustrate this growing trend.
"Hopes that artificial intelligence would usher in a new era of improved health care were severely blunted this week as it was revealed that IBM Watson had frequently suggested unsafe and incorrect treatments for cancer patients."
Apparently, IBM Watson’s Oncology software advised that a lung cancer patient with severe bleeding take a medically dangerous drug closely associated with increased hemorrhaging. Following that advice could well have ended in the patient’s death. One doctor “reportedly labeled the product ‘a piece of sh*t.’” IBM contended that the data given to Watson had been biased by the doctors who supplied it, which led to the improper recommendations.
In another example of a significant AI blunder, the American Civil Liberties Union tested Amazon’s face-ID software, Rekognition, in an effort to persuade Congress to forbid law enforcement from using it. The ACLU found that the software reflected the racial biases of the people who created it, and it was concerned that the system would misidentify people as a result.
According to Mashable, the ACLU “tested the real-time face identification software (which can identify every single face in a crowd) by comparing photos of members of Congress to mugshots, and 28 members were misidentified as the people in the mugshots. Of the 28 false positives in the study, six of the images were those from the Congressional Black Caucus. Furthermore, even though people of color representatives only comprise around 20 percent of Congress, they still comprised more than 40 percent of the false positives.”
What all of this shows is that unless technologists find ways to work through their biases, their technology will likely only propagate those shortcomings.
Reality Changing Observations:
Q1. What do you think can be done to create less biased technologies?
Q2. When a machine radically misdiagnoses a patient, where do you think the blame should lie?
Q3. What do you think technology companies can do to better prevent racial bias?