Algorithms Can't Fix Us

Algorithms are not a societal cure-all; we must first address the underlying social ills.

Last week at the AI Now symposium in New York, researchers emphasized that applying algorithms to social problems is not an adequate response by itself. Algorithms are only a stopgap measure; the root of the problem must also be treated.

Algorithms have been deployed with negative consequences in facial-recognition systems and government-run healthcare, highlighting underlying inequalities and existing inefficiencies. Although algorithms can be used to eliminate individual biases, systemic biases find their way into the algorithms and can be even more destructive.

In an article for Quartz, Dave Gershgorn writes that we should not assume that AI develops independently of its creators' biases. The inequality and discrimination that a person or system holds will invariably show up in the algorithm. For example, Sherrilyn Ifill, president of the NAACP Legal Defense Fund, argues that if authorities addressed bias in law enforcement and the criminal justice system before implementing facial recognition, tax dollars would be spent more efficiently on creating AI that is equitable. Ifill notes that facial-recognition systems have already been shown to be less accurate at identifying people with darker skin; the results would therefore be inaccurate if such a system were combined, as is, with an offender database made up primarily of persons of color.
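To see how those two problems compound, here is a rough back-of-the-envelope sketch in Python. Every number in it (database size, racial makeup of the database, error rates) is a hypothetical placeholder, not a figure from Ifill or from any real system; the point is only the arithmetic of a skewed error rate meeting a skewed database.

```python
# Toy illustration of how a biased match rate compounds with a skewed
# database. All numbers are hypothetical placeholders, not real figures.

def expected_false_matches(database_size: int, share: float, fpr: float) -> float:
    """Expected wrongful matches for one group when searching the database."""
    return database_size * share * fpr

DATABASE_SIZE = 10_000                      # hypothetical database size
SHARE_DARKER, SHARE_LIGHTER = 0.70, 0.30    # hypothetical database skew
FPR_DARKER, FPR_LIGHTER = 0.010, 0.002      # hypothetical false-positive rates

darker = expected_false_matches(DATABASE_SIZE, SHARE_DARKER, FPR_DARKER)
lighter = expected_false_matches(DATABASE_SIZE, SHARE_LIGHTER, FPR_LIGHTER)

print(f"Expected false matches, darker-skinned entries:  {darker:.0f}")   # 70
print(f"Expected false matches, lighter-skinned entries: {lighter:.0f}")  # 6
print(f"Disparity: {darker / lighter:.1f}x")                              # 11.7x
```

Under these made-up inputs, the group that is both over-represented in the database and matched less accurately absorbs more than ten times the wrongful matches.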

Another area where algorithms are not the cure is Pennsylvania's Allegheny County foster care system, which uses an algorithmic screening tool to evaluate calls reporting child endangerment. The system measures racial bias by the share of calls that are turned into investigations. Virginia Eubanks, author of the book Automating Inequality, says the screening tool misses the existing societal problem entirely: black and biracial families are reported for child endangerment 350% more often than white families. So even if the percentage of calls investigated is equal across groups, the number of cases opened on families of color is far greater.
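Eubanks's distinction between rates and counts is easy to miss, so here is a toy calculation. The 350% reporting disparity comes from the paragraph above (read here as 3.5 times the rate); the call volume and the tool's investigation rate are invented purely for illustration.

```python
# Toy calculation of Eubanks's rates-vs-counts point. The 350% reporting
# disparity is from the article (read as 3.5x the rate); the call volume
# and the tool's investigation rate are hypothetical.

CALLS_WHITE = 1_000                    # hypothetical calls on white families
CALLS_BLACK = int(CALLS_WHITE * 3.5)   # reported 350% more (article figure)
INVESTIGATION_RATE = 0.20              # hypothetical: 20% screened in, same
                                       # for both groups ("equal" percentage)

opened_white = CALLS_WHITE * INVESTIGATION_RATE
opened_black = CALLS_BLACK * INVESTIGATION_RATE

# The screened-in rate is identical, but the number of cases opened tracks
# the biased reporting that happens upstream of the algorithm.
print(f"Cases opened on white families:          {opened_white:.0f}")   # 200
print(f"Cases opened on black/biracial families: {opened_black:.0f}")   # 700
```

A tool that audits only the investigation rate would report no bias here, even though the system opens three and a half times as many cases on black and biracial families, because the disparity enters before the algorithm ever runs.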

As AI moves into our everyday lives, it is important that we organize against the unchecked influence and impact of AI systems. Unfortunately, even as public awareness grows about the need for oversight and transparency, faulty systems are being deployed at a rapid pace.

Read more HERE.

Reality Changing Observations:

Q1. What can public health, welfare, and education do to ensure their algorithms are free from biases?

Q2. What can we do to foster equality in our neighborhoods and local government?

Q3. Why is it so hard for developers to design algorithms without biases?
