Meet Norman, the world's first psychopathic A.I., trained on Reddit

Leave it to Reddit to make AI go full psychopath.

The artificial intelligence revolution has begun. Soon enough, we'll all be unwitting slaves to machines in a Matrix-style body farm.

Well... maybe not. But it's a real fear that many people share. Stephen Hawking warned about the dangers of artificial intelligence, as has Elon Musk. But in order to understand how to prevent (or, let's be honest, delay) the impending robot apocalypse, we need to see what happens when AI goes rogue.

What better way to do that than by simply letting AI explore the internet?

Meet Norman. Created by a team of scientists at MIT, Norman was fed deliberately biased data to see how it would affect his behavior. And affect his behavior it did. The team trained Norman on content from the darkest corners of Reddit to understand how he "sees" pictures.

So, logically, they went to subreddits like r/watchpeopledie (NSFW, obviously).

After being steeped in descriptions of truly horrifying content (seriously, there's a reason I didn't link directly to the subreddit here), Norman was given a Rorschach test and asked to caption the inkblots he was presented with. According to Inverse.com:

“The first rule of this subreddit is that there must be a video of a person actually dying in the shared post,” the team explained. “Due to ethical and technical concerns and the graphic content of the videos, we only utilized captions of the images (which are matched with randomly generated 9K inkblots similar to Rorschach images), rather than using the actual images that contain the death of real people.”

Damn, Norman. That "angsty teenage emo" phase hit you hard.

I highly doubt AI like Siri or Alexa will ever go this dark this easily, but the experiment with Norman really raises a lot of questions.

What's interesting to me is how easily Norman was influenced by some of the worst content humanity has to offer. It raises questions about how easily AI can be influenced by humans, but also about whether an AI can "re-learn" without having the whole system reset.
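To make that concrete, here's a toy sketch in Python. This is not the MIT team's actual code, and the captions below are invented stand-ins; the point is just the mechanism. Two identical "captioners" (each nothing more than a word-transition table) are trained on different corpora, and given the same input and the same random seed, they describe it in completely different ways, because a model can only recombine what its training data contained.

```python
import random
from collections import Counter

# Invented stand-in captions -- NOT the data MIT used.
NEUTRAL_CAPTIONS = [
    "a black and white photo of a small bird on a branch",
    "a close up of a vase filled with flowers",
    "a group of people flying kites in a park",
]
DARK_CAPTIONS = [
    "a man is shot and falls to the ground",
    "a man gets pulled into a machine",
    "a man is electrocuted while crossing the street",
]

def train(captions):
    """'Train' a captioner: count which word follows which (a bigram table)."""
    transitions = {}
    for cap in captions:
        words = ["<s>"] + cap.split() + ["</s>"]
        for a, b in zip(words, words[1:]):
            transitions.setdefault(a, Counter())[b] += 1
    return transitions

def describe(model, seed, max_words=12):
    """Caption an 'inkblot'. Note that the input image is irrelevant here:
    the model can only echo patterns from its training captions."""
    rng = random.Random(seed)
    word, out = "<s>", []
    while len(out) < max_words:
        nxt = model[word]
        word = rng.choices(list(nxt), weights=list(nxt.values()))[0]
        if word == "</s>":
            break
        out.append(word)
    return " ".join(out)

standard = train(NEUTRAL_CAPTIONS)
norman = train(DARK_CAPTIONS)

# Same "architecture", same sampling seed -- only the training data differs.
print("standard:", describe(standard, seed=7))
print("norman:  ", describe(norman, seed=7))
```

A real captioning network is vastly more complex than a bigram table, but the failure mode is the same one the MIT team demonstrated: Norman isn't broken, he just never saw anything else.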

Would a computer program, something that values objectivity over everything, be capable of finding value in a human life?

How would AI view and talk to other AI programs?

This reminds me of a great episode of Radiolab about driverless vehicles. In it, they discuss how autonomous cars will save thousands of lives every year; at some point, however, someone will be killed in an accident while the computer is driving.

Who's to blame in instances like that? What happens when the car has to choose between hitting one person and hitting a group of five?

It's a version of the trolley problem that we as a society will have to find an answer for sooner rather than later.

What do you think?
