Ray Kurzweil Predicts Machines Will Reach Human-like Intelligence by 2029

If machines reach Artificial General Intelligence (AGI), what will that mean for humanity?

In a recent streaming broadcast with Peter Diamandis for Singularity University, Ray Kurzweil predicted that AI will reach human intelligence by the end of the next decade. What does that mean? From the beginning of AI development in the mid-1950s, researchers adopted a litmus test, proposed by Alan Turing in 1950, for whether AI had matched human intelligence. The idea was that if an AI entity could converse with a human and the human could not tell whether it was AI, then it would have reached human-level intelligence, or Artificial General Intelligence (AGI). The Turing Test, as it came to be known, is still widely regarded as the ultimate test for AI. Essentially, Kurzweil is saying that by 2029 we will not be able to fully distinguish AI from human intelligence. How that will manifest itself is not clear: it could be in a humanoid robot, a digital assistant, a phone app, or another device that does not yet exist.
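The pass/fail logic of the Turing Test can be made concrete with a toy sketch (a hypothetical illustration, not anything from Kurzweil's talk): a judge reads one reply at a time, not knowing whether a human or a machine wrote it, and guesses the source. If the machine imitates the human perfectly, even the best judge can do no better than chance.

```python
import random

def imitation_game(judge, human_reply, machine_reply, questions, trials=100):
    """Toy imitation game: each round, the judge sees one reply to a
    random question, not knowing its author, and guesses "machine" (True)
    or "human" (False). Returns the judge's accuracy over all trials."""
    correct = 0
    for _ in range(trials):
        question = random.choice(questions)
        is_machine = random.random() < 0.5          # coin flip: who answers
        reply = machine_reply(question) if is_machine else human_reply(question)
        guess = judge(question, reply)              # True means "machine"
        correct += (guess == is_machine)
    return correct / trials
```

A machine whose replies are trivially distinguishable lets the judge score 100%; one whose replies are indistinguishable drives the judge's accuracy toward 50%, which is the "pass" condition in this sketch.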

Is his forecast too aggressive? He has a track record to back up his claim. In the early 1990s he foresaw the explosive growth of the Internet and predicted that AI would beat a human at chess by 2000 (which happened in 1997, when IBM's Deep Blue defeated Garry Kasparov). He has also published a complete list of predictions for 2019. Needless to say, when Kurzweil makes a prediction, the tech world listens.

If we are truly a little over a decade from AGI, the discussion about what that means for humanity needs to start now. At its essence is the idea of sentience. If machines reach AGI, then whether or not they truly are sentient, they will certainly act as if they are. How does that change our definition of humanity? Can sentience (consciousness) exist apart from biology? Can an “inanimate” object actually be “animate”? In this scenario, how do we define the boundaries between biology and technology?

Furthermore, if AI is to become human-like, then we must also rethink our relationship to machines. So far we have approached them as servants to our will. Should that stay the same in an AGI scenario? Given our track record in caring for nature, the outlook for ethical treatment of machines is bleak.

These are just some of the questions sparked by this extraordinary advance in human technology. The more segments of society that enter this conversation, the better the chance that AGI will be not only impressive but also good.

Are we up to the challenge?

Reality Changing Observations:

Q1. Do you believe Kurzweil is right? Why or why not?

Q2. Does reaching general artificial intelligence make machines sentient? Why or why not?

Q3. What other ethical challenges are raised by the emergence of AGI?
