Spoiler: Not much.

I’ll quote Sam Harris, from this little talk:

All you have to grant to get your fears [about AI] up and running is that:

  • We will continue to make progress in hardware and software design (unless we destroy ourselves some other way), and
  • There’s nothing magical about the wetware we have running inside our heads, and an intelligent machine could be built of other material.

[…]

Once you’ve granted those two propositions, you’ll now be hard pressed to find a handhold with which to resist your slide into real concern about where this is all going.

I’d like to do a little back-of-the-envelope calculation to establish a rough timeline for our worries:

We can start with 2050, a common median estimate: the year by which there is roughly a 50% chance of achieving AGI-level capability.

Then it is just a matter of a few years for that AGI (Artificial General Intelligence, human-level) to become an ASI (Artificial Superintelligence, superhuman). See Nick Bostrom’s “How long before superintelligence?”
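
To make the arithmetic explicit, here is a minimal sketch in Python. The 2050 median comes from the estimate above; the five-year AGI-to-ASI transition is just my placeholder for “a few years”, not a figure taken from Bostrom:

```python
# Toy back-of-the-envelope AI timeline (illustrative assumptions, not predictions)
AGI_MEDIAN_YEAR = 2050  # year with an estimated ~50% chance of AGI (see above)
TAKEOFF_YEARS = 5       # assumed placeholder for the "few years" from AGI to ASI

asi_year = AGI_MEDIAN_YEAR + TAKEOFF_YEARS
print(f"Rough 50%-probability date for ASI: {asi_year}")

# Whether this lands "in our own lifetimes": age at that date, by birth year
for birth_year in (1960, 1980, 2000):
    print(f"Born in {birth_year}: {asi_year - birth_year} years old in {asi_year}")
```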

For a more thorough analysis, see:

It looks like we will have to deal with this within our own lifetimes…

To finish, just a few reminders:

“By a ‘superintelligence’ we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” (Nick Bostrom, “How long before superintelligence?”)

Introducing a superintelligence into our society would be akin to having aliens stop by.

A quote from Peter Thiel:

“The first question we would ask if aliens landed on this planet is not, ‘What does this mean for the economy or jobs?’ It would be, ‘Are they friendly or unfriendly?’”



To sum up: