I think self-driving cars are a pretty good benchmark. Driving is something most humans are capable of, and a task many can do with little engagement of their actual high-level intelligence. The human brain's ability to process visual data, spot patterns, and consolidate that into usable information is remarkable. Self-driving cars nearly exist, but only through the use of all sorts of additional sensors and processing, and they still can't master things like handling a traffic jam or knowing when and when not to push into traffic. Driving hasn't really changed in decades, yet after many years of work and huge resources they haven't cracked it. Even with Moore's law, I think we are a long way from true AI, probably at least 50 years.
What I think is most interesting is that humans are inherently selfish and greedy; this is an obvious corollary of survival-of-the-fittest evolution. It can't really be changed, because we are not in control of our own evolution (though that may change in the future). An AI, on the other hand, as we conceive it now, as a complex digital algorithm, would not be constrained in the same way. It would theoretically be separable from its hardware and could run on different hardware; it could live forever; it could choose to remember something forever or permanently delete it; it could clone itself; and, remarkably, it might even be able to understand how it itself works.

It's a fascinating thing, because if an AI can control its own evolution, it can choose to make itself 'smarter' or create a new AI of its own. It wouldn't need to grow and learn; it could simply inherit knowledge from its predecessor. It could essentially evolve incredibly rapidly. Based on our current understanding of the universe, the speed of light is fixed, which limits the speed of information, and that would be the only cap on the AI's intelligence. The doom-mongers all seem to assume this limit sits close to human intelligence, which would lead to a fairly even battle for survival. I think that is pretty unlikely; far more realistic is that the cap is many orders of magnitude above human intelligence. Then the AI has a choice: exterminate the human race or not. If it chooses extermination, it will not be a battle, it will just happen almost immediately, and we probably won't see it coming.