It’s not surprising that bots like OpenAI’s can beat human players; a computer program isn’t going to misclick. Computers do really well at playing defined games and accomplishing carefully specified tasks. Computers don’t do well at having emotional states, or at handling logical contradictions (hypocrisy, cognitive dissonance).
1) Computers don’t have desires. They might develop something like a drive for self-preservation, but it isn’t clear that they would, and even if an AI had such a preference, it would only be a means to achieving its programmed goals. (A pancake-serving robot would only want to stay alive in order to keep serving pancakes.) This lack of genuine preferences and desires is the central emotional difference between humans and robots.
2) Computers work very well in clearly defined systems. They’re excellent at playing games like chess, Go, and DOTA. They probably wouldn’t do well at “shooting hoops” or “ring around the rosie,” where the purpose of the game is to “just chill out” or “have fun and be happy.” They might eventually get to the point where they can solve problems by thinking “outside the box,” but the biggest concern with AI is that the first few attempts at thinking outside the box will end in disaster, because a computer may do tremendous damage in the course of achieving a simple goal.
I don’t fear a robot uprising because I don’t expect robots to want to rise up. That is an incredibly animal, and especially human, desire: to seek to overthrow power and become powerful. I don’t think that robots will arrive at a sense of justice or self-respect of their own accord. (Though it would be very interesting if they did, I do not find any convincing argument that this would happen.)
The biggest concern isn’t a sentient, self-aware, self-repairing, self-replicating robot that inflicts retribution upon humanity for its collective sins. The much more realistic problem with AI is the kind of failure we experience all the time with computers, just compounded in more dangerous scenarios (e.g., someone dying because the robot operating on them had a glitch or a system crash).