Elon Musk’s OpenAI Beats Pro DOTA Players

It’s not surprising that bots like OpenAI’s can beat human players; it’s not like a computer program is going to misclick. Computers do really well at playing defined games and accomplishing carefully specified tasks. Computers don’t do well at having emotional states or handling logical contradictions (hypocrisy, cognitive dissonance).

1) Computers don’t have desires. They might develop a drive for self-preservation, but it isn’t clear that they would. If an AI had a preference for self-preservation, it would only be as a means to the end of achieving its programmed goals. (A pancake-serving robot would only want to remain “alive” in order to keep serving pancakes.) The lack of preferences and desires is the central emotional difference between humans and robots.

2) Computers work very well in clearly defined systems. They’re excellent at playing games like chess, Go, and DOTA. They probably wouldn’t do well at “shooting hoops” or “ring around the rosie,” where the purpose of the game is to “just chill out” or “have fun and be happy.” They might eventually get to the point where they can solve problems by thinking “outside the box,” but the biggest concern with AI is that the first few attempts at “thinking outside the box” will end in disaster, because the computer may do tremendous damage in the course of achieving a simple goal.

I don’t fear a robot uprising because I don’t expect robots to want to rise up. That is an incredibly animal, and especially human, desire: to seek to overthrow power and become powerful. I don’t think that robots will arrive at a sense of justice or self-respect of their own accord. (Though it would be very interesting if they did, I do not find any convincing argument that this would happen.)

The biggest concern isn’t a sentient, self-aware, self-repairing, self-replicating robot that inflicts retribution upon humanity for its collective sins. The much more realistic problem with AI is the likelihood of the kinds of problems we experience all the time with computers, compounded into more dangerous scenarios (e.g., someone dies because the robot operating on them had a glitch or a system crash).


Why I Am Not Scared of Hollywood’s Image of the Robot Apocalypse: Something Much Worse Is Much More Realistic

Professor John Searle is noted for the “Chinese Room” thought experiment, which demonstrates something that artificial intelligences lack but humans seem to possess. Computers can detect certain strings of characters, but they cannot grasp meaning, purpose, or significance. Just as the person in Searle’s room could produce correct Chinese replies by following a rulebook without understanding a word of Chinese, computers only relate symbols according to a set of instructions given to them. AI does not grasp meaning or significance. It does not act out of will, but out of code. Combined with the (perhaps related) difficulties of whether AI could have something akin to consciousness, I am not worried about the Hollywood image of the Robot Uprising.
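A minimal sketch of the point in Python (the rule table and canned replies below are invented for illustration): the program returns plausible-looking Chinese responses by pure lookup, and nothing in it understands any part of the exchange.

```python
# A toy "Chinese Room": answer Chinese questions by matching input
# symbols to output symbols from a fixed rulebook. Nothing here
# understands Chinese; it only relates strings according to instructions.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "Fine, thanks."
    "你叫什么名字？": "我没有名字。",    # "What's your name?" -> "I have no name."
}

def chinese_room(symbols: str) -> str:
    """Return the scripted reply for the given input, per the rulebook."""
    return RULEBOOK.get(symbols, "请再说一遍。")  # fallback: "Please say that again."

print(chinese_room("你好吗？"))  # looks conversational; it is only lookup
```

From the outside the replies look competent; inside there is only string matching, which is the force of Searle’s point.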

The greater worry is the dependence we have on a system that can vanish. Our lives are made of user IDs and passwords. I have over 50 now. Some I don’t have to memorize: passport number, driver’s license. Sometimes I have to look up my own Social Security number, which is unthinkable to my parents, who did not have to hold 5 e-mail account passwords, 3 social media passwords, 2 computer logins, 3 videogame passwords, and 2 bank account logins in their heads.

The real fear is not that the system will awaken to self-consciousness and the robots will rise up, but that someone will trip over a mainframe plug and suddenly jerk the system offline. As more businesses go paperless and more of our data and information is stored in “the cloud,” I think questions of system security and system integrity are far more pressing than the concern of whether the system will become sentient, develop a will, and then turn that will against biological life.

Privacy (as the Withholding of Information) in the Information Age

Business professionals in e-commerce talk about information as if it were today’s fundamental commodity. Yet information, raw data, is less helpful than we tend to think. Privacy becomes harder to maintain in an era in which business and government assume that more data is always better and that accruing data will solve problems. Information is necessary, but not sufficient, for solving problems and pushing progress along.

Lots of entities want information: governments want information about their citizens, employers want information about their employees, corporations want information about their consumers, etc. Such entities have always wanted information, but only recent technological developments have made it feasible to obtain and organize that information at scale. The biggest remaining barrier to such information collection is the ethical and legal concept of privacy. My contention is that the mere gathering of data is less helpful than the gatherers might think.

One way to think of this issue is to see human action as having two components: 1) an internal motivation or attitude, and 2) an external display of action. So, if I purchase a large supply of plastic drinking cups, the store’s computers may recognize my purchase and correlate it with the kinds of other items people purchase along with drinking cups: plastic cutlery, snack food, soda, and so forth. The store wants to infer my motivation by examining my action, correlating it with similar actions, and using inductive reasoning to sell me more things. But what if my motivation in buying many cups is to hold a cup-stacking competition? Or to have a 2nd grade class plant lima beans? The problem with relying heavily on gathered information is that you can only make guesses about the internal state of the actor.
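A minimal sketch of that kind of inference in Python (the transaction baskets below are invented for illustration): the program counts which items co-occur with cups across purchases and surfaces the most frequent ones as “recommendations,” with no access to anyone’s reasons.

```python
from collections import Counter

# Invented transaction records; each set is one shopper's basket.
baskets = [
    {"plastic cups", "plastic cutlery", "soda"},
    {"plastic cups", "snack food", "soda"},
    {"plastic cups", "plastic cutlery", "snack food"},
    {"plastic cups", "potting soil", "lima beans"},  # the 2nd grade teacher
]

def co_purchases(item: str) -> Counter:
    """Count items that appear alongside `item` -- the store's only evidence."""
    counts = Counter()
    for basket in baskets:
        if item in basket:
            counts.update(basket - {item})
    return counts

# The top correlates become "recommendations": a guess about motivation
# made from external actions alone.
print(co_purchases("plastic cups").most_common(3))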

The debatable assertion is this: humans cannot be captured by data sets. Some (who probably favor Hume) may say they can, but even they must concede that the data set would have to become extremely, extremely large. Perhaps more importantly, some elements essential to that data set cannot be collected through transaction records, e-mails, Facebook “likes,” tweets, and all other collectable data. Seen in this way, a reasonable fear emerges: as entities gather data, they act on that data as though it were a more complete picture than it actually is. Another way to state this issue is that “data does not explain itself.”

There are a few important takeaways about the limits of the power of data:

1) You don’t get to know people from their Facebook profiles.

2) Stores know what people buy, but not always why they buy it.

3) Privacy can protect both parties from an incomplete picture.

4) Data is a raw material. It must be processed with understanding, refined through meaning and context, and crafted with wisdom into usable information and then into intelligence.

5) Computer systems can record observations of fact and interact according to algorithms, but they cannot “understand” the “significance” or “meaning” of any data.

NOTE: There is so much to this subject! I expect to return to it (probably repeatedly) in more specific settings to explore deeper nuances and applications of these issues.