Professor John Searle is noted for the “Chinese Room” thought experiment, which demonstrates something that artificial intelligences lack but humans seem to possess. Computers can detect certain strings of characters, but they cannot grasp meaning, purpose, or significance. Just as a person could follow a rulebook to convert symbols from one language into another without understanding either, computers only relate symbols according to a set of instructions given to them. AI does not grasp meaning or significance; it acts not out of will, but out of code. Combined with the related difficulty of whether AI could have anything akin to consciousness, I am not worried about the Hollywood image of the Robot Uprising.
The greater worry is our dependence on a system that can vanish. Our lives are made of user IDs and passwords; I have over 50 now. Some I don’t have to memorize: my passport number, my driver’s license. But sometimes I have to look up my own social security number, which is unthinkable to my parents, who did not have to hold 5 e-mail account passwords, 3 social media passwords, 2 computer logins, 3 videogame passwords, and 2 bank account logins in their heads.
The real fear is not that the system will awaken to self-consciousness and the robots will rise up, but that someone will trip over a mainframe plug and suddenly jerk the system offline. As more businesses go paperless and more of our data is stored “in the cloud,” questions of system security and system integrity are far more pressing than the concern that the system will become sentient, develop a will, and then turn that will against biological life.