AI on the Horizon: A New Dawn of Data Science?

Writing about Horizon: Zero Dawn in the context of data privacy is a bit of a joke. It’s looking straight past the elephant in the room to comment on an interesting lamp. The central premise of the story is that humanity created powerful artificial intelligences, put them in extremely advanced robotic bodies, and thereby pretty directly caused the total annihilation of all life on Earth by 2065 (the robots used biomass as fuel). With barely a year left, humans recognized their impending destruction and hastily banded together to create several new AI systems equipped to reboot life (and humanity) after the machines consumed all life and then themselves lost power. It’s a fascinating science-fiction story that raises questions about the complex interconnections of economics, ecology, technology, and humanity. Artificial intelligence sits at the core of this story, as both lifegiver and deathbringer; as both the thing that is controlled and the thing that controls.


AI As The Child That Cannot Define Reality

AI is very dependent on its creator. The basic framework of its realities and possibilities is defined by its creator. So far, we make AIs that basically do what we do on a bigger scale: we teach the program about correlation, and it starts doing the math, using larger data sets than we can manage and computing much more quickly than we can. But it is fundamentally locked into the system of objectives, limits, and tools that we define (or fail to define) for it. There is some question of whether AI will think in categorically different ways than humans do. AI is very good at finding correlations and probabilities, but I don’t know if AI will ever conceptualize causation the way that humans do. I don’t think that humans are good at answering “why” when asked about the world, and I don’t think AI will do substantively better. In fact, AI is likely to do worse, if only because it doesn’t need to care: if an AI can find a correlation between two things strong enough to reliably predict an outcome, it has no reason to ask why those things correlate or why they are good predictors.
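
To make that concrete, here is a minimal sketch, in Python with NumPy and pandas, of what “finding correlations at scale” amounts to. Everything in it is invented for illustration (the data, the column names, the 0.9 threshold): the program flags every pair of columns that move together, and nothing in the process asks why they do.

```python
# A toy "correlation finder": scan every pair of columns for strong
# correlations. All data and names are synthetic, for illustration only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=0)
df = pd.DataFrame(rng.normal(size=(10_000, 50)),
                  columns=[f"feature_{i:02d}" for i in range(50)])
# Plant one genuinely correlated pair so the scan has something to find.
df["feature_50"] = 2 * df["feature_00"] + rng.normal(scale=0.1, size=10_000)

corr = df.corr()  # every pairwise Pearson correlation, in one call
strong_pairs = [
    (a, b, round(corr.loc[a, b], 3))
    for a in corr.columns
    for b in corr.columns
    if a < b and abs(corr.loc[a, b]) > 0.9
]
print(strong_pairs)  # e.g. [('feature_00', 'feature_50', 0.999)]

# The machine reports *that* these columns move together; nothing in
# this process asks, or could ask, *why* they do.
```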


Can AI Superiority in Statistics Outshine Human Capacities for Understanding?

Data science usually tries to serve one of two purposes: find a correlation or provide an explanation. Statisticians are pretty good at the first one, and computers can be really fantastic at it. But the second requires “understanding” the data (and exploring that key word quickly leads to some of the core questions about philosophy of mind and human identity at the heart of AI). Humans are often not good at truly understanding the data they perceive. I’m not convinced that AI will be able to understand data (at least in my lifetime), and I suspect that explanations require understanding. AI will be able to detail the correlations that it finds; it can show its work, in essence. But except for the most dedicated disciples of Hume, humans suspect that causation is different from correlation. In a fun play on words, I expect that AI will be quite Hume-ean for the next several decades.
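
As a small illustration of the gap (synthetic numbers, invented series names): any two quantities that merely both trend upward over time will correlate almost perfectly, and the arithmetic alone cannot say whether that reflects causation, a shared confounder, or coincidence.

```python
# Two unrelated series that both trend upward over time correlate almost
# perfectly. The numbers and names are made up; the point is the r value.
import numpy as np

rng = np.random.default_rng(seed=1)
years = np.arange(2000, 2020)
cheese_sales = 100 + 3.0 * (years - 2000) + rng.normal(scale=2.0, size=20)
phd_awards = 500 + 12.0 * (years - 2000) + rng.normal(scale=8.0, size=20)

r = np.corrcoef(cheese_sales, phd_awards)[0, 1]
print(f"Pearson r = {r:.3f}")  # typically around 0.99

# A correlation-finder flags this pair with complete confidence. Deciding
# whether it means anything (shared trend? confounder? coincidence?) is
# the "understanding" step, and nothing in the statistics supplies it.
```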


I’m Not Against Big Data, But…

The reason I’m consistently skeptical of the belief in Big Data as some kind of panacea of the future is twofold: 1) a loss of context (how the individual creates the data), and 2) a loss of application (how the data applies to the individual). Big Data is prone to certain types of misinterpretation based on the lack of context that is a by-product of reducing lives to their data points. This makes Big Data difficult to understand and apply, especially when AI is analyzing it. AI works very comfortably with data points, but it does so without ever asking about the nature or meaning of those data points.


Correlation Describes, Understanding Predicts

It’s tempting to think it doesn’t matter how Big Data and AI solve problems, as long as they solve the problem. A marketing department doesn’t need to know why people in Exampletown buy their product; they just need to know where they’re selling well! That’s fine as a reporting tool, but what about as a predictive tool? And what about as a feedback tool for future product development? If the consumers love the packaging, but marketing thought it was the flavor that kept the product flying off the shelves, marketing could be completely dumbfounded when their new packaging rolls out and sales plummet. Getting something to work is all well and good, but you need to know why things are working if you want to repair, improve, expand, or optimize.
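
Here is a sketch of that exact scenario, with every number invented: by construction, only packaging drives sales, but flavor and packaging rise and fall together in the historical data, so a model trained on flavor alone “predicts” sales beautifully, right up until the packaging changes.

```python
# The marketing scenario, simulated. Everything is invented: only packaging
# truly drives sales, but flavor is correlated with packaging historically,
# so a flavor-based model looks great until the packaging changes.
import numpy as np

rng = np.random.default_rng(seed=2)
n = 1_000

packaging = rng.normal(size=n)                      # the true driver of sales
flavor = packaging + rng.normal(scale=0.3, size=n)  # historically correlated
sales = 50 + 10 * packaging + rng.normal(scale=2.0, size=n)

# Fit "sales ~ flavor" by least squares. In-sample, it looks excellent.
slope, intercept = np.polyfit(flavor, sales, 1)
print("in-sample r:", round(np.corrcoef(flavor, sales)[0, 1], 2))  # ~0.94

# New launch: same great flavor, redesigned (worse) packaging.
new_flavor, new_packaging = 2.0, -1.0
predicted = intercept + slope * new_flavor  # the model expects strong sales
actual = 50 + 10 * new_packaging            # reality: sales plummet
print(f"predicted sales {predicted:.1f} vs actual {actual:.1f}")
```

The model never fails as a description of the historical data; it fails as a prediction, because it encoded a correlation rather than the reason the correlation held.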


Conclusion: “We Shape Our Tools, Then Our Tools Shape Us.”

Because AI and Big Data are poorly disposed to explaining the world, people (especially in business) are more likely to play to the tools’ strength: describing the world by finding correlations. People will follow the example of the AI and shy away from seeking understanding or meaning in the data.

In Horizon: Zero Dawn, the threat of AI was really its control of enormous robots that consumed life. I still don’t expect the real problems with AI to be quite so theatrical. Rather, I have concerns about the power of AI to limit the plasticity and creativity of the human mind. There is a real danger that AI will teach people to think more in terms of correlation and less in terms of explanation. Correlation is a very useful statistical tool, and there are a lot of projects that can be accomplished with it. Explanation is not always necessary or appropriate, but it is still a very important tool in the toolkit of human thought.

