Tooth And Tail: Lessons in Planning With Realistic Expectations

Tooth and Tail is simple. It has to be simple, because the game designers had a very challenging goal: make a Real-Time Strategy game that is reasonably playable on a console. Real-Time Strategy games are notorious for needing high-speed and complex inputs (professional StarCraft players' fingers perform over 400 actions per minute) that are simply not possible within the constraints of a console controller (even with all of the buttons they've added after Nintendo produced the perfect game controller in 1990). But the designers were smart: they looked realistically at the constraints of the system, and they crafted the game to fit those constraints. The result is a playable, enjoyable game about a Soviet-revolution-inspired rodent uprising on a farm. The designers of in-house corporate programs and databases need to learn the same realism about the actual uses of their programs.

I. Lesson in Project Design: Accept the Probabilities of Disaster so you can Plan for Prevention; Don’t Plan for Immortality and Invulnerability. (#dontbeateen)

In the digital age, there is an increased focus on preventing and eliminating errors. The promised outcome of flawless perfection is enticing, but the reality of inevitable problems requires that more effort be put into managing problems and recovering from disasters.

Computers amplify the speed and scale of what people can do. This makes it easier for people to do more, faster. It also makes mistakes bigger. Years after a British woman got 15 minutes of fame for accidentally ordering 500kg of chicken wings, Samsung accidentally made a $105 billion ghost.

Samsung Securities Co (a financial services company owned by conglomerate Samsung Group) tried to pay a dividend to their employees, but accidentally gave the employees shares instead. The 1,000 WON dividend became a 1,000 SHARE distribution, creating over $100 billion in new shares. Then some employees immediately sold those shares. A lot of safety measures failed in this story. The program should have been able to calculate that this order totaled over 1 trillion WON, more than 30 times the entire company value. A second human should have checked over the work for simple, obvious errors when there is a potential for this level of damage (anything at a company-wide level for a publicly-traded international corporation would certainly qualify). Several departments should have reviewed the work (compliance, risk, accounting, finance, legal - almost anyone!). Samsung's own internal compliance should also have prevented the sale of the ghost shares.
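The first failed safeguard is easy to express in code. The sketch below is purely illustrative - the function name, the 10% threshold, and the figures are hypothetical, not Samsung's actual systems - but it shows how a macro-level plausibility check can flag an order for human review before it executes:

```python
# A purely illustrative sketch - function name, threshold, and figures are
# hypothetical, not Samsung's actual systems. The idea: before executing a
# distribution, compare its total value against the scale of the company.

def check_distribution(quantity: float, unit: str, share_price: float,
                       market_cap: float) -> list:
    """Return reasons the order needs human review (empty list = no flags)."""
    warnings = []
    if unit == "shares":
        total_value = quantity * share_price
        # Macro-level plausibility check: an order worth a large fraction of
        # the whole company is almost certainly a fat-finger error.
        if total_value > 0.10 * market_cap:
            warnings.append(
                f"order value {total_value:,.0f} won exceeds 10% of market cap"
            )
    return warnings

# The fat-fingered order: shares-per-employee where won-per-share was meant.
# All numbers are illustrative.
flags = check_distribution(quantity=2_800_000_000, unit="shares",
                           share_price=35_000, market_cap=3_000_000_000_000)
print(flags)  # non-empty, so the order is routed to a second human
```

The specific cutoff doesn't matter; what matters is that some check compares the order against the scale of the company at all, turning a catastrophic macro-level error into a routine review ticket.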

II. A Lesson in Categorical (Or Macro) Errors: Some Mistakes are Annoying, Others Are Fatal. Design to Catch and Prevent, Not Headline and Damage Control. (#dontbeaceleb)

Mistakes happen a lot when computers are involved. Sometimes it's the user; sometimes it's a problem in the code. But when a user catches a problem, they can assess it in a broader context and determine just how bad the mistake is. A bigger mistake is simply more obvious to a human than to a computer.

Many years ago, a friend of mine got on a flight and found someone else sitting in his designated seat. Not wanting to cause trouble, he simply took the empty seat next to his designated one and prepared for the flight.  As the crew prepared for taxi and takeoff, a flight attendant welcomed passengers to their non-stop service to their destination city. Upon hearing this announcement, the woman next to my friend hurriedly gathered her belongings and fled the plane.

She wasn’t in the wrong seat. She was on the wrong plane.

Computer programs don't intuitively differentiate between the severity of errors: the wrong plane and the wrong seat are just two errors if you've never flown and don't have a broad concept of travel or the context of moving around a world. To a computer, being in the right seat is still pretty good, just like executing a financial order with the correct number is pretty good - even if the number is in the wrong field or tied to the wrong variable. What humans easily grasp, computers are often unlikely to infer. The right detail at a micro-level cannot remedy a catastrophic error at a macro-level.

User errors are inevitable. Programming errors are likely. The more we rely on computers, programs, and apps for the things that allow our lives to function, the more likely it is that our lives will be disrupted by programmer or user errors.

III. The Solution: Make The Programs Flexible, and Make Problems Fixable.

Tooth and Tail’s success is rooted in the realism of its game designers, who sacrificed dreams of a more complex game (that would have been unplayable) for the right game that fit the actual constraints and experience of the player. Designing with the actual user’s experience in mind—with special consideration for what can go wrong—is more important for project designers and programmers every day.

There is an increasing drive to use computers to prevent any errors, mistakes, or problems. However, these solutions only make problems worse, because they decrease flexibility in and around the program. The solution is to move in the opposite direction: programs need to play less of a role in trying to self-regulate and self-repair, while users and programmers take a larger role in guiding and overseeing the programs.

But wouldn’t this much red-tape bureaucracy be time-consuming? Wouldn’t it be inefficient to invest so much effort in a simple dividend payment? It would take time and resources, yes—but efficiency measurement is relative to scope (among other factors): it certainly appears inefficient if 6 people spend 10 minutes each to look at the same work and find no error. Here, we would conclude that a full hour of productivity was wasted. However, if 6 people took 10 minutes each and found a problem that would have cost 1,000 hours of productivity had it not been discovered, we conclude that we have a net gain of 999 hours of productivity.
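The efficiency arithmetic above can be made explicit. This sketch (the function name and numbers are illustrative, mirroring the example in the text) computes the net productivity of a review pass in the two cases from the paragraph:

```python
# Illustrative sketch of the review-efficiency arithmetic; the function name
# and numbers mirror the example in the text and are not a real metric.

def net_hours(reviewers: int, minutes_each: float,
              hours_at_risk: float, error_found: bool) -> float:
    """Hours of productivity gained (positive) or lost (negative) by review."""
    review_cost = reviewers * minutes_each / 60  # hours spent reviewing
    saved = hours_at_risk if error_found else 0  # hours rescued, if any
    return saved - review_cost

print(net_hours(6, 10, 1000, error_found=False))  # -1.0: one hour "wasted"
print(net_hours(6, 10, 1000, error_found=True))   # 999.0: the net gain
```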

Although problems like these cannot be entirely prevented or eliminated, they can be contained and managed. If a person is on the wrong plane, they can quickly determine the outcome of their choice and work on a solution. People will still board the wrong plane from time to time, but they don't have to end up in the wrong city as a result. Similarly, employees will make occasional typos and errors in their accounting and payroll, but that doesn't mean that financial markets have to be rocked as a result.


Regulating The Internet? Not the Tubes Themselves…

If Net Neutrality is an argument about economics (and federal administrative law), Content Regulation is an argument about ethics and culture.

Net Neutrality is becoming an old hobby horse for a lot of people. It gets a lot more attention than most telecommunications policy issues. Even though questions about copper wire lines vs. fiber optic cables actually affect more people, the internet is generally united by the fact of its own existence. This is about regulation at the highest level, determining the equality and/or equity of access to content. No one online is indifferent to the internet - the only debate about net neutrality is which policies are best for the consumer and the telecommunications marketplace (or, in the United States, "telecommunications marketplace").

But there is another layer of regulation that is quickly gaining attention. If Net Neutrality is about the form of the internet (its structure and broad organization), there is a growing need to consider questions about the regulation of the content of the internet. Over the years, the internet has been a vector for some amazingly good and amazingly bad actions by humans. The difference in the kind of regulatory concept at play is hard to overstate. Rather than comparing it to different video games, I would compare it to the difference between a video game and a tabletop game.

1) I’ve always been fascinated by the dawn of the computer age. My childhood was the tail-end of a world in which homes did not have internet access. By the start of law school, everyone looked up famous cases and Latin phrases on Wikipedia during class (except for the people who did the reading the night before- they looked it up before class). I’ve often compared the early days of the internet to a kind of Wild West setting: a lawless frontier where fundamental questions about the mold of civilization were not yet settled. I thought most of those questions would be settled by 2015. We are not close to a consensus on rules. Indeed, we are still testing what types of rules are feasible or desirable.

Video games are literally made of rules: the computer code that constitutes the game itself. Tabletop games are made of… usually cardboard, or some kind of paper. (Occasionally, they have some plastic – or even metal if you got the collector’s edition.) This may sound like a silly or vacuous distinction, but it has important ramifications for the kinds of problems that can happen in a game, and the kinds of solutions that will (or won’t) be effective.

2) Lawlessness can lead to problems. This was probably not obvious until after two decades of an unfettered internet, but now we know. Free to do anything, people have tried very hard to do everything. Every app, platform, hosting site, game, or program online that gets big enough eventually starts to experience just about every problem type that humans can present. From intellectual property disputes to death threats, from fraud to manslaughter, the internet has been a way for people to discover criminal behaviors that past generations never had the opportunity to access. The unethical choices of both multi-national companies and village simpletons are available for repeated viewing.

In a video game, the code can sometimes glitch and create problems for players. The code can also execute perfectly, but there may be complaints about the design of the game itself (a level being too difficult, or some power or tactic being unsuitably strong). With some difficulty, players can cheat by actually breaking the code, but many games can detect this (especially in professional e-sports settings). In a tabletop game, anyone can cheat, the rules may be wrongly applied (or not applied at all), and all manner of chaos can ensue. DDoSing an opponent during a game might be a little bit akin to literally flipping the table during a game of Monopoly or checkers.

3)  YouTube’s takedown system is already an example of an effort to regulate content, and it already shows some of the challenges with instituting a content regulation system: people will find ways to game that system. Any system of regulation will have two negative outcomes: it will penalize the innocent, and it will be dodged by the guilty. The most you can hope for is that it will protect most of the innocent and it will penalize most of the guilty. The US justice system, even when working as intended, will sometimes produce undesirable results: a guilty person will go free, and an innocent person will go to prison. The hope is that this happens very infrequently.

The most common reaction to bad behavior online has been for authoritative parties to do nothing. The most common reaction by authoritative parties to actually do something has been to ban the bad actor. The most common reaction to this ban is to come back with a different username or account.

In video games, cheaters are often banned (if they are making the game worse for other players). But in tabletop games, people who ruin the game are just not invited back. No one will play with them anymore. People might hang out with someone less if they behaved in a wildly unacceptable way during a casual weekend game of Risk or Werewolf. In a video game, bad behavior has very limited consequences. In a tabletop game, bad behavior can have lots of meaningful implications.

 

4) What would it look like to regulate content? Getting it wrong is easy - which is the primary reason that's what's going to continue to happen. Whether trying to penalize criminals or regulate behavior online, creating a fair and ethical system that consistently produces more good results than bad ones is difficult. One problem is that incentives are at odds: most platforms want to turn a profit, and if bad behavior yields a net gain, the platform needs a solution that will actually make more money than the current bad behavior does (plus the cost of implementing the remedy). Another problem is that platforms tend to think of regulating their content the way that most Americans think about regulation: an appointed governing authority (or combination of authorities) handing down and enforcing rules.

 

Conclusion

You can't make people be good, but you can keep deleting the manifestations of their behavior on the internet: you can suspend or ban accounts, and eventually IP addresses. You can automatically censor strings of characters, and continually update the list of banned strings. These will continue to be the solutions offered, and they will continue to mostly fail while they almost half-succeed.
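Censoring banned strings sounds straightforward, but a minimal sketch (with a hypothetical word list) shows why it only half-succeeds: naive substring matching is trivially dodged by the guilty and routinely penalizes the innocent - the classic "Scunthorpe problem":

```python
# A minimal sketch of naive string censoring; the banned list is hypothetical.
# It demonstrates both failure modes: trivial misspellings dodge the filter,
# while innocent messages get caught by substring matching.

BANNED = {"spamword"}

def is_blocked(message: str) -> bool:
    """Block a message if any banned string appears anywhere inside it."""
    text = message.lower()
    return any(bad in text for bad in BANNED)

print(is_blocked("buy spamword now"))    # True:  the guilty, caught
print(is_blocked("buy sp4mword now"))    # False: the guilty, dodging
print(is_blocked("spamwordsmith here"))  # True:  the innocent, penalized
```

Smarter filters (normalization, fuzzy matching, context models) shift the failure rates around, but neither failure mode ever fully disappears.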

Over a decade ago, Lawrence Lessig asserted that behavior is regulated by four forces: market, cultural, legal, and architectural. It turns out that enforcing the legal type of regulation in a digital space is very difficult. But cultural norms practically enforce themselves. And architectural constraints are always already enforced. Market rules can be fickle, but persuasive. A lot of efforts to regulate content will fail because they hinge on the concept of legal enforcement.

The lack of rules and regulations is what made the internet a place where amazing things could happen. Without rules to stop imagination and creativity, people created art, solved problems, built positive communities, and enriched themselves and each other. In that same landscape: without rules to stop hate and anger, people created harassment and bullying, invaded privacy, ruined lives, occasionally killed people, and destroyed a lot of good in the world. Lawless frontiers are the best opportunity for the most beautiful, important, and inspiring expressions of humanity. They are also the best opportunities for the most despicable, dangerous, and damaging expressions of humanity. What the internet becomes will be decided—has always been decided—by what people bring to it.

Horizon: The Dawn of Zero Privacy?

Horizon: Zero Dawn is a problem because I don't know which game I have to slide out of my top 5 in order to fit it into that list. (It might have to replace "Child of Light," which pains me, but replacing any would pain me… maybe "Outlaws" will move to #6 …) It's an incredible game in its own right, with beautiful artwork, well-written characters, and genuinely fun gameplay. I find its story especially fascinating - and particularly relevant as we grapple with a framework for governing and living in an age of digital information and interconnected devices. Though its central technological focus is on Artificial Intelligence and the future of humanity, it touches a multitude of topics - including data privacy.

Although Judge Richard Posner famously decried privacy as a way for bad people to get away with bad things, privacy is important for personal development and free association. Privacy is essential to our culture, and it is only valuable inasmuch as it is protected and reliable. Our expectations of privacy follow us into our digital extensions. However, one of the best methods of securing privacy is impractical in the face of consumer demands for interconnection and convenience.

I. Can We Have Privacy by Design When We Demand Designs that Compromise our Privacy?

The Federal Trade Commission's favored method for protecting privacy is "Privacy By Design." In simple terms, this often means designing a product to collect and rely on as little data as possible. After all, if no data is collected, there is no data to steal. However, there are serious questions about the feasibility of this approach in the face of consumer expectations for interconnected devices.

Privacy by Design is a much better idea than the sophomoric approach of simply piling on security measures. Designing a house not to be broken into is better than just putting a good lock on the front door. To put it another way: think of it as building a dam without holes rather than trying to plug all of the holes after you finish building.

I’ve heard tech entrepreneurs talk about “The Internet of Things” at conferences for many years, now. They talk about it like it’s a product currently in development and there’s an upcoming product launch date that we should be excited about- like we can line up for outside of a retail store hours before the doors open so we can be the first to get some new tech device. This is not how our beloved internet was created. Massive networks are created piece by piece- one node at a time, one connection at a time. The Internet of Things isn’t a tech product that will abruptly launch in Q3 of 2019. It’s a web of FitBits, geolocated social media posts, hashtags, metadata, smart houses, Alexas and Siris, searches, click-throughs, check-ins, etc. The “Internet of Things” is really just the result of increasingly tech-savvy consumers living their lives while making use of connected devices.

That’s not to diminish its significance or the challenges it poses. Rather, this highlights that this “Coming Soon” feature is really already here, growing organically. Given that our society is already growing this vast network of data, Privacy by Design seems like an impossible and futile task. The products and functions that consumers demand all require some collection, storage, or use of data: location, history, log-in information- all for a quick, convenient, personalized experience. One solution is for consumers to choose between optimizing convenience and optimizing privacy.

II. A Focus on Connected Devices

Horizon: Zero Dawn is a story deliberately situated at the boundary of the natural world (plants, water, rocks, trees, flesh and blood) and the artificial world (processed metals, digital information, robotics, cybernetics). As a child, Aloy falls into a cavern and finds a piece of ancient (21st century) technology. A small triangle that clips over the ear, this "Focus" is essentially a smartphone with Augmented Reality projection (sort of… Jawbone meets Google Glass and Microsoft HoloLens). This device helps to advance the plot, often by connecting with ancient records that establish the history of Aloy's world (it even helps with combat and stealth!).

It's also a privacy nightmare. The primary antagonist first sees Aloy - without her knowledge - through another character's Focus. Aloy's own Focus is hacked several times during the game. A key ally even reveals that he hacked Aloy's Focus when she was a child and watched her life unfold as she grew up. (This ultimately serves the story as a way for the Sage archetype to have a sort of omniscience about the protagonist.) For a girl who grew up as an outcast from her tribe, living a near-solitary life in a cabin on a mountain, with the only electronic device in a hundred miles, she manages to run into a lot of privacy breaches. I can't imagine what would happen if she tried to take an Uber from one village to the next.

Our interconnected devices accumulate astonishing volumes of data - sometimes very personal data. In a case heard by the Supreme Court this month, an Ohio man's location was determined by his cell phone provider; the police obtained this information and used it as part of his arrest and subsequent prosecution, raising the question of when law enforcement needs a warrant to access cell phone data. (This is different from the famous stalemate between the FBI and Apple after the San Bernardino shooting, when Apple refused an order to unlock the iPhone of a deceased criminal.) As connected devices become omnipresent, questions about data privacy and information security permeate very nearly every facet of our daily lives. We don't face questions about data the way that one "faces" a wall; we face these questions the way that a fish "faces" water.

From cell phone manufacturers to social media platforms, the government confronts technology and business in a debate about the security mechanisms that should be required (or prohibited) to protect consumers from criminals in myriad contexts and scenarios. In this debate, the right answer to one scenario is often the wrong answer for the next scenario.

Conclusion: Maybe We Don’t Understand Privacy In a New Way, Yet

The current cycle of consumer demand for risky designs followed by data breaches is not sustainable. Something will have to shift for privacy in the 21st century. Maybe we will rethink some part of the concept of privacy. Maybe we will sacrifice some of the convenience of the digital era to retain privacy. Maybe we will try to rely more heavily on security measures after a breakthrough in computing and/or cryptography. Maybe we will find ways to integrate the ancient privacy methods of the 20th century into our future.