Popularizing Formats For Sitting At a Table and Having a Spirited Discussion

Mediation has a surprising amount in common with the tabletop game Dungeons and Dragons.

1) Most people know very little about either one.

2) People who have heard of either one often think it’s a waste of time, and may deride those who support it.

3) Neither is promoted in mainstream culture.

4) The formats look similar: several people sit around a table. One person seems to be “in charge,” but really, that person is just providing structure and clarity so the other people at the table can actually make meaningful decisions.

5) Neither one has a final, decisive ending that declares a winner. Rather, the purpose for both activities is to have a mutually satisfying experience and outcome; everyone wants to walk away from the table feeling like it was a worthwhile investment of 3 hours (… or 5 hours… or 18 hours…).

6) The enemy that must be defeated is abstract in both cases. For D&D, it’s the… well, the Dungeons and the Dragons that must be overcome (the naming is extremely clear). In mediation, it’s the conflict itself that is the enemy, not the other person.

More people than ever are playing D&D, and some are even filling theaters to watch professionals play it. Can mediation find the same increased acceptance in our culture?

 

The Wizardry of Brand Management

D&D has surged in popularity in the last few years. The owner of the game and the brand, Wizards of the Coast (WotC), has rebuilt and redesigned the rules and format several times since acquiring the property in 1997. When launching the 5th edition of the game in 2014, WotC leveraged social media to demonstrate how the game worked. The 5th edition was easier to understand, easier to play, and easier to watch than any previous edition. These changes made it more inviting for new players and turned it into much more of a spectator event, which fit neatly with streaming services like Twitch and YouTube. Enthusiasts started to publish their own gaming sessions online, effectively turning a gaming product into a TV show—sort of a strange inverse of how most children’s cartoons of the 80s and 90s worked to sell toys. Like so many of the video games that now make up esports, D&D collected an avid fan base and consistent spectators who fill streams and theaters. Podcasts, streams, and live performances have introduced thousands of new players to the game and rekindled the imaginations of those who have not rolled a twenty-sided die in decades.

For all their broad similarities, mediation has not exactly kept pace with D&D’s surge in popularity. Despite the overwhelming differences in cost, time, and (arguably) effectiveness, litigation remains the gold standard for dispute resolution in matters of legal consequence in the US.

Courtroom dramas (and “procedurals” generally) have done well on US television. A regular program centered on mediation could easily do as well as any long-running legal procedural. Wizards of the Coast brought D&D out of derision and obscurity (even shedding its alleged satanic affiliations) by making it comprehensible and accessible. They used every available tool to present an alien and esoteric game structure in a way that was engaging and entertaining, while gently informing viewers who simply watched the process.

 

Two Obstacles To Mediation’s Popularity

There is a snag in the economics of promoting mediation: Wizards of the Coast is financially incentivized to promote its D&D product, but many wealthy people and companies are not necessarily incentivized to promote mediation as a primary form of dispute resolution. Trials can be incredibly expensive, and their complexity and cost often favor the side with more money to hire more experienced attorneys. Those with advantages of any kind, in any setting, are typically unwilling to give those advantages up. If the US legal system creates any advantage for those with power or wealth, it is easy to see why power and wealth would not be used to promote an alternative method of dispute resolution.

The other primary obstacle is the lack of cohesive ownership of mediation. D&D is a gaming product owned by a single company, so decisions about its brand management are made by a single entity. Mediation is a broad structure for dispute resolution, not owned by any particular body; indeed, it is not the kind of thing that is subject to trademark or patent protection. There are trade groups and individual specialists who would like to see mediation increase in popularity, but there is no single entity with resources and authority over mediation, nothing comparable to the relationship of a company with its product. That lack of ownership makes branding extremely difficult. Wizards of the Coast can manage D&D carefully, shutting down counterfeit products and distinguishing itself in the gaming market; no one can do the same for mediation.

 

The Cultural Boost for the Competitive over the Cooperative

If popularity is about brand management, mediation seems condemned to obscurity because that brand can’t be effectively managed.

But how did litigation get popular without a trademark and a livestream? Perhaps the adversarial attitudes in litigation fit naturally with a competitive culture. Litigation so often becomes about beating the other side, rather than beating the conflict itself. Mediation is most successful when each side sees the obstacle as the conflict itself, and everyone works together to defeat that problem—not to defeat each other.

Despite the epithet of “rules lawyer” to describe many D&D players, a society that played more cooperative tabletop games would probably be less litigious. Taking a few hours to learn to work with someone who has different personal objectives from your own is an unusual activity in our culture, but learning to listen and cooperate might have value in an increasingly interconnected and networked society.


Evil Vines Choking Out Unenumerated Protections (An Afterthought on Legislating for Changing Technologies)

Legislation always faces a problem of enforcement. That problem can take many shapes: lower courts or police may refuse to enforce the law, citizens may refuse to obey the law en masse, or crafty schemers may look for loopholes and technicalities so they can effectively break the law without penalty. There are multiple laws, cases, opinions, and other legal indications that children merit special and particular protection online and in digital interactions. However, there is no law specifically forbidding inflicting digital violence on a child’s avatar in a game until the child pays non-digital money, and I’m almost surprised it took so long for someone to find that opportunity. I think Penny Arcade misunderstands the problem. The problem is that all of those legal efforts to protect children could never cover every possible way that someone might try to exploit a child in a digital setting. When someone wants to exploit people for money, they only worry about the law in three ways: not getting caught, not getting tried, and not getting convicted.

This kind of example raises concerns not just in the video game industry, but across every industry affected by the new General Data Protection Regulation. It would be unfairly cynical to hypothesize that every company is nefarious, of course. A good many companies have a genuine desire to uphold the GDPR rights of their users, and their task is to work toward official compliance with the GDPR’s requirements; a few will even go beyond that minimum and take further measures for privacy and security. Nevertheless, some controllers and processors still want to exploit their users, and their task is now to figure out how to sneak over, around, or through the GDPR.

 

In Both Overcooked And The GDPR, Execution Matters More Than Ingredients

I deliberately avoided playing Overcooked for a long time because so many reviews joked about the fights it causes between friends. Now that I’ve played it, I barely understand why it’s such a divisive experience for so many people. The game is charming and delightfully fun. Players work together in kitchens filled with obstacles (food and tables often move during the round, forcing players to adapt) to prepare ingredients and assemble meals for a hungry restaurant – though the diners are sometimes floating on lava flows, and sometimes the diners are penguins. The game is about coordinating and communicating as you adapt to changes within the kitchen. Maybe the reason so many people throw rage fits during this game is that they are not good at coordinating an effort and communicating effectively. In any case, the game isn’t about food so much as it’s about kitchens (especially restaurant kitchens): it doesn’t focus on the ingredients so much as it teaches the importance of working together in chaotic situations.

People are focusing a lot on the ingredients of the new EU data privacy law, particularly the consumer protection rights enumerated in it. However, there is very little talk about the bulk of the law, which is devoted to coordinating the enforcement and monitoring mechanisms that will try to secure those consumer rights. The rights listed in the GDPR are great ingredients, but as Overcooked teaches, it takes both ingredients and execution to make a good meal.

Supervisory Authority: How We Get From Ingredients to Meal

I’ve read a lot of articles about the General Data Protection Regulation, and I notice two common points in almost all of them: 1) the GDPR lists data privacy rights for consumers, and 2) this is a positive thing for consumers. However, after reading the entire law, I think this is a gross oversimplification. The most obvious omission is the overwhelming portion of the statute devoted to “Supervisory Authorities.” The GDPR may list a lot of consumer rights, but it also specifically details how those rights are to be enforced and maintained. The law prescribes a coordinated effort between controllers, processors, supervisory authorities, and the European Data Protection Board.

As described in Article 51(1), a supervisory authority is a public authority “responsible for monitoring the application of this Regulation, in order to protect the fundamental rights and freedoms” that the GDPR lists. Each EU member state is required to “provide for” such an authority. I can only speculate that this would look like a small, specialized government agency or board. This supervisory authority is required to work with the various companies that hold and process data (“controllers” and “processors” in the GDPR) to ensure compliance and security. The supervisory authority is responsible for certifications, codes of conduct, answering and investigating consumer complaints, monitoring data breaches, and other components of a comprehensive data privacy program. The supervisory authority must constantly and actively ensure that the rights in the GDPR are made real.

If the supervisory authority can’t coordinate the effort with the controllers and processors, the rights in the GDPR are just delicious ingredients that were forgotten about and burned up on the stove.

Computers Are Not Problem Solvers- Computers Are the Problem We Must Solve.

The New Checkout Cashier That Doesn’t Care If You Starve

There is an effort to use a simple AI at the office where I work. Some slick salespeople sold the building two cutting-edge, top-of-the-line automated checkout machines. These machines have a camera that stares at a designated check-out square. People simply select the items they wish to purchase and place them in the designated area. The camera recognizes the items, registers the purchases, and the person then swipes their card to complete the purchase. However, the camera sometimes does not recognize an item, and there’s no other method for buying the item when this happens. I leave my snack or drink by the incredibly expensive and completely useless machine. Betrayed by technology and the salespeople who sold the devices to facilities management, I walk back to my desk in anger and disgust.

It’s a simple story, but an increasingly common one: we start to rely on technology, and when it fails, we just hit a wall. It’s not clear to me what advantages the camera offers over a scanner (which is used elsewhere in the same cafeteria for self-checkout). This kind of story will become more common as more people rely on smart homes, smart fridges, smart dishwashers, smart alarm clocks, and so on. The “smartness” behind each of these is rudimentary AI: recognizing patterns and sometimes making simple predictions. The hope is that the technology will understand its role and take a more proactive approach to helping humans.

However, the technology doesn’t understand its role, and it really doesn’t care about helping humans. When AI encounters an error, it doesn’t go into “customer service mode” and try to help the human achieve their goal. It doesn’t try to resolve the problem or work around it. It just reports that there was an error. If a retail employee did this, it would be the equivalent of being told “I can’t ring up this item,” and then watching the employee walk off to the break room. Most people wouldn’t return to a store with that level of customer service. People born before 1965 would probably even complain to the manager or the local community newspaper.
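For contrast, here is a minimal sketch of the kind of fallback chain the machine lacks. It is written in Python, and every function and name in it is invented for illustration (this is not the vendor’s actual software); the point is only that a failed recognition should route to another path instead of dead-ending at an error message.

```python
# Hypothetical sketch: graceful degradation for an automated checkout.
# All names are invented; nothing here comes from the real machine's software.

from typing import Optional


def recognize_with_camera(image: bytes) -> Optional[str]:
    """Stand-in for the vision model; returns an item code, or None on failure."""
    return None  # simulate the failure mode described above


def read_barcode() -> Optional[str]:
    """Stand-in for the ordinary scanner already used elsewhere in the cafeteria."""
    return "SNACK-042"


def checkout(image: bytes) -> str:
    item = recognize_with_camera(image)
    if item is None:
        item = read_barcode()            # fallback 1: the older, reliable tool
    if item is None:
        return "attendant requested"     # fallback 2: escalate to a human
    return f"charged for {item}"


print(checkout(b"blurry photo of a granola bar"))  # -> "charged for SNACK-042"
```

None of this is sophisticated. “Customer service mode” is mostly a design decision about what happens after the first failure, and it has to be made before the machine ships.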

These problems can be resolved, but the fixes are rarely designed into the technology at release. I’ve hit this problem with the checkout machines at work about 7 times over 7 months (and I don’t even try to use them more than about once a week), yet I am aware of no effort to improve the situation. Because the designers probably never use the machines, there’s a good chance no one in a position to fix the problem is even aware of it.

More Dangerous Places to Put AI: Cars and Financial Markets

The fundamental problems of AI are annoying and disappointing when they deny us snacks or try to sell us shoes we already bought. But these problems are amplified from “annoying” to “tragic” and from “disappointing” to “catastrophic” when they manifest in vehicles and financial markets. If our AI checkout machine doesn’t care whether people can purchase food, what else are we failing to get AI to care about in other applications?

AI is the newest technology, which means it is subject to all of the failures of older technology (power outages, code errors, broken hardware) as well as new failures all its own (AI-specific problems that sometimes actively resist resolution).

None of this is anti-technology; on the contrary, I think AI is a fantastic development that should be used in many applications. But that doesn’t make it a great (or even acceptable) tool for every application. A warning that hammers should not be used to put screws through windows is not a diatribe against hammers, screws, or windows. It’s just a caution that those things may not mix in a way that yields optimal results.

“Extra! Extra! Trademarks Show Consumers Sources!” What Telecommunications Can Learn From IP Law To Combat “Fake News”

Formal news broadcasts play a role in a lot of story-focused games: Deus Ex: Human Revolution centers on a news broadcaster, and the game culminates in the decision of which news story to broadcast. StarCraft II’s Terran campaign lets the player explore the storyline by watching news broadcasts. Borderlands 2 lets players follow news broadcasts from different sources as the main storyline progresses. In each case, there is a gap between the news story that is presented and the information the player has. Whether the news is explicitly propaganda, merely biased, or simply missing information, each game underscores the fallibility of the news as a primary source of information.

Subjectivity, bias, and context can change the interpretation of a news story. Words themselves can also be subject to changes in context and intent. The term “Fake News” gained fame when Trump used it to accuse CNN of, essentially, being left of center. More recently, however, it has been used to refer to Russian hackers spreading propaganda and disinformation on Facebook and other social media under the guise of unbiased, traditional-style news media.

Trademarks: How We Know What is From Whom

The goal of trademarks is to reduce consumer confusion by establishing a clear connection between goods or services and their manufacturer or provider. This consumer knowledge is considered essential to a healthy marketplace and, in many cases, to consumer safety. Applying the same fundamental concepts of trademark law to telecommunications law might help combat certain forms of so-called “Fake News.” By requiring each news source to register digital certificates with social media platforms, consumers could be more confident about the source of their information. The information might still carry the biases of the institution, editor, or author of the piece, but the consumer would be aware of that possibility from the first contact with the article or video. Just as trademarks do not enhance the quality of a good or service, digital certification would not ensure high-quality, unbiased news containing only perfect information. Similarly, even under robust trademark law, counterfeiting (and other violations) still occurs. There would be a risk of hacking attacks that allow “Fake News” to be published under the name of a news source that did not actually produce it. However, such a hack can be addressed and corrected in ways that are not possible in a news marketplace with no identifying information attached to news releases.
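As a rough illustration of the registration idea, here is a minimal sketch in Python using the cryptography package: a hypothetical outlet signs an article with its private key, and a platform verifies the signature against the public key the outlet registered. The outlet name, article text, and function names are all invented, and a real system would also involve certificate authorities, key rotation, and revocation, none of which is shown here.

```python
# Hypothetical sketch: a news outlet signs an article, a platform verifies it.
# Requires the "cryptography" package (pip install cryptography).

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The outlet generates a key pair once; the public half is what it registers
# with the platform.
outlet_key = Ed25519PrivateKey.generate()
registered_public_key = outlet_key.public_key()

# Publishing: the outlet signs the article body.
article = b"Exampletown Gazette: City council approves new park."
signature = outlet_key.sign(article)


def verify_source(body: bytes, sig: bytes) -> bool:
    """What the platform runs before labeling the article with the outlet's name."""
    try:
        registered_public_key.verify(sig, body)
        return True
    except InvalidSignature:
        return False


print(verify_source(article, signature))                        # True: label it
print(verify_source(b"tampered or spoofed story", signature))   # False: flag it
```

The signature does not make the story true or unbiased, just as a trademark does not make a product good; it only tells the reader who actually published it.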

Trademark law may even be brought to bear directly on the Fake News problem. News outlets often develop their own styles and designs that remain consistent over time, eventually becoming associated in the minds of consumers with the outlet. This could be interpreted as trade dress, and a case could be made that it is a type of intellectual property subject to legal protection. Enforcement would likely be very difficult against foreign, anonymous violators, but creating a culture of more regimented, clearly defined news outlets would help consumers spot outliers that don’t fit the known providers—and treat such newcomers with appropriate scrutiny and supplemental research.

 

 

AI on the Horizon- a New Dawn of Data Science?

Writing about Horizon: Zero Dawn in the context of data privacy is a bit of a joke: it looks straight past the elephant in the room to comment on an interesting lamp. The central premise of the story is that humanity created powerful artificial intelligences, put them in extremely advanced robotic structures, and thereby pretty directly caused the total annihilation of all life on Earth by 2065 (the robots used biomass as fuel). With barely a year left, humans recognized their impending destruction and hastily banded together to create several new AI systems equipped to reboot life (and humanity) after the machines destroyed life and then themselves lost all power. It’s a fascinating science-fiction story that raises questions about the complex interconnections of economics, ecology, technology, and humanity. Artificial intelligence is at the core of this story, as both lifegiver and deathbringer, as both the thing that is controlled and the thing that controls.

 

AI As The Child That Cannot Define Reality

AI is very dependent on its creator: the basic framework of its realities and possibilities is defined by its creator. So far, we make AIs that basically do what we do, on a bigger scale. We teach the program about correlation, and it starts doing the math, using larger data sets than we can manage and computing much faster than we can. But it is fundamentally locked into the system of objectives, limits, and tools that we define (or fail to define) for it. There is some question of whether AI will think in categorically different ways than humans do. AI is very good at finding correlations and probabilities. I don’t know if AI will ever conceptualize causation the way that humans do. I don’t think humans are good at answering “why” when asked about the world, and I don’t think AI will do substantively better. In fact, AI is likely to do worse, if only because it doesn’t need to care: if AI can find a sufficiently high correlation between two things to reliably predict an outcome, it has no reason to ask why those things correlate or why they are good predictors.

 

Can AI Superiority in Statistics Outshine Human Capacities for Understanding?

Data science usually tries to serve one of two purposes: find a correlation or provide an explanation. Statisticians are pretty good at the first one, and computers can be really fantastic at it. But the second requires “understanding” the data (and exploring that key word quickly drives at some of the core questions about the philosophy of mind and human identity at the heart of AI). Humans are often not good at truly understanding the data they perceive. I’m not convinced that AI will be able to understand data (at least in my lifetime), and I suspect that explanations require understanding. AI will be able to detail the correlations it finds—it can show its work, in essence. But except for the most dedicated disciples of Hume, humans suspect that causation is different from correlation. In a fun play on words, I expect that AI will be quite Hume-ean for the next several decades.

 

I’m Not Against Big Data, But…

The reason I’m consistently skeptical of the belief in Big Data as some kind of panacea is twofold: 1) a loss of context (how the individual creates the data), and 2) a loss of application (how the data applies to the individual). Big Data is prone to certain types of misinterpretation based on the lack of context that is a by-product of reducing lives to their data points. This makes Big Data difficult to understand and apply, especially when AI is analyzing it. AI works very comfortably with data points, but it does so without ever asking about the nature or meaning of those data points.

 

Correlation Describes, Understanding Predicts

It’s tempting to think it doesn’t matter how Big Data and AI solve problems, as long as they solve the problem. A marketing department doesn’t need to know why people in Exampletown buy their product; they just need to know where it’s selling well! That’s fine as a reporting tool, but what about as a predictive tool? And what about as a feedback tool for future product development? If consumers love the packaging, but marketing thought it was the flavor that kept the product flying off the shelves, marketing could be completely dumbfounded when the new packaging rolls out and sales plummet. Getting something to work is all well and good—but you need to know why things are working if you want to repair, improve, expand, or optimize.
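Here is a toy simulation of that packaging/flavor mix-up, in Python with entirely made-up numbers: in the historical data, packaging actually drives sales, but flavor scores happen to track packaging, so flavor correlates with sales just as convincingly. Change the packaging and the flavor-based story quietly falls apart.

```python
# Toy simulation with invented numbers: correlation describes the old data,
# but it cannot say what happens when the true driver (packaging) changes.

import numpy as np

rng = np.random.default_rng(0)
n = 1_000

packaging = rng.normal(size=n)                        # the real driver of sales
flavor = packaging + rng.normal(scale=0.3, size=n)    # co-varies with packaging
sales = 5 * packaging + rng.normal(scale=1.0, size=n)

# Flavor looks like a great "explanation" of sales in the historical data.
print("corr(flavor, sales):", round(np.corrcoef(flavor, sales)[0, 1], 2))

# Marketing keeps the flavor but rolls out new, worse packaging.
new_packaging = packaging - 2.0
new_sales = 5 * new_packaging + rng.normal(scale=1.0, size=n)

print("old average sales:", round(sales.mean(), 2))
print("new average sales:", round(new_sales.mean(), 2), "(flavor unchanged)")
```

A correlation report built on the first data set describes the past perfectly well; it just cannot say what will happen when the thing that actually mattered changes.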

 

Conclusion: “We Shape Our Tools, Then Our Tools Shape Us.”

Because AI and Big Data are poorly disposed to explaining the world, people (especially in business) are more likely to play to the tools’ strength: describing the world by finding correlations. People will follow the example of the AI and shy away from seeking understanding or meaning in the data.

In Horizon: Zero Dawn, the threat of AI was really its control of enormous robots that consumed life. I still don’t see the problems with AI as being quite so theatrical. Rather, I worry about the power of AI to limit the plasticity and creativity of the human mind. There is a real danger that AI will start teaching people to think more in terms of correlation and less in terms of explanation. Correlation is a very useful statistical tool, and a lot of projects can be accomplished with it. Explanation is not always necessary or appropriate, but it remains a very important tool in the toolkit of human thought.

 

Bonus Content: Privacy’s Meaningful Purpose

A few years ago, I dreamed up a concept of “meaningful privacy” to better define the discussion around the broad topic of privacy. I noticed that not every piece of data is equal. Some things are kept private because there is a concern of actual harm if the information is publicized; other things are kept private because of societal or cultural norms and traditions. Privacy is not an end in itself; we have it for the purpose of protecting information. However, different data has different value, so the value of privacy is relative, varying according to the data in question. One effect of this concept is to treat different breaches according to the type (or value) of the data involved.

There is a huge and illuminating problem with this idea of “meaningful privacy”: just because someone didn’t steal anything from your house doesn’t mean you feel comfortable about the break-in. Although privacy is not an end in itself, it is intrinsically upsetting when our privacy is violated. The biggest fear is the potential for future violations: just because no harm came of one violation, there is no guarantee about the next one, and a past violation reveals a vulnerability and thus the potential for more. With a diminished expectation of privacy comes diminished privacy, and privacy is of little use if it cannot be relied upon.