Regulating The Internet? Not the Tubes Themselves…

If Net Neutrality is an argument about economics (and federal administrative law), Content Regulation is an argument about ethics and culture.

Net Neutrality has become an old hobby horse for a lot of people. It gets far more attention than most telecommunications policy issues, even though questions like copper wire lines vs. fiber optic cables actually affect more people. The internet is generally united by the fact of its own existence: this is regulation at the highest level, determining the equality and/or equity of access to content. No one online is indifferent to the internet; the only debate about net neutrality is which policies are best for the consumer and the telecommunications marketplace (or, in the United States, the “telecommunications marketplace”).

But there is another layer of regulation that is quickly gaining attention. If Net Neutrality is about the form of the internet (its structure and broad organization), there is a growing need to consider questions about the regulation of the content of the internet. Over the years, the internet has been a vector for some amazingly good and amazingly bad actions by humans. The differences between these two kinds of regulatory concept are hard to overstate. Rather than comparing it to the difference between two video games, I would compare it to the difference between a video game and a tabletop game.

1) I’ve always been fascinated by the dawn of the computer age. My childhood was the tail end of a world in which homes did not have internet access. By the start of law school, everyone looked up famous cases and Latin phrases on Wikipedia during class (except for the people who did the reading the night before; they looked it up before class). I’ve often compared the early days of the internet to a kind of Wild West setting: a lawless frontier where fundamental questions about the mold of civilization were not yet settled. I thought most of those questions would be settled by 2015. We are not close to a consensus on rules. Indeed, we are still testing what types of rules are feasible or desirable.

Video games are literally made of rules: the computer code that constitutes the game itself. Tabletop games are made of… usually cardboard, or some kind of paper. (Occasionally, they have some plastic – or even metal if you got the collector’s edition.) This may sound like a silly or vacuous distinction, but it has important ramifications for the kinds of problems that can happen in a game, and the kinds of solutions that will (or won’t) be effective.

2) Lawlessness can lead to problems. This was perhaps not obvious until we had two decades of unfettered internet, but now we know. Free to do anything, people have tried very hard to do everything. Every app, platform, hosting site, game, or program online that gets big enough eventually starts to experience just about every problem type that humans can present. From intellectual property disputes to death threats, from fraud to manslaughter, the internet has given people access to criminal behaviors that past generations never had the opportunity to attempt. The unethical choices of both multi-national companies and village simpletons are available for repeated viewing.

In a video game, the code can sometimes glitch and create problems for players. The code can also execute perfectly, but there may be complaints about the design of the game itself (a level being too difficult, or some power or tactic being unbalanced). With some difficulty, players can cheat by actually breaking the code, but most games can detect this (especially in professional e-sports settings). In a tabletop game, anyone can cheat, the rules may be wrongly applied (or not applied at all), and all manner of chaos can ensue. DDoSing an opponent during a game might be a little bit akin to literally flipping the table during a game of Monopoly or checkers.

3)  YouTube’s takedown system is already an example of an effort to regulate content, and it already shows some of the challenges with instituting a content regulation system: people will find ways to game that system. Any system of regulation will have two negative outcomes: it will penalize the innocent, and it will be dodged by the guilty. The most you can hope for is that it will protect most of the innocent and it will penalize most of the guilty. The US justice system, even when working as intended, will sometimes produce undesirable results: a guilty person will go free, and an innocent person will go to prison. The hope is that this happens very infrequently.

The most common reaction to bad behavior online has been for authoritative parties to do nothing. The most common reaction by authoritative parties to actually do something has been to ban the bad actor. The most common reaction to this ban is to come back with a different username or account.

In video games, cheaters are often banned (if they are making the game worse for other players). But in tabletop games, people who ruin the game are simply not invited back: no one will play with them anymore. People might hang out with someone less if they behaved in a wildly unacceptable way during a casual weekend game of Risk or Werewolf. In a video game, bad behavior has very limited consequences; in a tabletop game, it can have lots of meaningful implications.


4) What would it look like to regulate content? Getting it wrong is easy, which is the primary reason that’s what’s going to continue to happen. Whether trying to penalize criminals or regulate behavior online, creating a fair and ethical system that consistently produces more good results than bad ones is difficult. One problem is that incentives are at odds: most platforms want to turn a profit, and if bad behavior yields a net gain, the platform needs a solution that will actually make more money than the current bad behavior does (plus the cost of implementing the remedy). Another problem is that platforms tend to think of regulating their content the way that most Americans think about regulation: as rules handed down by an appointed governing authority (or combination of authorities).



You can’t make people be good, but you can keep deleting the manifestations of their behavior on the internet: you can suspend or ban accounts, and eventually IP addresses; you can automatically censor strings of characters, and continually update the list of banned strings. These will continue to be the solutions offered, and they will continue to mostly fail even as they almost half-succeed.
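The blunt tools described above can be sketched in a few lines of code. The following is a minimal, hypothetical illustration (the `BANNED` list and `censor` function are invented for this example, not drawn from any real platform) of substring-based censoring, and of why it only half-succeeds: a trivial character substitution slips straight past it, which is why the banned list must be continually updated.

```python
import re

# Hypothetical banned-string list; real platforms maintain far larger,
# continually updated lists.
BANNED = ["badword", "slur"]

def censor(message: str) -> str:
    """Replace any banned substring with asterisks (naive approach)."""
    for term in BANNED:
        # Case-insensitive match on the literal term only.
        message = re.sub(re.escape(term), "*" * len(term),
                         message, flags=re.IGNORECASE)
    return message

print(censor("that badword again"))   # the listed string is caught
print(censor("that b4dword again"))   # a one-character substitution is not
```

The second call shows the structural weakness: the filter enforces only the rules it literally contains, so every evasion forces another entry onto the list, an arms race rather than a resolution.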

Over a decade ago, Lawrence Lessig asserted that behavior is regulated by four forces: the market, cultural norms, law, and architecture. It turns out that enforcing the legal type of rule in a digital space is very difficult. But cultural norms practically enforce themselves, and architectural rules are always already enforced. Market rules can be fickle, but persuasive. A lot of efforts to regulate content will fail because they hinge on the concept of legal enforcement.

The lack of rules and regulations is what made the internet a place where amazing things could happen. Without rules to stop imagination and creativity, people created art, solved problems, built positive communities, and enriched themselves and each other. In that same landscape: without rules to stop hate and anger, people created harassment and bullying, invaded privacy, ruined lives, occasionally killed people, and destroyed a lot of good in the world. Lawless frontiers are the best opportunity for the most beautiful, important, and inspiring expressions of humanity. They are also the best opportunities for the most despicable, dangerous, and damaging expressions of humanity. What the internet becomes will be decided—has always been decided—by what people bring to it.


The High Volume of Online Harassment

I. Robin Hood’s Legend Rings Loud in our Moral Ears

Five years ago, Mike Bithell released a simple game about personal identity and friendships. Expanding his scope from “Thomas Was Alone,” Bithell gives us new questions about social justice and law in “Volume.”  As protagonist Rob Locksley, a player navigates stealth-based challenges simulated by an artificial reality system called The Volume. He broadcasts his depictions of stealing from the homes and offices of powerful public officials (particularly one Guy Gisborne), who are corrupt and tyrannical.

What people often forget is that Robin Hood was, fundamentally, a criminal. We glorify him as an outlaw because he forcibly carved out social justice from authoritarian injustice. In some ways, he was the precursor to Thoreau’s vision of Civil Disobedience (though our evaluation of Robin Hood follows naturally from Thomas Aquinas’ definition of a law). We often find the moral justification of such acts in our conception of ends-based morality: any minor evil is justified if it is done in an attempt to stop a worse evil (and evil laws are undeserving of regard or obedience). But we aren’t comfortable with the idea of Robin Hood stealing from just anyone (especially if he doesn’t give that wealth to someone poorer than the victim of the theft).

In “Volume,” Locksley’s actions seem deliberately inclined to incite crime on a level that would raise a very close question about the extent of First Amendment protections (if the game were set in the US, instead of futuristic England). Interestingly, it doesn’t seem that Locksley’s behavior is prohibited by the Computer Fraud and Abuse Act or the Stored Communications Act (though more information about the technical details behind his operation might bring it within those statutes: he seems to have stolen his blueprint information while employed, which might be comparable to “United States v. Aleynikov,” which was prosecuted as a trade secret theft). However, Locksley does announce personal information about public officials, which likely falls under 18 U.S.C. § 119. Whether he has an “intent” to “incite the commission of a crime” would be a question of fact for the court.

“Volume” raises questions about online harassment and who is justified in attacking whom. The internet is often seen as a tool to “level the playing field” for business, the arts, social discourse, and other important social dimensions. It can also be a tool that makes it easy to attack any other human being, often without much risk of retribution. What is the difference between a level playing field and a frontier beyond the protective boundaries of society? Is a wild frontier an opportunity for freedom or an opportunity for predation? In the context of the current state of the internet, these questions are timely, to say the least.

II. Does Online Harassment only Occupy Digital Space, or Does it Fill a Real, Physical Volume?

Some citizens of the internet promote a panacea to cyberbullying: leave the website, or turn off the computer. It’s so easy that cavemen already did it! Exploration of this solution reveals that some harassment can be ignored, but some cannot. I have muted many strangers in online games over the years, and I have left more than a few chatroom and forum threads that made me needlessly unhappy. Sometimes, walking away works. However, these are consistently not the kinds of situations that make headlines. Cyberbullying that leads to teen suicide is often vindictive and deliberate, including unwanted contact by bullies. One does not simply log off from a SWAT team knocking down the door, or from the nonconsensual publication of intimate photographs. Some online harassment stays online, but very often it crosses from the virtual into reality.

When Rob Locksley is raided and arrested for his thinly-veiled anarchist broadcast, his defense is not “Well, if Gisborne doesn’t like me telling people how to rob his house, he should just not watch the broadcast.” That is the equivalent of attempting a defense against defamation by asserting, “If the plaintiffs don’t like reading bad things about themselves in the newspaper, they should just not read the newspaper.” I don’t know if anyone has ever tried using this defense, but I bet it has never worked in a US federal court.

III. Turning Down the Volume On the Discussion About Harassment

This week, organizers of SXSW Interactive announced they would cancel two panels on the subject of online harassment because they received online harassment over the panels. As event affiliates pulled their support in response to the cancellations, I have to wonder how thrilled the trolls must feel with their newfound power.

Technology is always dangerous. Technology lets people do things. It lets more people do more things, more efficiently, more often, more easily. If people want to do good things, technology is great: more good things will get done. Despite Lincoln’s plea, people do not always listen to the better angels of their nature. Indeed, many online participants seem determined to adhere to the moral edict: “we must be enemies.” The internet can create and foster relationships, but it clearly has as much power to destroy discourse as to facilitate it.

Should the promise of free speech protect those who want to silence the speech of others? If the question seems difficult in the abstract, it seems much simpler for us when we can narrow the question to a single case—one where we can easily identify the good guy and the bad guy.

IV. Hearing on a “Case By Case” Basis is Rarely the Case

When people use the phrase “case by case basis,” they often mean to indicate a flexible structure of evaluation, in which a variety of factors may be considered and weighed. Despite the use of the term “case” in law, judicial systems do not aim for the kind of flexibility that this phrase often suggests.

Courts evaluate cases to determine whether a certain law applies to a certain set of facts. Unless there is something in that law which provides for some kind of flexibility, or the weighing of countervailing circumstances, there is no flexibility in the court’s finding. Nor is there meant to be. Most of the judiciary is invested in building consistency, predictability, and reliability in the law: the same case should always come to the same result. (Hence, the task of many attorneys is to argue which cases are similar or different.)  When an action is justified by the particulars of the circumstance, individuals often recognize those particulars and make a positive moral evaluation. However, the legal question is often about the larger structure in which that action is carried out, because the law is concerned about the application of that structure to many thousands of other persons and circumstances.

Rob Locksley might have some moral justification for his broadcast (e.g., Gisborne is evil and dangerous and cannot be stopped any other way). But these justifications arise from the particulars of the circumstance. The legal question is whether other people should be permitted to make similar broadcasts about other targets. Without some kind of clause in the relevant laws about “excusability” or “justifiability” (such as those found in homicide laws to permit self-defense, or to recognize extenuating circumstances), the law cannot abide a good use of a socially impermissible kind of act. It does not matter that “in this case, there was a moral reason to do something illegal,” because moral reasoning is deliberately kept distinct from legal findings, and a law needs a specific clause for exceptions.

V. How to Blow Out a Speaker: Too Loud for Too Long

I am increasingly concerned about the problems of online harassment. It is moderately concerning that internet “hate communities” exist (with settlements in Reddit, 4chan [their capital city], and Tumblr), but it’s a phenomenon that has been noted and described before. What is far more concerning is that these communities are mobilizing their hatred to affect the world. I don’t know if hate is winning right now, but I’m not altogether prepared to rule out that possibility. I don’t know if hate and harassment will eventually destroy social media, or the internet entirely.

I do know that this volume of threat, violence, and malice is not sustainable. Very soon, governing bodies will have to decide whether to curtail speech in the name of preserving the common good. This is controversial enough, but the technical challenges of internet anonymity and instant broadcasting will make it even more difficult to craft and execute appropriate laws. However, I think this project is neither impossible nor dispensable.