Popping Caps in CS:GO and Cable Cutters

The big selling point for capitalism is usually “innovation and progress.” When folks compete in a free market, they try to make the best product at the lowest cost, and thereby win the customers and the money. The market rewards those who can find new ways to make a product more efficiently, or who can simply provide a better overall service. The winner is the one who can do the best job, and when your society is full of the best possible products and services, everyone is a winner.

But economists don’t always count on the alternative strategies available. Sure, you can try to win more customers by making a better product—or you can surround your competitor’s store with lava. That’s another way to win.

Cheese or Cheating?

In the CS:GO quarterfinals of DreamHack 2014, Fnatic was losing a match to LDLC. Fnatic stunned the audience—and even the shoutcasters—when they performed a previously unknown “boost” maneuver that allowed them to see most of the map. Using this vantage point, Fnatic went on to stage an amazing comeback and win the quarterfinals round. LDLC filed a complaint with DreamHack administrators, arguing that the specific “boost” performed was not legitimate. DreamHack administrators eventually agreed, and determined that the match should be replayed (Fnatic declined to replay the match and LDLC advanced to the semifinals round, eventually winning the tournament).

The legitimacy of the boost remains an extremely controversial topic. Some argue that players should be permitted to do anything that the game allows them to do, provided that they do not modify the actual code of the game. Others argue that the effect of this technique was clear evidence (to anyone familiar with the game) that it exploited a game flaw, and that Fnatic should have known its use would not be permitted by the tournament rules. (Specifically, the use of the boost made some wall textures transparent and the boost was considered “pixel walking.”) Along with a lot of implications for game developers and esport tournaments, a central question here is: what is the difference between cheese and cheating?

Cheese is the use of an unorthodox or surprising strategy or tactic to attempt to win a game in a way that avoids the standard methods of play. It is often considered bad manners or unsportsmanlike, but finds some level of tolerance in competitive game play. (Cheese strategies are prone to backfire badly, as they often require a very drastic “all-in” decision which leaves little room for recovery if not successful.) Cheating also avoids standard methods of play, but does so through a violation of established rules.

Data Capping or Kneecapping?

Comcast supplies cable as well as internet. Thanks to the smorgasbord of entertainment options available on the internet, people don’t need 17,000 cable channels when they want to engage in one of America’s most popular pastimes: doing “nothin’.” Many Americans are cancelling their cable subscription services (“cutting the cord”) because they can get the entertainment they need from the internet. Comcast might have noticed the drop in its cable subscriptions, because it started imposing data caps in some cities. The effect is that people can’t watch unlimited Netflix if they only get 100GB/month, so they have to go back to cable if they want to watch shows and movies. Comcast is using its power as an ISP to “leverage” its revenues as a cable provider—not by making its own product better, but by interfering with its customers’ ability to access a competitor’s product.
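To put rough numbers on that squeeze, here is a back-of-the-envelope calculation. The GB-per-hour figures are my own illustrative assumptions (not official numbers from Comcast or Netflix), but they show how quickly a 100GB cap evaporates for a streaming household:

```python
# Rough, illustrative math on how far a 100 GB/month data cap goes for
# streaming video. The GB-per-hour rates below are assumptions for the
# sake of the example, not any provider's official figures.

CAP_GB = 100
DAYS_PER_MONTH = 30

streaming_rates = {
    "SD (~1 GB/hour)": 1.0,
    "HD (~3 GB/hour)": 3.0,
    "4K (~7 GB/hour)": 7.0,
}

for label, gb_per_hour in streaming_rates.items():
    hours_per_month = CAP_GB / gb_per_hour
    hours_per_day = hours_per_month / DAYS_PER_MONTH
    print(f"{label}: ~{hours_per_month:.0f} hours/month "
          f"(~{hours_per_day:.1f} hours/day)")
```

Under these assumptions, a capped household gets roughly an hour of HD video a day before overage fees kick in—well short of “unlimited Netflix.”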

So, is Comcast bending rules or breaking them? There is no law against ISPs imposing data caps on customers. Comcast’s merger with NBC-Universal was approved by the Department of Justice. Comcast’s market conditions are not like capitalism’s ideal free market: Comcast has the incentive to interfere with entertainment-content providers, and it has very few competitors who would prevent it from doing so. The effect might not be the kind of innovation that capitalists hope to see from competition, but it’s still led to an innovative way to undermine competition.

I imagine that either the FCC or the DoJ will have to examine this behavior and decide whether this constitutes a violation of antitrust law or is unduly harmful to consumers. It seems easy to make the case that it undermines innovation and competition, but because these regulators have approved all of the conditions that caused this activity, it will require a lot of regulatory untangling to explain why the natural result of several legal decisions turns out to be illegal.

The Long Road to an Ever-changing Future to Return Again to the Past: A 14th Century Solution to the 21st Century Digital Renaissance Problem of Law and Economics

This is my longest post yet, so I’ll give a tl;dr: Copyright law is immovable and unavoidable, and we keep talking about it because everything around it changes constantly. Navigating copyright for the next century can’t look like successful navigation of the last century’s copyright, but it might look a lot like something from 7 centuries ago, and it might shift some of the focus from Copyright to its older sibling, Trademark.

 

I love the history of copyright because I can’t separate it from the history of technology. The core thrill of copyright law is the thrill of technological possibilities warping and toying with long-standing concepts of objects and economics.

It’s too bad I don’t have the graphic design tools to put a timeline up, with the legal progressions listed on one side and the technological milestones listed on the other side. But here’s a text version:

Laws and Philosophy:

The printing press was invented in 1440. The Statute of Anne was passed in 1710. Immanuel Kant wrote “On the Wrongfulness of the Unauthorized Publication of Books” in 1785. The US Constitution was written in 1787, with a clause establishing copyright as a matter of federal law, followed by the Copyright Act of 1790. In 1831, 1909, 1962-74, 1976, and 1998, the US government passed modifications to US copyright law. Throughout the 20th century, photographs, moving pictures, radio broadcasts, phonograph records, videocassette tapes, and internet search caches were each brought face to face with copyright law.

Technologies:

In 1844, Samuel Morse sent his famous first telegraph message. In 1878, a moving picture of a horse at a gallop was recorded. Guglielmo Marconi transmitted radio signals 1.5 miles in 1895. In 1926, Kenjiro Takayanagi created the first television receiver; Philo Farnsworth worked on an improved television in 1927-1928. Raymond Tomlinson sent the first e-mail on ARPANET in 1971. Tim Berners-Lee published the first web page in 1991. Microsoft released Windows Media DRM software in 1999; Napster also launched in 1999. YouTube launched in 2005. In 2011, a monkey took a selfie (and in 2014, that selfie became a copyright dispute).

In February of 2016, YouTube channels and personalities asked: #WTFU. (Which spurred me to write about copyright yet again.)

 

The Times are Always Changin’.

It’s a long history to arrive at such a contentious and unsettled point. Contract, tort, and property law are so much more settled and uncontroversial (particularly in the ways that affect average citizens in our daily lives). Why has copyright always been a recurring issue? Why does it seem to be getting less settled and stable, despite the increase in attention from jurists and scholars?

The problems are not going away because their two main causes aren’t going away. Technological progress isn’t going away. The drive of human creativity isn’t going away. But if we can move copyright law out of the 20th century, we might be able to reconcile law and art.

From the Assyrian tablet to Bob Dylan, human civilization has repeatedly confronted the distance between “old” and “new.” Generations are defined by the space between them that cannot be bridged. History bears out Marshall McLuhan’s observation that, particularly with regard to new technology, “we march backwards into the future.” But when we arrive in the future, we have to grapple with its residents and their customs and culture. There are always “The New Kids.”

The New Kids: Popcorn Time and Social Media “Prosumers.”

One fine afternoon last year, Gabe and Tycho talked about how terrible piracy was, and how funny it was that the ESA was going to allow Social Media Mavens to attend their E3 show alongside the press. The whole podcast is about these two topics, and the two of them seem unaware that the same theme actually permeates the entire discussion. These are two examples of how new media and technology shape culture in a way that dictates how established industries must change – two industries in particular. Though one of these industries was established 83 years before the other, they both face upheaval from the effects of the internet. The ubiquitous availability of devices that connect the world is the result of a collection of forces that has changed – and will continue to change – society entirely.

In their comic, “The New Kids” are ostensibly the “Prosumers,” set to arrive at E3 and replace the Old Guard, Traditional-Role Press. But there’s a layer built into this that Mike and Jerry don’t even know about: “The New Kids” are the technologies and media and cultural shift that change ESA’s thinking about who should be at E3. The New Kids are all of the reasons Popcorn Time can exist and even thrive, and why AMC needs to think very fast about how to avoid the fate of Borders Books. A society always has New Kids. Progress doesn’t happen without New Kids.

One Reason Copyright Discussions Never End: They Go the Wrong Direction

Copyright affects a lot of people on the internet, so it gets a lot of attention and discussion. Too much has already been said about copyright law – most of it is pretty unhelpful. Comparisons to the theft of physical objects only invite a hyperfocus on the distinction between copying and theft, which is just misunderstanding the issue in a different way. Arguing one misunderstanding against another will not lead to a better solution, just a different, less obviously-bad problem.

I think a better analogy is in spaying the goose that lays the golden* egg, or gelding some equally bounteous and mythical stallion. Analogies about terminating reproductive capacities are sometimes slow to catch on, for some reason—but maybe we could at least speak of taking an engine out of a car.

Ultimately, I think all of these analogies are really the wrong route. The most significant and salient point is lost in the effort to analogize: the way that digital media allows the manipulation of art is entirely unlike anything human civilization has seen so far. It just isn’t like tools or farm animals or agriculture or cars or anything else to which we are tempted to analogize. The digital replication and transmission of images, text, and sound is entirely unlike the things that have happened in the last 5 millennia (or 20 millennia) of recorded human history.

The internet, and the bundle of technological developments that have come with computing and telecommunication, fundamentally changes the potentials for human expression and connection. A fruitful discussion about copyright needs to consider how we got to this point, and where we can, must, and mustn’t go next.

 

Technology Giveth, and Technology Taketh Away.

Justice is a tricky thing, because it seems so obviously favorable and desirable when it’s on your side. Raw, unrestrained, unadulterated, unfiltered, concentrated justice is very difficult and very dangerous – much of the role of the legal and political process is to temper that justice with reason and mercy.

There is an important truth in this discussion which does not get mentioned often enough: through new possibilities in efficiency and distribution, technology made artists and entertainers wealthier and more famous than they could have been without those advances. There was once a time when an actor had to perform every single time the actor wanted to be paid. Now, the actor performs, and then enjoys the rewards of technology repeating that actor’s performance—hundreds of thousands of times, for millions of people. (Not to mention the role that technology plays in editing or reusing art!) No content creators complained when technology allowed them to make more money for less work, and few acknowledge the benefits they now reap from increased exposure and dissemination of their products.**

Reaping benefits from digital technology is no justification for the violation of copyrights, of course—but it is important to see the broad picture of how technology has interacted with artistic creation and distribution, and consider at least three important facets of this realization. First and foremost, no one wants to argue that the technology is inherently bad. Anyone concerned about the protection of their works has profited from the efficiency of some technology – even the same technology that threatens to harm them.

Second, it raises questions about what “fairness” really means in this scenario: as we move into the future, how should we evaluate the benefits for creators against the costs to the audience? Who ought to benefit from the powers of digital technology, and what harms and benefits should be considered? There is a very big picture here, and evaluations of fairness will change as one’s values narrow or expand the scope of one’s view. A good discussion can only happen when the whole picture is really considered.

Third, the power of new technology makes us consider what is now possible: the separation of fame from fortune. As I have discussed, the internet allows someone to become famous without becoming wealthy. In ages past, the opportunity to gain fame usually required a lot of money, but now, propagating art does not require the same mountain of resources that it once did. As we move toward new structures to support art and entertainment, fame will become a prerequisite for wealth.

 

The Way Forward: The Return to Patronage.

IndieGoGo launched in 2008. Kickstarter launched in 2009. GoFundMe launched in 2010.  Patreon launched in 2013. It’s harder to demonstrate mathematically, but I will make the wild assertion that game pre-orders have been more heavily promoted and used in the last 10 years than in the preceding 30 years. (I would love to know if pre-orders are proving more successful than DLC or MicroTransactions as a business model.)

When people pay the creator up front, the creator is less concerned about piracy, because the money is already guaranteed. Presumably, the farmer cares less about the goose that has already filled a basket with golden eggs than the one that is expected to eventually fill a basket.

In the world of patronage, reputation (sub-categories: hype, public relations, image, trust) is everything. Creators rely on their history of quality and integrity to secure funding for their next project. Creators who fail to deliver quality products, or who demonstrate shady or unsavory business practices, will suffer for their failings in their future endeavors. Some artists and companies are already carving out their reputations, through repeated successes, unfortunate failures, public statements, and choices.

Navigating copyright in the conditions of Digital Patronage will be shaped by a different power dynamic than the familiar, one-to-many, gate-kept, closely-owned media structures of the 20th century. Clutching at straws of hard-line, traditional copyright enforcement will not secure survival. Thriving will require earning trust through performance. Creators must give more consideration to next year’s potential earnings than to next quarter’s bottom line. They must create a functional, interactive, cooperative, collaborative relationship with their audience. The successful creators of the 21st century will be those who treasure their reputation, because they will rely on the good will of others.

… And reputation and good will are what Trademark Law is all about…


*“By establishing a marketable right to the use of one’s expression, copyright supplies the economic incentive to create and disseminate ideas” (p. 558) Harper & Row, Publishers, Inc. v. Nation Enterprises 471 U.S. 539 (1985)

 

** Those who manufactured physical products did not enjoy this same boon through the 20th century. Advances in 3d printing now give them a direct stake in the outcome of this transformation. There’s room for everyone at this party— I can’t wait for Physical Objects to show up with their partner, Patents!

 

Explaining Myself Through Mini Metro: Making Lots of Connections

I’ve always been a fan of the minimalist art style. As an art style and a category of interior design, it gets a lot of adjectives like “clean,” “crisp,” “pure,” “uncluttered,” and “bright.” I’d have to agree that Mini Metro is a game with a minimalist art style. But the aesthetic isn’t the only thing that appeals to me. The game mechanic is about connecting: making a metro system that is as efficient as possible as a city places ever-increasing demands on the network.

I love the concept of connection. I love to connect ideas and words, and I have spent most of my life studying and forming such connections. Careful, structured explanations of connection and disconnection are at the heart of the practices of both philosophy and law. Like most humans, I also cherish my close connections with others. At every level, and in every sense, connection thrills and amazes me.
Mini Metro is a game that is a design model for making connections— So it’s fitting that I use it as a model to connect the areas of law in which I am interested.

The railway network itself is the telecommunications infrastructure. The people that travel on the network are the entertainment content of the digital age: text, pictures, audio, movies, games—almost all of it subject to copyright law. The signage around train stations tells people about the places: it helps people make choices based on comparative information. I admit this is the biggest stretch in the analogy, but I’m comparing that to trademarks because of the informative function that aims to dispel confusion. And of course, there are safety concerns around all public transportation. Cybersecurity, by and large, is the safety structure for the internet: it is the area of law that tries to get everyone to navigate the system without tragic injury. And just as trains are regulated, this digital structure enjoys some oversight by the FCC (in the form of general regulatory rules) and FTC (in the form of consumer protection enforcement).

One of my favourite moments in Mini Metro is when a station appears on a line I have already built. I don’t really know if this is just the RNG-gods smiling down upon me, or if there is a definite structure and these moments are signs that I have designed optimally. In the effort to connect law and technology, sometimes a new device or idea appears that can force a re-drawing of the legal lines. Part of me wants to think that a law can be created with the future in sight, but the speed and direction of technological developments are so amazing that I don’t know if policy design can do better than hope for luck.

Mini Metro can be used to explain how my areas of interest relate to one another. It can also explain why I love these things, too. In the abstract, the game is about making it possible for people to go places. It is about how large-scale design decisions affect humble individuals. Technology and law are connected to each other—and both are connected to individual lives and to society, generally. The magic of connection is that it makes each individual node matter to the other nodes with which it connects. A single idea, or law, or device, or person—nothing is all that interesting, meaningful, or exciting until it is connected to other things in the world. Then both the connector and the connected affect and transform one another as they interact. In this way, the relationship between law and technology is like a relationship between people. Whether they are friends or enemies, they will shape each other because they are connected.

 

I never said I was super good at the game. But it's still fun.

Just trying to help the Parisians get through the day.

Darkness In The Dungeon of the Mind

Unknowable Darkness

Humans are instinctively afraid of the dark because it hides – indeed, it is – the unknown. Lovecraft’s mythos is horrific because of the themes of the unknowable and incomprehensible. His most terrifying monstrosities are not horrible in their descriptions, but in their defiance of description. As subjects for the unspeakable, Lovecraft included entities from unknowable dimensions, beings of size and power that operated on cosmic scales and geologic time frames. The heart of his weird fiction was the powerlessness and smallness of humanity in comparison to the size and age of the universe. The difference in scope is highlighted by his use of characters of science and academia – those who focused on intellectual pursuits, those best suited to understanding, describing, and explaining anything in existence – and their utter inability to psychologically approach the weirdness confronted in the tale.

We can get through life because we know that Lovecraft’s writings are fiction, and the threat of Cthulhu’s or Azathoth’s indifference evaporates when we turn off the screen of our e-reader. But there is something as unfathomably vast, as untouched by scientific comprehension, and as potentially horrifying that we live with each day: the human mind.

 

How To Explore The Darkest Dungeon

Darkest Dungeon is a mechanically and graphically simple game. It is a roguelike dungeon-crawler (one of the oldest genres of computer games), with turn-based combat and a small world (though each visit to one of the five locations is a different procedurally-generated iteration). The general structure of the game is not a novel concept: you arrange, equip, and direct a party of adventurers through an unexplored region in which you randomly encounter monsters, traps, or treasure. True to the game’s name, the dungeons are always dark, and so the party must bring a supply of torches in order to see. The level of the light provided by the torch is one major factor that will impact the stress your characters suffer. As the stress of your characters builds, they may become overwhelmed and develop traits which undermine their performance on the adventure.

I’m not very plugged into the survival horror genre, but I remember thinking that the sanity bar in “Amnesia” was a clever idea because of the effect it had on actual game play: blurring the screen and making running genuinely more difficult. Developers have to make careful decisions about which factors influence game play, because too many factors will make the game confusing and disorienting. Because game play is, categorically, what sets games apart from other media, the factors which underpin game play are core to any careful analysis of a game.

 

Stress and Psychological Damage As a Game Play Mechanic

In Darkest Dungeon, characters overwhelmed by stress will develop a random trait (e.g., Paranoia, Hopelessness, Fearfulness, etc.). This will cause them to have a chance of being uncontrollable—they might refuse to act during combat or act of their own (impaired) volition. The brilliance is making the mental state of the character directly impact game play: rather than telling me that my archer is overwhelmed by the descent through gloomy and perilous ruins, the game shows me that my combat-trained adventurers cannot connect their will to their actions via their mind—that their mind cannot function in that expected capacity.
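As a toy model of that mechanic (the thresholds, probabilities, and trait names here are my own illustrative assumptions, not Red Hook’s actual values), the design can be sketched in a few lines:

```python
import random

# Toy model of Darkest Dungeon's stress/resolve mechanic. All numbers
# and trait names are illustrative assumptions, not the game's real data.

STRESS_BREAK = 100      # stress level that triggers a resolve test
VIRTUE_CHANCE = 0.25    # assumed minority chance of a heroic outcome

AFFLICTIONS = ["Paranoid", "Hopeless", "Fearful", "Masochistic"]
VIRTUES = ["Stalwart", "Courageous", "Focused"]

class Hero:
    def __init__(self, name):
        self.name = name
        self.stress = 0
        self.trait = None

    def add_stress(self, amount, rng=random):
        """Accumulate stress; at the breaking point, roll for a trait.

        Most of the time the roll yields an affliction that impairs the
        hero; a minority of the time it yields a virtue instead.
        """
        self.stress += amount
        if self.stress >= STRESS_BREAK and self.trait is None:
            if rng.random() < VIRTUE_CHANCE:
                self.trait = rng.choice(VIRTUES)      # rare heroic surge
            else:
                self.trait = rng.choice(AFFLICTIONS)  # usual breakdown
        return self.trait
```

The point of the sketch is the design decision, not the numbers: the dice that decide whether your hero breaks or rallies are rolled by the game, not the player, which is exactly what makes the moment feel like a mind slipping out of your control.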

My most horrifying moment in this game was when my warrior cut himself with his own sword during combat, while madly raving about his need to bleed. Maybe it struck a dormant chord with my memories of people I knew in high school who struggled with self-mutilation as a coping effort for their depression and anxiety, but I stared dumbly at my screen for long moments after that turn. Stunned and aghast, I suddenly understood what the game was actually about: the struggle with one’s own mind in coping with a terrifying, hostile, dangerous world. This game is about watching adventurers break and falter as stress overwhelms them, and trying to save them from the total destruction of succumbing to psychological injury while pressing toward a noble objective.

 

Darkest Dungeon is an Exploration of Something Universally Terrifying: Our Human Psyche.

In the game, you are summoned to your ancestral estate, bequeathed to you by a relative who explored forbidden depths beneath the grounds. As you explore an ancient estate, festering with a recently unleashed and mysterious evil, the real exploration is of the corners of the human mind. As stress illuminates those recesses of impermissible thought and taboo contemplation, characters are set upon by their own inexplicable urges, vices, and fears. However, in an inspired and inspiring design decision, there is occasionally a heroic reaction to the overwhelming stress. A minority of the time, when a character is overwhelmed by the stress of their circumstance, a positive trait (in place of a negative one) bursts forth, imbuing that character with additional power and capacity to carry on.

The exploration of the mind mirrors the exploration of the dungeon: as you explore the unknown, you are likely to encounter danger and harm, but occasionally, you find treasure. Stress becomes the torch by which you discover the parts of yourself that otherwise remain hidden and unknown.

Darkest Dungeon is an impressive example of a game that incorporates mental health directly into the core game play and story without being either patronizing or pitying about it. Indeed, the entire mechanism seems so obvious: would repeatedly wandering into dangerous, scary places have a noticeable impact on your mental functioning? Probably! Yet most RPG-adventures and dungeon-crawlers feature heroes who are impervious to fear or stress, so this kind of interaction is scarcely considered in most games.

 

Dismissing Horror With the Light of Science

Andrew Scull documented the shift in Western social approaches to mental illness over the course of the last few centuries. The general shift was from the view of mental illnesses as supernatural and unknowable to scientific, psychological, and neurobiological. As science began to understand the brain, mental illness became a thing that could be understood and addressed. This progress continues to steadily lessen the fear and stigma around depression, OCD, schizophrenia, autism, and other diseases and conditions whose bearers would previously have been ushered out of functioning society entirely. As afflictions of the mind are understood more as chemical imbalances or neurological disconnections, rather than as demonic possessions or as indications of a subhuman status, the terror of the unknown recedes, as shadows from a torch.

In Darkest Dungeon, keeping your torch at the brightest level minimizes the stress your party absorbs. As much as darkness imposes fear, light invites confidence. Just as Lovecraftian horrors would lose their terror if they could be seen, understood, or described, mental illness is losing its own grip of social terror as science begins to see, understand, and describe the tremendously complex organ that is the human brain.

May your torch burn bright.


How You Play The Game Doesn’t Matter If You’re Losing the Sport.

This year started with the gaming news that Blizzard bought MLG. With Overwatch in beta, Hearthstone and Heroes of the Storm enjoying steady, casual game play, and Warcraft capping off its gaming legacy with a transition to a different medium, Blizzard is in an interesting place to double-down on its efforts to dominate the eSports market.

I’m skeptical of the prospect of Blizzard creating the “ESPN of eSports,” of course. The NFL doesn’t own ESPN. If they did, who would get prime air time when football and baseball season overlap? Blizzard is incentivized to promote their own products over the products of their competitors. I don’t think there’s anything wrong or shameful about that, but it should be pretty obvious that there is a glaring conflict of interest in Blizzard prioritizing between tournaments for Overwatch and Dota 2 (owned by Valve).

 

Games: Sports :: Art: Entertainment. (Remember the SAT? Wait, they removed the analogy section?)

I’ve written a little about the distinction between art and entertainment before. While they can overlap, they really have different goals: art wants to explore or express something about the world, while entertainment wants to sell something (usually itself, sometimes also a sponsor). Games want to be played; sports want to be won.

Games* are meant to be fun in themselves, and they are played well whenever they are enjoyed by the player. Features such as scores and objectives can orient the player within the game, and provide context and direction, but a game need not rely on these features to achieve delight. Playing a game is, at its core, an aesthetic experience**, and how well you are playing can be judged largely by the extent to which you are aesthetically engaged.

Sports might be fun to play, but their raison d’être is “play to win.” The joy of sports is derived from victory, not from the mere act of competing in them. Features like scores and objectives are core to the experience, and their absence would be disorienting and entirely destroy the endeavour. The activity itself doesn’t need to be enjoyable, and there are right and wrong ways to play. A good sport might also function as a good game, but it must function as good entertainment in order to be successful. A stronger delineation between games and sports would allow developers to understand and focus on the proper goals and objectives.

 

2016: The Year of the Mouse?

With the year starting amid some esports hype, and with steady growth in esports over the last 5 years, will this be the year of esports? No. It will be a year of esports, but not the year of esports. The same barriers for esports that Extra Credits noted almost 4 years ago still stand, and an ESPN of esports won’t solve those problems. Indeed, a true ESPN of esports (with even half of that level of cultural penetration) can only be possible after overcoming most of those barriers. The photo at the start of The Guardian’s article is pretty telling: the photo clearly captures a massive logo that reads “ALL-STARS,” but the caption calls it the World Championship finals in Paris (never mind that the Paris finals were held theatre-in-the-round style, which the photograph clearly does not depict). It’s a simple, harmless error, but I think it reveals two things about the mainstream relationship with esports at the start of 2016: 1) no one knows enough about it to catch simple errors obvious to anyone “in the know,” and 2) no one cares enough about it to do simple fact-checking. Esports will grow this year, but I’m not sure how much or in what ways.

EDIT:

After thinking a little more about it, I need to add something: Blizzard has some incentive to promote any esport, because esports are still relatively new. The NFL doesn’t get as much value from promoting other sports because most people already know about traditional sports, which have over a century of history. Perhaps Blizzard could promote competitors’ games on the theory that “a rising tide lifts all ships.”

*Philosophers of Language have talked about the difficulty in defining a “game.” Wittgenstein also outlined a theory of language that treats language as a game, in which words are pieces within the game, and their meanings are the moves a piece can perform.

** Kant’s philosophy of aesthetics centers on the concept of “play” between the mental faculties of reason and imagination.

“Come At Me, Copyright Bro” –Google Legal Team, 2015

Making Trades

Most competitive games involve the concept of trading. The idea of a trade is to risk some of your resources in order to deprive your opponent of some of theirs; it is a smaller skirmish within the overall game. The goal is to lose less than your opponent, putting you ahead on balance. In most games, successful trading requires a proficiency that comes with study and experience: knowing what both you and your opponent are capable of, and thereby knowing what will happen. The best players are not surprised by the outcomes of their choices; they know before they act how the exchange will unfold. When chess masters think about future moves, they are performing this kind of trading calculus.
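The trading calculus described above can be sketched as a simple expected-value comparison. This is a hypothetical illustration of the idea, not drawn from any particular game or real rulebook; the function names and example numbers are my own:

```python
# A minimal sketch of "trading calculus": decide whether to force an
# exchange by weighing what each side stands to lose. All values here
# are hypothetical illustrations.

def expected_trade_value(p_win: float, my_cost: float, opp_cost: float) -> float:
    """Expected net gain from forcing a trade.

    p_win    -- probability the exchange goes your way
    my_cost  -- resources you lose if the trade goes badly
    opp_cost -- resources your opponent loses if it goes your way
    """
    return p_win * opp_cost - (1 - p_win) * my_cost

def should_trade(p_win: float, my_cost: float, opp_cost: float) -> bool:
    # Take the trade only when its expected value is positive,
    # i.e. you expect to lose less than your opponent.
    return expected_trade_value(p_win, my_cost, opp_cost) > 0

# Risking a piece worth 3 to win a piece worth 5, with 60% confidence:
# 0.6 * 5 - 0.4 * 3 = 1.8, so the trade is worth taking.
print(should_trade(0.6, 3, 5))
```

A master’s advantage, on this sketch, is that their estimates of `p_win` and the costs are far more accurate than a novice’s, so the outcomes of their trades rarely surprise them.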

Attorneys make the same kind of considerations. Particularly, those who litigate (though many attorneys don’t) use their knowledge and experience to predict the outcomes of various legal strategies. For a master attorney, the outcomes of legal choices are as unsurprising as the outcome of a chess move is for a chess master. Good attorneys don’t pick legal battles wildly or whimsically. They know in advance what the risks are. They know the possibilities and probabilities, the parameters and requirements.

I have no doubt that YouTube’s new fair use policy comes to us after many, many hours of careful thought by many legal experts. It is bold and brazen, but calculated and deliberate. It is not, strictly speaking, a defiance of a federal law. But this new policy does cast aside some of the protections offered by the law.

Picking A Skirmish

The Digital Millennium Copyright Act (DMCA) covers a wide range of topics, including questions of copyright infringement on the internet. To incentivize websites to host material, as well as to incentivize their cooperation with the policing of copyright infringement, the DMCA offers “Safe Harbor” protections to those websites that promptly take down those materials suspected or accused of copyright infringement. The system is called “notice and take down”: When someone gives a website notice about infringing material, the website simply needs to take it down. This is why so many US-based companies are quick to take down content when a copyright claim is filed: the compliance of the host protects them from a lawsuit for the copyright infringement.

For many years, YouTube took advantage of the protections offered by this law. When a copyright infringement claim was filed, YouTube promptly removed the content in question. It could often be uploaded again, with the content uploader asserting that the video did not infringe a copyright. The dispute would then be between the user and the [self-proclaimed] content owner, Google having excused (or protected) itself.

Google’s new policy is to reject some copyright complaints: specifically, those in which Google thinks that the video does not infringe copyright and is protected by the fair use doctrine. What sounds most impressive is that Google will even defend legal claims against those videos in court, for up to one million dollars in legal costs. That isn’t actually as impressive as it sounds, because Google forfeits the Safe Harbor protections when it refuses to remove disputed content. In this act of defiance, Google is on the hook for copyright infringement as though they had been the ones to upload the video.*

The DMCA does not give license to content hosts to make judgments about fair use. That remains the purview of the courts. Google is relying on their legal team’s expertise to predict how a court would rule regarding a video. If they are wrong in this prediction, they could lose rather badly.

Uncertain Factors, Unpredictable Trades

The fair use doctrine is not extremely well-developed. American law schools require all students to pass certain courses, and many of these core courses** feature cases that are over 100 years old. One of the most famous cases in Contract Law is from 1854 (and from an English court, no less). The most famous cases on Fair Use are from the 1980s and 1990s, and they don’t give a thorough, detailed explication of this legal concept. They only apply fair use to some specific sets of facts.

Fair use is far less certain a legal doctrine than the two-hundred-year-old (or seven-hundred-year-old) precepts that guide areas of law such as property, tort, or contract. This makes it harder to predict the outcomes of taking some cases to court. There are no masters for making “trades” with fair use in court. It hasn’t gone to court enough times, with enough different cases, for anyone to know exactly what it’s capable of.

This is an incredibly exciting challenge that Google has thrown down. They have stepped out of their sanctuary. They have taken up a weapon that is uncertain and largely untested. They are risking substantial damage if they lose. And they really didn’t have to do any of it. They could have stayed safe and sound, risk-free, and followed the pattern of notice and take down. They didn’t need to change anything. I can only guess what might motivate them to make the world a better place for others. Perhaps Google decided that if they are going to control the world, they want it to be a world more worthy of their control.

(Or maybe Google is throwing their weight behind fair use now that it is the next defense for Java APIs, after a ruling earlier this year that Oracle can copyright the structure, sequence, and organization of an API.)


*A little over-simplified to avoid a discussion about the difference between joint and several liability.

**Copyright law is not a required course, and isn’t always even offered as a full subject by itself—making fair use a small part of a lesser-known area of law.


Individuals or Groups in Fallout?

Bethesda released Fallout 4 this month. It’s the sequel to one of my all-time favorite games, so I’ve talked about it with most of my friends. As with books and movies, people often ask “so, what is the game about?” I think there are two general ways to answer this question for the Fallout games, and which of those two choices you pick may reveal something important and fundamental about how you see the world. Like seeing glasses of water as half-empty or half-full, some people tend to see Fallout (and the world) as about individuals, while others understand the game and society in terms of the relationships between groups.

1) Wasteland v. Shelter

The entire Fallout universe is set in an alternate future Earth whose history diverges from our timeline around the 1950s. In the Fallout universe, dwindling natural resources ultimately lead to global nuclear annihilation in the year 2077, though the happy-go-lucky, hokey culture of iconic 1950s middle America never went away. Pockets of the population survived the nuclear holocaust in large underground fallout shelters, called Vaults. In each of the four main Fallout games, the player controls a character who emerges from one of these Vaults to explore the desolate American ruins (called “the Wasteland”) and navigate the emerging post-apocalyptic civilization.

My own interpretation is that a Fallout game is “about” an individual: the player’s character, who emerges from the Vault and explores the Wasteland. The alternate understanding is that the games are about a post-nuclear-war America, and the societies and choices that might exist there. I think that the design (e.g., the isolation implied by the player character’s generic identifier) and the mechanics of the game (a first-person RPG) focus the game on the player, rather than the world. The contrast with another Fallout game, Fallout Shelter, makes this distinction even more clear.

When project lead Todd Howard announced Fallout 4 at this year’s E3, he also announced a simple game for tablets and phones: Fallout Shelter. This game allows a player to design, build, control, and manage a Vault of their own. It requires players to optimize work assignments within the Vault, balance resources, manage growth, and face disasters. In contrast, Fallout 1-4 require a player to create and manage a single character, then move that character through the Wasteland to find supplies, fight enemies, and make individual decisions in interactions with non-player characters. Other game design elements also emphasize the different focuses of Fallout and Fallout Shelter. For example, Fallout Shelter continues after a Vault Dweller’s death, whereas a game of Fallout ends when the player’s character dies.

2) Kierkegaard v. Hegel

It can be difficult to talk about some things that are extremely basic to our experience. We don’t stop to think about how we could describe the primary colors or define some commonly used word, much less explain three-dimensional space or what it feels like to feel. So, most people don’t reflect on some of the axioms they use in interpreting the world. Luckily for us plebeians, it is the business of philosophers to ask questions that “normal” people never get around to asking.

Soren Kierkegaard is known as the father (or grandfather) of existentialism, as well as one of the most prolific Christian theologians. He focused much of his philosophy on a concept of “subjectivity,” or “inwardness.” While we think of “subjective” as a term to describe something uncertain, indeterminate, or disputable, Kierkegaard rarely means anything like this. His use of the term refers to individual experience and existence—the things that no one else can feel or be on another’s behalf. (See also: phenomena, ownmost) For some people, this is the fundamental operation of the world: reality is only ultimately understood as individual subjective experience. This is not to say that the rest of the world does not exist, but only that the world is understood as an individual experiencing that world. This might be more clearly understood by a comparison to an alternative view.

G.W.F. Hegel is one of the most influential philosophers in history (just look at the last paragraph of his intro on Wikipedia!). His ideas still influence most of the humanities and social sciences, and in turn influence public policy and law. His most enduring ideas—the thesis-antithesis-synthesis dialectic, the master-slave dialectic, and other ideas associated with the End of History—all find their basis and application in a particular understanding of the world. Hegel understood the world in terms of broad groups and populations. He paid more attention to nationalities and cultural groups, but Karl Marx would pick up his ideas with a sharper focus on economic classes, and 20th- and 21st-century branches of feminism similarly rely on understandings of groups of sexes, genders, races, and so forth. Whatever the type of group, criteria of classification, or mode of organization, this view sees the world as sets of people. What matters, fundamentally, are the structures and systems that guide the interactions and relations of these groups.

Except in the most extreme cases, neither of these frameworks aims to deny the existence of the other. Hegel’s view of people as masses and classes does not deny that individual humans exist or have experiences. Despite his more polemic and attention-grabbing assertions, Kierkegaard acknowledges that large groups of people may have enough in common to be grouped together, at least for the purpose of discussing issues at a large scale. However, these two base concepts are so different that those who hold them can have trouble understanding one another, and apparent conflicts between them can be frustrating for both sides.

3) War Never Changes, Even on the Internet

I’ve seen a few disagreements in cyberspace. (I’ve seen them in physical reality, too; the same precepts apply, but arguments are easier to dissect and consider when they are recorded in unaltered writing… because logos.) Particularly on subjects of social or political concern, parties can reach an impasse which I think stems from the same kind of difference that I find between Kierkegaard and Hegel.

Many disagreements feature an assertion of some fact about the world (in the form of statistics or data about large groups, large scales, or general systems and structures), which finds a response in the form of a personal anecdote (a friend’s experience, a single individual counter-example, a personal story, etc.). This personal experience appears to contradict the first assertion, and both parties reaffirm their positions without exploring the difference in the kind of evidence offered. Progress is rarely made, and each combatant will leave the fight feeling certain of their own victory, and annoyed that their opponent was too stupid to even understand such a clear and convincing outcome.

One significant effect of these differing axioms is to determine what kinds of things count as valid evidence. For those associated with Hegel’s position, most single, individual experiences can be dismissed as statistical outliers or generally poor bases for public policy decisions. However, for those who embrace Kierkegaard’s understanding, individual experience is of paramount importance in shaping individual thought and opinion; larger scales may certainly be considered, but can never replace personal, subjective experience.

4) Believing in the Atom: Quantum Mechanics v. Classical Physics

In Fallout, there is a religion that believes in an inherent divinity of the nature and structure of the atom. Adherents to this sect view nuclear devastation as an act of creation rather than destruction, and see nuclear radioactivity as a source of both physical and spiritual power. The fact that atoms comprise all matter and can be split to unlock tremendous energy inspires awe and wonder for these worshipers. While that is awesome, I find it more amazing that the particles which make up atoms obey entirely different laws than the objects which the atoms themselves make up.

It seems self-evident that all of the physical world ought to be governed by the same set of laws. We expect all objects, from apples to planets, to behave the same way everywhere in the universe. The fact that sub-atomic particles don’t behave like planets is a vexing concern for many scientists (even those not spending their lives trying to resolve this contradiction by developing String Theory). What seems to annoy scientists the most is that each set of laws clearly works in its respective domain. Neither disproves nor overpowers the other, yet they remain incompatible. In the same way, viewing humanity from either the individual perspective or from the scope of large populations seems functional, and neither viewpoint disproves or obliterates the other.

I don’t know whether it’s even the right question to ask, whether Kierkegaard or Hegel was “right.” Maybe that’s the wrong way to think about the matter. But I think understanding these two approaches brings coherence to a lot of apparent noise in internet discussions, and makes comprehensible what might otherwise just appear to be deranged ranting. It will be a lot of work to bring these two worldviews into harmony, but just recognizing them might be a very fruitful first step.