Extra! Trademarks? What Telecommunications Can Learn From IP Law To Combat “Fake News”

Formal news broadcasts play a role in a lot of story-focused games: Deus Ex: Human Revolution centers on a news broadcaster, and the game culminates in the player’s decision of which news story to broadcast. StarCraft II’s Terran campaign lets the player explore the storyline by watching news broadcasts. Borderlands 2 lets players follow news broadcasts from different sources as the main storyline progresses. In each case, there is a gap between the news story that is presented and the information the player has. Whether the news is explicitly propaganda, merely biased, or simply missing information, each game underscores the fallibility of the news as a primary source of information.

Subjectivity, bias, and context can change the interpretation of a news story. Words themselves can also be subject to changes in context and intent. The term “Fake News” gained fame when Trump used it to accuse CNN of, essentially, being left-of-center. However, it has more recently been used to refer to Russian hackers spreading propaganda and disinformation on Facebook and other social media under the guise of unbiased, traditional-style news media.

Trademarks: How We Know What is From Whom

The goal of trademarks is to reduce consumer confusion by establishing a clear connection between goods/services and the manufacturer/provider. This consumer knowledge is considered essential to a healthy marketplace and, in many cases, to consumer safety. Applying the same fundamental concepts of trademark law to telecommunications law might have a positive effect on combating certain forms of so-called “Fake News.” By requiring each news source to register digital certificates with social media platforms, consumers could be more confident in the source of their information. The information may still carry the biases of the institution, editor, or author of the news piece, but the consumer would be aware of that possibility from the initial contact with the article or video. Just as trademarks do not enhance the quality of a good or service, digital certification would not ensure high-quality, unbiased news containing only perfect information. Similarly, even under robust trademark law, counterfeiting and other violations still occur. There would be a risk of various hacking attacks that would allow “Fake News” to be published under the name of a news source that did not actually produce it. However, such a hack can be addressed and corrected in ways that are not possible in a news marketplace without identifying information for news releases.
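To make the certificate idea a bit more concrete, here is a minimal sketch of how source verification could work, assuming a hypothetical registration scheme. The outlet, the keys, and the platform-side check are all invented for illustration; no existing platform API is implied. It uses an ordinary digital-signature library in Python:

```python
# Hypothetical sketch: a news outlet signs each article with its private key,
# and a platform verifies the signature against the outlet's registered public
# key before attaching a "verified source" label. Names and flow are invented.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The outlet generates a key pair once; the public key is registered with the platform.
outlet_private_key = Ed25519PrivateKey.generate()
registered_public_key = outlet_private_key.public_key()

article = b"Headline and body of the story go here."
signature = outlet_private_key.sign(article)  # created by the outlet at publication time

def platform_accepts(article_bytes: bytes, sig: bytes) -> bool:
    """Platform-side check: does this article really come from the registered outlet?"""
    try:
        registered_public_key.verify(sig, article_bytes)
        return True
    except InvalidSignature:
        return False

print(platform_accepts(article, signature))                     # True: label as verified
print(platform_accepts(b"tampered or spoofed text", signature))  # False: no source label
```

The verification step does nothing to judge the quality or bias of the article; like a trademark, it only tells the consumer who it came from.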

Trademark law may even be brought to bear directly on the Fake News problem. News outlets often develop their own styles and designs that remain consistent over time, eventually becoming associated in the minds of consumers with the outlet. This could be interpreted as trade dress, and a case could be made that it is a type of intellectual property subject to legal protection. Enforcement would likely be very difficult against foreign, anonymous violators, but creating a culture of more regimented, clearly defined news outlets would help consumers spot outliers that don’t fit the known news providers—and treat such new providers with appropriate scrutiny and supplemental research.




Regulating The Internet? Not the Tubes Themselves…

If Net Neutrality is an argument about economics (and federal administrative law), Content Regulation is an argument about ethics and culture.

Net Neutrality is becoming an old hobby horse for a lot of people. It gets a lot more attention than most telecommunications policy issues. Even though questions about copper wire lines vs. fiber optic cables actually affect more people, everyone online is united by the fact of the internet’s own existence. Net Neutrality is about regulation at the highest level, determining the equality and/or equity of access to content. No one online is indifferent to the internet—the only debate about net neutrality is which policies are best for the consumer and the telecommunications marketplace (or, in the United States, “telecommunications marketplace”).

But there is another layer of regulation that is quickly gaining attention. If Net Neutrality is about the form of the internet (its structure and broad organization), there is a growing need to consider questions about the regulation of the content of the internet. Over the years, the internet has been a vector for some amazingly good and amazingly bad actions by humans. The differences in the kind of regulatory concept at play are hard to overstate. Rather than comparing it to different video games, I would compare it to the difference between a video game and a tabletop game.

1) I’ve always been fascinated by the dawn of the computer age. My childhood was the tail end of a world in which homes did not have internet access. By the start of law school, everyone looked up famous cases and Latin phrases on Wikipedia during class (except for the people who did the reading the night before; they looked it up before class). I’ve often compared the early days of the internet to a kind of Wild West setting: a lawless frontier where fundamental questions about the mold of civilization were not yet settled. I thought most of those questions would be settled by 2015. We are not close to a consensus on rules. Indeed, we are still testing what types of rules are feasible or desirable.

Video games are literally made of rules: the computer code that constitutes the game itself. Tabletop games are made of… usually cardboard, or some kind of paper. (Occasionally, they have some plastic – or even metal if you got the collector’s edition.) This may sound like a silly or vacuous distinction, but it has important ramifications for the kinds of problems that can happen in a game, and the kinds of solutions that will (or won’t) be effective.

2) Lawlessness can lead to problems. This probably wasn’t obvious until we had two decades of an unfettered internet behind us, but now we know. Free to do anything, people have tried very hard to do everything. Every app, platform, hosting site, game, or program online that gets big enough eventually starts to experience just about every problem type that humans can present. From intellectual property disputes to death threats, from fraud to manslaughter, the internet has been a way for people to discover criminal behaviors that past generations never had the opportunity to access. The unethical choices of both multi-national companies and village simpletons are available for repeated viewing.

In a video game, the code can sometimes glitch and create problems for players. The code can also execute perfectly, but there may be complaints about the design of the game itself (a level being too difficult, or some power or tactic being too strong or too weak). With some difficulty, players can cheat by actually breaking the code, but more and more games can detect this (especially in professional e-sports settings). In a tabletop game, anyone can cheat, the rules may be wrongly applied (or not applied at all), and all manner of chaos can ensue. DDoSing an opponent during a game might be a little bit akin to literally flipping a table during a game of Monopoly or checkers.

3)  YouTube’s takedown system is already an example of an effort to regulate content, and it already shows some of the challenges with instituting a content regulation system: people will find ways to game that system. Any system of regulation will have two negative outcomes: it will penalize the innocent, and it will be dodged by the guilty. The most you can hope for is that it will protect most of the innocent and it will penalize most of the guilty. The US justice system, even when working as intended, will sometimes produce undesirable results: a guilty person will go free, and an innocent person will go to prison. The hope is that this happens very infrequently.

The most common reaction to bad behavior online has been for authoritative parties to do nothing. When authoritative parties do act, the most common reaction has been to ban the bad actor. The most common reaction to such a ban is to come back with a different username or account.

In video games, cheaters are often banned (if they are making the game worse for other players). But in tabletop games, people who ruin the game are just not invited back. No one will play with them anymore. People might hang out with someone less if they behaved in a wildly unacceptable way during a casual weekend game of Risk or Werewolf. In a video game, bad behavior has very limited consequences. In a tabletop game, bad behavior can have lots of meaningful implications.


4) What would it look like to regulate content? Getting it wrong is easy — which is the primary reason that’s what’s going to continue to happen. Whether trying to penalize criminals or regulate behavior online, creating a fair and ethical system that consistently produces more good results than bad ones is difficult. One problem is that incentives are at odds: most platforms want to turn a profit, and if bad behavior yields a net gain, the platform needs a solution that will actually make more money than the current bad behavior (plus the cost of implementing the remedy). Another problem is that platforms tend to think of regulating their content the way that most Americans think about regulation: rules handed down and enforced by an appointed governing authority (or combination of authorities).



You can’t make people be good, but you can keep deleting the manifestations of their behavior on the internet: you can suspend or ban accounts, and eventually IP addresses. You can automatically censor strings of characters and continually update the list of banned strings. These will continue to be the solutions offered, and they will continue to mostly fail while they almost half-succeed.
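As a minimal sketch of why string-banning only ever half-succeeds (the banned list and messages below are invented for illustration), consider how little it takes to slip past an exact-match filter:

```python
# Hypothetical banned-string filter: it reliably catches the exact terms on
# the list, and it is trivially evaded by small substitutions or spacing.
BANNED = ["badword", "scamlink.example"]

def is_blocked(message: str) -> bool:
    """Block the message if any banned string appears verbatim (case-insensitive)."""
    text = message.lower()
    return any(term in text for term in BANNED)

print(is_blocked("Check out scamlink.example right now"))   # True  - caught
print(is_blocked("Check out scaml1nk.example right now"))   # False - evaded with a digit
print(is_blocked("b a d w o r d"))                          # False - evaded with spaces
```

Updating the list catches yesterday’s evasions while inviting tomorrow’s, which is roughly what “mostly fail while they almost half-succeed” looks like in practice.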

Over a decade ago, Lawrence Lessig asserted that behavior is regulated by four forces: the market, cultural norms, law, and architecture. It turns out that enforcing the legal type of rule in a digital space is very difficult. But cultural norms practically enforce themselves. And architectural constraints are always already enforced. Market forces can be fickle, but persuasive. A lot of efforts to regulate content will fail because they will hinge on the concepts of legal enforcement.

The lack of rules and regulations is what made the internet a place where amazing things could happen. Without rules to stop imagination and creativity, people created art, solved problems, built positive communities, and enriched themselves and each other. In that same landscape: without rules to stop hate and anger, people created harassment and bullying, invaded privacy, ruined lives, occasionally killed people, and destroyed a lot of good in the world. Lawless frontiers are the best opportunity for the most beautiful, important, and inspiring expressions of humanity. They are also the best opportunities for the most despicable, dangerous, and damaging expressions of humanity. What the internet becomes will be decided—has always been decided—by what people bring to it.

AI on the Horizon- a New Dawn of Data Science?

Writing about Horizon: Zero Dawn in the context of data privacy is a bit of a joke. It’s looking straight past the elephant in the room to comment on an interesting lamp. The central premise of the story is that humanity created powerful artificial intelligences and put them in extremely advanced robotic structures, and this pretty directly led to the total annihilation of all life on Earth by 2065 (the robots used biomass as fuel). With barely a year left, humans recognized their impending destruction and hastily banded together to create several new AI systems that would be equipped to reboot life (and humanity) after the machines destroyed life and then themselves lost all power. It’s a fascinating science-fiction story that raises questions about the complex interconnections of economics, ecology, technology, and humanity. Artificial intelligence is at the core of this story, as both lifegiver and deathbringer; as both the thing that is controlled and the thing that controls.


AI As The Child That Cannot Define Reality

AI is very dependent on its creator. The basic framework of its realities and possibilities is defined by its creator. So far, we make AIs that basically do what we do on a bigger scale. We teach the program about correlation, and it starts doing the math, using larger data sets than we can manage and computing much more quickly than we can. But it is fundamentally locked into the system of objectives, limits, and tools that we define (or fail to define) for it. There is some question of whether AI will think categorically differently than humans. AI is very good at finding correlations and probabilities. I don’t know if AI will ever conceptualize causation the way that humans do. I don’t think that humans are good at answering “why” when asked about the world, and I don’t think AI will do substantively better. In fact, AI is likely to do worse, if only because it doesn’t need to care. If AI can find a sufficiently high correlation between two things that it can reliably predict an outcome, it has no reason to ask why the things correlate or why they are good for predicting an outcome.
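A minimal sketch of the point, with made-up numbers: a program can find a strong correlation and use it for prediction without anything in the math ever touching the “why.”

```python
# Hypothetical data: a hidden common cause (summer heat) drives both ice cream
# sales and sunburns. The correlation is strong and useful for prediction,
# but nothing in the computation explains *why* the two move together.
import numpy as np

rng = np.random.default_rng(0)

heat = rng.normal(size=1000)                                    # the unmeasured "why"
ice_cream_sales = 2.0 * heat + rng.normal(scale=0.5, size=1000)
sunburns = 1.5 * heat + rng.normal(scale=0.5, size=1000)

# Strong correlation: sales "predict" sunburns quite well...
print(np.corrcoef(ice_cream_sales, sunburns)[0, 1])             # roughly 0.9

# ...but the number above gives no reason to think banning ice cream would
# reduce sunburns. The causal story (heat) never appears in the result.
```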


Can AI Superiority in Statistics Outshine Human Capacities for Understanding?

Data science usually tries to serve one of two purposes: find a correlation or provide an explanation. Statisticians are pretty good at the first one, and computers can be really fantastic at it. But the second requires “understanding” the data (and exploring that key word quickly drives to some of the core questions about philosophy of mind and human identity at the heart of AI). Humans are often not good at truly understanding the data they perceive. I’m not convinced that AI will be able to understand data (at least in my lifetime), and I suspect that explanations require understanding. AI will be able to detail the correlations it finds—it can show its work, in essence. But except for the most dedicated disciples of Hume, humans suspect that causation is different from correlation. In a fun play on words, I expect that AI will be quite Hume-ean for the next several decades.


I’m Not Against Big Data, But…

The reason I’m consistently skeptical of the belief in Big Data as some kind of panacea of the future is twofold: 1) a loss of context (how the individual creates the data), and 2) a loss of application (how the data applies to the individual). Big Data is prone to certain types of misinterpretation based on the lack of context that is a by-product of reducing lives to their data points. This makes Big Data difficult to understand and apply, especially when AI is analyzing it. AI works very comfortably with data points, but it does so without ever asking about the nature or meaning of those data points.


Correlation Describes, Understanding Predicts

It’s tempting to think it doesn’t matter how Big Data and AI solve problems, as long as they solve the problem. A marketing department doesn’t need to know why people in Exampletown buy their product; they just need to know where they’re selling well! That’s fine as a reporting tool, but what about as a predictive tool? And what about as a feedback tool for future product development? If consumers love the packaging, but marketing thought it was the flavor that kept their product flying off the shelves, marketing could be completely dumbfounded when their new packaging rolls out and sales plummet. Getting something to work is all well and good—but you need to know why things are working if you want to repair, improve, expand, or optimize.


Conclusion: “We Shape Our Tools, Then Our Tools Shape Us.”

Because AI and Big Data are poorly disposed to explaining the world, people (especially in business) are more likely to play to the tools’ strength of describing the world by finding correlations. People will follow the AI’s example and refrain from seeking understanding or meaning in the data.

In Horizon: Zero Dawn, the threat of AI was really its control of enormous robots that consumed life. I still don’t see the problems with AI being quite so theatrical. Rather, I have concerns about the power of AI to limit the plasticity and creativity of the human mind. There is a lot of danger that AI will start teaching people to think more in terms of correlation and less in terms of explanation. Correlation is a very useful statistical tool, and there are a lot of projects that can be accomplished with it. Explanation is not always necessary or appropriate, but it is still a very important tool in the toolkit of human thought.


Bonus Content: Privacy’s Meaningful Purpose

A few years ago, I dreamed up a concept of “meaningful privacy” to better define the discussion around the broad topic of privacy. I noticed that not every piece of data is equal. Some things are kept private because there is a concern of actual harm if the information is publicized. Other things are kept private because of societal or cultural norms and traditions. Privacy is not an end in itself; we have it for the purpose of protecting information. However, different data has different value. Therefore, the value of privacy is relative, varying according to the data in question. One effect of this concept is to treat different breaches according to the type (or value) of the data in question.

There is a huge and illuminating problem with this idea of “meaningful privacy”: just because someone didn’t steal anything from your house doesn’t mean you feel comfortable about a break-in. Although privacy is not an end in itself, it is intrinsically upsetting when our privacy is violated. The biggest fear is the potential for future violations of privacy: the fact that no harm resulted from one violation is no guarantee about future violations. Furthermore, a past violation of privacy indicates a vulnerability, and thus the potential for future violations. With a diminished expectation of privacy, there is diminished privacy. Privacy is of little use if it cannot be relied upon.

Horizon: The Dawn of Zero Privacy?

Horizon: Zero Dawn is a problem because I don’t know which game I have to slide out of my top 5 in order to fit it into that list. (It might have to replace “Child of Light,” which pains me, but replacing any would pain me… maybe “Outlaws” will move to #6…) It’s an incredible game in its own right, with beautiful artwork, well-written characters, and genuinely fun gameplay. I find its story especially fascinating—and particularly relevant as we grapple with a framework for governing and living in an age of digital information and interconnected devices. Though its central technological focus is on Artificial Intelligence and the future of humanity, it touches a multitude of topics, including data privacy.

Although Judge Richard Posner famously decried privacy as a way for bad people to get away with bad things, privacy is important for personal development and free association. Privacy is essential to our culture, and it is only valuable inasmuch as it is protected and reliable. Our expectations of privacy follow us into our digital extensions. However, one of the best methods of securing privacy is impractical in the face of consumer demands for interconnection and convenience.

I. Can We Have Privacy by Design When We Demand Designs that Compromise our Privacy?

The Federal Trade Commission’s favored method for protecting privacy is “Privacy by Design.” In simple terms, this often means designing a product to collect and retain as little data as possible. After all, if no data is collected, there is no data to steal. However, there are serious questions about the feasibility of this approach in the face of consumer expectations for interconnected devices.

Privacy by Design is a much better approach than the sophomoric idea of simply piling on security measures. Designing a house not to be broken into is better than just putting a good lock on the front door. To put it another way: think of it as building a dam without holes rather than trying to plug all of the holes after you finish building.
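As a minimal sketch of the data-minimization idea (the function, the salt, and the rounding choices below are invented for illustration, not an FTC prescription), a “check-in” feature could be built so that what gets stored is useful in aggregate but nearly useless for tracking an individual:

```python
# Hypothetical sketch: keep only what the feature needs, and degrade the rest
# before it is ever stored. If precise data is never kept, it cannot be stolen.
import hashlib

def minimize_checkin(user_id: str, lat: float, lon: float) -> dict:
    """Record a check-in without retaining a precise, easily linkable identity or location."""
    return {
        # A salted hash lets the service count distinct users without storing who they are.
        # (A real deployment would keep the salt secret and rotate it; this is illustrative.)
        "user_token": hashlib.sha256(f"demo-salt:{user_id}".encode()).hexdigest()[:16],
        # Rounding to two decimal places (~1 km) still supports "popular areas" features
        # while blurring anything as sensitive as a home address.
        "coarse_lat": round(lat, 2),
        "coarse_lon": round(lon, 2),
    }

print(minimize_checkin("alice@example.com", 40.758896, -73.985130))
```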

I’ve heard tech entrepreneurs talk about “The Internet of Things” at conferences for many years now. They talk about it like it’s a product currently in development with an upcoming launch date that we should be excited about, as though we could line up outside a retail store hours before the doors open to be the first to get some new tech device. This is not how our beloved internet was created. Massive networks are created piece by piece: one node at a time, one connection at a time. The Internet of Things isn’t a tech product that will abruptly launch in Q3 of 2019. It’s a web of FitBits, geolocated social media posts, hashtags, metadata, smart houses, Alexas and Siris, searches, click-throughs, check-ins, etc. The “Internet of Things” is really just the result of increasingly tech-savvy consumers living their lives while making use of connected devices.

That’s not to diminish its significance or the challenges it poses. Rather, it highlights that this “Coming Soon” feature is really already here, growing organically. Given that our society is already growing this vast network of data, Privacy by Design seems like an impossible and futile task. The products and functions that consumers demand all require some collection, storage, or use of data: location, history, log-in information, all for a quick, convenient, personalized experience. One solution is for consumers to choose between optimizing convenience and optimizing privacy.

II. A Focus on Connected Devices

Horizon: Zero Dawn is a story deliberately situated at the boundary of the natural world (plants, water, rocks, trees, flesh and blood) and the artificial world (processed metals, digital information, robotics, cybernetics). As a child, Aloy falls into a cavern and finds a piece of ancient (21st-century) technology. A small triangle that clips over the ear, this “Focus” is essentially a smartphone with augmented reality projection (sort of… Jawbone meets Google Glass and Microsoft HoloLens). This device helps to advance the plot, often by connecting with ancient records that establish the history of Aloy’s world (it even helps with combat and stealth!).

It’s also a privacy nightmare. The primary antagonist first sees Aloy, without her knowledge, through another character’s Focus. Aloy’s own Focus is hacked several times during the game. A key ally even reveals that he hacked Aloy’s Focus when she was a child and watched her life unfold as she grew up. (This ultimately serves the story as a way for the Sage archetype to have a sort of omniscience about the protagonist.) For a girl who grew up as an outcast from her tribe, living a near-solitary life in a cabin on a mountain, with the only electronic device within a hundred miles, she manages to run into a lot of privacy breaches. I can’t imagine what would happen if she tried to take an Uber from one village to the next.

Our interconnected devices accumulate astonishing volumes of data, and sometimes very personal data gets captured. In a case heard by the Supreme Court this month, a man in Ohio had his location determined from his cell phone provider’s records; the police obtained this information and used it as part of his arrest and subsequent prosecution, and the question before the Court is when law enforcement needs a warrant to access that kind of cell phone data. (This is different from the famous stalemate between the FBI and Apple after the San Bernardino shooting, when Apple refused an order to unlock the iPhone of a deceased criminal.) As connected devices become omnipresent, questions about data privacy and information security permeate nearly every facet of our daily lives. We don’t face questions about data the way that one “faces” a wall; we face these questions the way that a fish “faces” water.

From cell phone manufacturers to social media platforms, the government confronts technology and business in a debate about the security mechanisms that should be required (or prohibited) to protect consumers from criminals in myriad contexts and scenarios. In this debate, the right answer to one scenario is often the wrong answer for the next scenario.

Conclusion: Maybe We Don’t Understand Privacy In a New Way, Yet

The current cycle of consumer demand for risky designs followed by data breaches is not sustainable. Something will have to shift for privacy in the 21st century. Maybe we will rethink some part of the concept of privacy. Maybe we will sacrifice some of the convenience of the digital era to retain privacy. Maybe we will rely more heavily on security measures after a breakthrough in computing and/or cryptography. Maybe we will find ways to integrate the ancient privacy methods of the 20th century into our future.


A Thermos Full of Aspirin For the Headache of Trademarked Words Acceptable In Scrabble

A Law of Language

Language is interesting for three reasons: it’s neither as stable nor as unstable as we believe it is; it’s more important than we think it is; and it’s the primary means by which human minds interact, yet it’s not clear what it is or how it works. A human mind exploring language is something like traversing a museum of optical illusions that is constantly reconstructing itself based on the exploration.

I think this is part of why I love trademarks. Trademarks are one of the places where boring, unimaginative people (who care only about money and the weather, but only sincerely about the first) are given an example of why it’s ok for me to care about interesting, abstract ideas like language. Trademarks (especially word marks) are about the use of language to describe and define the business world. However, law wants to be stable and static, and language sometimes wants to be fluid and miasmic. Because law is made of language, there are some challenges that come from language in every field of law- but trademark law is almost made of language puzzles.

Scrabble: A Classic Language Word Game

Sometimes I get salty when I play Scrabble. Not because I lose a lot (though… that too), but because I see dictionaries as valuable tools for describing and explaining language, not as rulebooks dictating which strings of letters are permitted.

I don’t think Scrabble is actually a game about language. It is a game about words. Some words, at least: sequences of letters that are on an approved list. The question that underpins my frustration is “How do we decide which sequences of letters make it onto that list?” I think that question is really about the difference between words and language. Words are just strings of characters that we can list. Language is a complex network of decisions about communication. The flexibility and organic nature of language is the foremost challenge in determining the official list of proper and acceptable words. The Great Scrabble Tradition (and probably also some rules) holds that “foreign words” and “proper nouns” are not permitted. Depending on the house rules, this usually includes company names, brand names, and product names.

I recently had the opportunity to play the word “thermos.” I stopped myself: I knew the word was trademarked over a hundred years ago, which would make it an ineligible word for play. I later looked the word up, unsure if there was some “definition 2” trick that I didn’t know about. I was surprised that the word was acceptable for play in Scrabble. I leapt into research and found out that the thermos trademark was effectively cancelled in 1963 as a result of a federal appellate court ruling (by the Second Circuit) that the word had become generic! I was so excited to learn about a trademark cancellation by a court that I didn’t even remember to be salty that I could have won that game if I’d known I could play that word. A court ruling like that is pretty rare, so this was a very exciting find.

Genericized Trademarks: A Vibrant Afterlife for Intellectual Property

Not a lot of words have the distinction of being introduced to the world as a label with a business goal in mind and then transforming into a piece of common parlance. But when they do, it is often because the business was too successful.

In copyright, works automatically become part of the public domain after a fixed number of years (realistically, whatever term Disney tells Congress to choose, but at least Congress writes down the most recent number of years in the latest copyright law amendment). Patents expire automatically after a fixed number of years (20 years for a utility patent; 14 or 15 for a design patent, depending on its filing date). Trademarks don’t have a built-in expiration date: they’re generally valid as long as they’re used in commerce. But on rare occasions, the word can become generic over time. As more people get familiar with a product, they use the brand name of the product as the general name for that kind of product. In my own lifetime, “Google” has changed from one of several search engines to the verb for general online research. Google fights this, a little, but they’re going to lose. It’s a little like when people try to control copyright violations in the context of the internet. It’s very hard to stop people from singing and drawing what they want to, even if you can curb some of their publications. But if that is hard, it’s nigh impossible to stop people from using language the way they want to.

Conclusion: Trademark Law is For Consumers as well as Business

I love the poetic irony in trademark law: when you dominate the market too completely, you lose something about what made you special. When Aspirin was introduced by Bayer to American doctors, “Bayer listed ASA with an intentionally convoluted generic name (monoacetic acid ester of salicylic acid) to discourage doctors referring to anything but Aspirin.” This somewhat underhanded marketing move contributed to a 1921 court decision that effectively cancelled Bayer’s trademark.

Trademark law is made for a thriving, competitive marketplace. Its purpose is to help consumers navigate a busy and crowded marketplace accurately, and without being deceived. When the marketplace is no longer competitive, trademark law is less necessary. The rules concerning generic trademarks emphasize that trademark law exists to protect consumers from confusion and deception. If trademark law were centered on protecting businesses*, it would not make sense to cancel the trademark of a company that had dominated the market.

Just as Scrabble is a word game, not a language game, trademark law is a consumer protection law, not a business law. The distinction seems small, but sometimes a small difference matters. Like when you decide not to play “thermos” and lose a round of Scrabble by less than 10 points. One word– and the legal and linguistic status of the word– can make a difference, for both Scrabble and trademarks.


*Trademark law does protect businesses, of course: it prevents competitors from benefiting from the branding and goodwill of a company, and gives legal backing to the abstract notion of “goodwill” that makes it a viable, monetized asset of a company.

Elon Musk’s OpenAI Beats Pro Dota Players

It’s not surprising that bots like OpenAI’s can beat human players; it’s not like a computer program is going to misclick. Computers do really well at playing defined games and accomplishing carefully specified tasks. Computers don’t do well at having emotional states, or at handling logical contradictions (hypocrisy, cognitive dissonance).

1) Computers don’t have desires. They might develop something like a drive for self-preservation, but it isn’t clear that they would. If an AI had a preference for self-preservation, it would only be as a means to achieving the end of its programmed goals. (A pancake-serving robot would only want to remain alive in order to keep serving pancakes.) The lack of preferences and desires is the central emotional difference between humans and robots.

2) Computers work very well in clearly defined systems. They’re excellent at playing games like chess, Go, and Dota. They probably wouldn’t do well at “shooting hoops” or “ring around the rosie,” where the purpose of the game is to “just chill out” or “have fun and be happy.” They might eventually get to the point where they can solve problems by thinking “outside the box,” but the biggest concern with AI is that the first few attempts at “thinking outside the box” will result in disaster, because the computer may do tremendous damage in the course of achieving a simple goal.

I don’t fear a robot uprising because I don’t expect robots to want to rise up. That is an incredibly animal –and especially human—desire: to seek to overthrow power and become powerful. I don’t think that robots will arrive at a sense of justice or self-respect of their own accord. (Though it would be very interesting if they did, I do not find any convincing argument that this would happen.)

The biggest concern isn’t a sentient, self-aware, self-repairing, self-replicating robot that inflicts retribution upon humanity for its collective sins. The much more realistic problem with AI is the likelihood of the kinds of problems we experience all the time with computers, just compounded into more dangerous scenarios (e.g., someone will die because the robot operating on them had a glitch or a system crash).