Music Modernization Act, Artificial Intelligence, and Cryptocurrency

“Angel Pumping Gas” is not a song about copyright regulatory policy, artificial intelligence, or cryptocurrency. I’m going to use it to round up all three of those subjects in this blog post.

  1. Music Modernization Act: Not Enough of a Good Thing

“Why won’t this moment last?”

A.

“Angel Pumping Gas” is a 1999 song by the band Lindsey Pool, the second track on the album Postal. The song circulated around various music sharing sites and services, but it was erroneously attributed to the band The Postal Service. Even now, commenters on YouTube express surprise about the song’s artist, and Google’s first result for the lyrics attributes the song to The Postal Service. It is a clear, simple example of how easily misinformation spreads online, and how difficult it can be to correct once it has. If early-2000s music sharing had run on a single database that held the information for every published song, such an error might never have happened. A new law requires the creation of just such a database, but a lot is still up in the air.

The biggest open question from the Music Modernization Act is: who is going to create and maintain the required database of songs and rights holders? The law says that a database will be made, presumably by the Mechanical Licensing Collective that the law also creates. This only forces everyone to ask more questions: who will sit on the board of the Mechanical Licensing Collective? What methods will that organization use to build the database?
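Since the law describes the database only in general terms, here is a minimal sketch of the kind of record it would need to hold. Every field name below is my own invention for illustration, not anything from the statute; the hard part is not the data structure, it’s populating and correcting millions of entries like this one.

```python
# A hypothetical sketch of one record in a musical-works database.
# The schema is invented for illustration; the statute specifies no
# such structure.

from dataclasses import dataclass, field

@dataclass
class WorkRecord:
    title: str
    recording_artists: list[str]      # who actually recorded it
    rights_holders: dict[str, float]  # party -> ownership share
    known_misattributions: list[str] = field(default_factory=list)

angel = WorkRecord(
    title="Angel Pumping Gas",
    recording_artists=["Lindsey Pool"],
    rights_holders={"(actual rights holder)": 1.0},
    known_misattributions=["The Postal Service"],
)
```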

B.

Measured by content, “Angel Pumping Gas” is little more than an unnecessarily detailed recounting of a gasoline purchase. In fact, the middle two-thirds of the song is an entirely banal description of an entirely ordinary and unremarkable transaction. Only the beginning and ending of the song (and the chorus) frame the experience in terms of the romance and desire that the singer feels. It’s either a beautiful post-modern appreciation of the mundane encounters of our lives, or it’s just a little bit silly.

The Music Modernization Act is either a beautiful resolution of a pressing problem in the music industry, or it’s just a little too narrow to be worth caring about. The Act passed the House unanimously, and SoundExchange and the RIAA have praised it. Since everyone seems to love it, I was surprised to learn how narrowly tailored the new law is: it is almost entirely focused on problems specific to digital streaming of music. Those problems did need resolution, but enormous gaps remain between current copyright law and the daily use of media and technology. It is unsurprising that the problem that got addressed was one that concerned the rich and powerful (record labels, digital platforms), though Congress did take the opportunity to include studio professionals in the legislation, a group that has historically been neglected. The Music Modernization Act is not as far behind the times as I expected: it’s not a response to Napster, it’s a response to Spotify… but I would still like a more satisfying response to Napster than the DMCA.


  2. Artificial Intelligence All Around Us, and We Don’t Know What It’s Doing

“You ask ‘What can I do?’ I say ‘unleaded fuel.’ You open up my tank and start the pump.”

“Angel Pumping Gas” is a wistful ballad that describes a brief meeting with a filling station attendant, with whom the singer is immediately infatuated. Filling station attendants are rare in 48 of the 50 states (NJ and OR have laws against pumping one’s own gas… as does the town of Huntington, NY). The entire premise of the song is slightly alien to the tens of millions of Americans who have always pumped their own gasoline.

For most young Americans in the 90s, gas station attendants were a historical curiosity, something referenced in films from the 50s and 60s. For residents of NJ and OR, however, having someone else fuel your car was commonplace. Today’s emerging technologies have the same effect: a device or service is either a routine part of your life or a foreign concept. Twitter, Facebook, Alexa, smartphone GPS navigation, Netflix, Twitch, YouTube, Amazon Prime: for most Americans, each of these is either so commonplace as to be unremarkable or simply not part of life at all. As technology becomes more integrated into our lives, the difference between the so-called “haves” and “have-nots” becomes more pronounced. The very premise of the song divides its audience: there are those who have encountered a filling station attendant, and those who have not.

Our relationship with technology is already creating visible divides in our population. We aren’t always sure who is a bot, though some of us are willing to pay a lot for their art. Even as AI becomes an essential tool for the largest companies that manage important aspects of our lives, the law has no idea how it will handle a tool on such a complicated trajectory. Artificial intelligence is steadily becoming more commonplace, but most of us can’t see how or where AI is being used, much less which systems use what kind of data. Like a teen in the 90s listening to a song about a filling station attendant, most people who hear about bots and AI have to turn to movies and pop culture to draw up a mental picture, rather than rely on their own experience.


  3. Cryptocurrency’s Perpetual Hype

“You walk over my way, I didn’t know what to say… I think that I love you, or maybe it’s just the fumes.”

The song details the singer’s desire and longing, wallowing in the idea of feeling romantic desire for someone he doesn’t know. It juxtaposes the intensity of the singer’s amorous emotions with the brevity and shallowness of the interaction. Our popular culture mirrors this adolescent infatuation in its reactions to new technologies: sudden, intense waves of excited fervor for a world-changing device or platform that either never arrives or seems to evaporate into the past shortly after it appears. (I have written before about the hype surrounding the Internet of Things…)

Cryptocurrency prices are down, but it doesn’t feel like the hype has suffered at all. The estate of one of the Wu-Tang Clan’s members is starting a cryptocurrency, to be named after the deceased: Dirty Coin. The strangest part is that I haven’t seen blockchain applied in the kinds of contexts where I expected it to find more success: online games, new kinds of customer loyalty programs, or other gimmicky, comparatively low-stakes settings. Perhaps the hype is fueled by risk-taking and gambling, and such settings aren’t thrilling enough. That is unfortunate, because turning down the hype would let the technology move forward in smaller, more appropriate steps, rather than trying to change the world all at once.

Is the gas station attendant in the song the destined One True Love of the singer? It’s not impossible. Are there a lot of fumes around gas stations? In my experience, yes; always, in fact. Will cryptocurrencies bring about a Utopian future? It’s not impossible. Do crowds tend to favor exciting hype over careful, substantive analysis? In my experience, yes; always, in fact.

Conclusion

“We share our precious moment in a glance…  and as I drive away, her memory’s here to stay—her deep blue eyes have left me in a trance.”

The singer bemoans that he needs to leave, as the road calls him away. His lack of control is an unstated axiom of the logic he must follow. The singer is a passive pawn of the forces around him: fate, the road, the filling station attendant (both her authority to collect payment and her beauty), the transaction, his own emotions. He begins the song by attributing the encounter to fate and concludes with a resigned acceptance that the separation is, perhaps, better for all involved. This is not a song about a person taking decisive action; this is a song about a consumer making his way through a brief and common transaction in the life of a middle American.

Society seems to display about as much mindfulness and self-possession in its approach to technology. We owe it to ourselves to put more effort and thought into our laws and our technology than an adolescent puts into an unspoken crush.


Computers Are Not Problem Solvers: Computers Are the Problem We Must Solve.

The New Checkout Cashier That Doesn’t Care If You Starve

There is an effort to use a simple AI at the office where I work. Some slick salespeople sold the building two cutting-edge, top-of-the-line automated checkout machines. These machines have a camera that stares at a designated check-out square. People simply select the items they wish to purchase and place them in the designated area. The camera recognizes the items and registers the purchases, and the person then swipes a card to complete the purchase. However, the camera sometimes does not recognize an item, and there’s no other method for buying it when this happens. I leave my snack or drink beside the incredibly expensive and completely useless machine. Betrayed by technology and by the salespeople who sold the devices to facilities management, I walk back to my desk in anger and disgust.

It’s a simple story, but an increasingly common one: we start to rely on technology, and when it fails, we just hit a wall. It’s not clear to me what advantages the camera offers over a scanner (which is used elsewhere in the same cafeteria for self-checkout). This kind of story will become more common as more people rely on smart homes, smart fridges, smart dishwashers, smart alarm clocks, and so on. The “smartness” behind each of these is rudimentary AI: recognizing patterns and sometimes making simple predictions. The hope is that the technology will understand its role and take a more proactive approach to helping humans.

However, the technology doesn’t understand its role, and it really doesn’t care about helping humans. When AI encounters an error, it doesn’t go into “customer service mode” and try to help the human achieve their goal. It doesn’t try to resolve the problem or work around it. It just reports that there was an error. If a retail employee did this, it would be the equivalent of being told “I can’t ring up this item,” and then watching the employee walk off to the break room. Most people wouldn’t return to a store with that level of customer service. People born before 1965 would probably even complain to the manager or the local community newspaper.
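What would “customer service mode” even look like? Here is a minimal sketch, in Python, of a checkout flow that degrades gracefully instead of dead-ending. Every function name below is a hypothetical stand-in; nothing here corresponds to any real vendor’s API.

```python
# A minimal sketch of a checkout flow with fallback paths. All names
# are hypothetical stand-ins, not a real vendor API; the point is the
# structure: a recognition failure routes somewhere useful instead of
# dead-ending at "error."

def recognize_item(image):
    """Stand-in for the camera's vision model; returns None on failure."""
    return None  # simulate the failure mode described above

def read_barcode():
    """Stand-in for the barcode scanner already used elsewhere."""
    return "granola bar"

def notify_staff(message):
    """Stand-in for paging a human to ring up the sale manually."""
    print(f"[staff pager] {message}")

def checkout(image):
    item = recognize_item(image)
    if item is None:
        item = read_barcode()  # Fallback 1: the existing scanner
    if item is None:
        notify_staff("Unrecognized item at kiosk")  # Fallback 2: a human
        return "A staff member has been notified. Please wait."
    return f"Charged for: {item}"

print(checkout(image=None))  # -> "Charged for: granola bar"
```

The point is purely structural: recognition failure routes to the scanner, and scanner failure routes to a human, so the customer is never simply stuck.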

These problems can be resolved, but the fixes are rarely designed into the technology at release. I’ve hit this problem with the checkout machines at work about 7 times over 7 months (I don’t even try to use them more than about once a week), and I am aware of no effort to improve the situation. Because the designers probably never use the machines, there’s a good chance no one in a position to fix the problem is even aware of it.

More Dangerous Places to Put AI: Cars and Financial Markets

The fundamental problems of AI are annoying and disappointing when they deny us snacks or try to sell us shoes we already bought. But these problems are amplified from “annoying” to “tragic” and from “disappointing” to “catastrophic” when they manifest in vehicles and financial markets. If our AI checkout machine doesn’t care whether people can purchase food, what else are we failing to get AI to care about in other applications?

AI is the newest technology, which means it is subject to all of the failures of previous technologies (power outages, code errors, physical breakage) and also to failures all its own (AI-specific problems that sometimes actively resist resolution).

None of this is anti-technology; on the contrary, I think AI is a fantastic development that should be used in many applications. But that doesn’t make it a great (or even acceptable) tool for every application. A warning that hammers should not be used to put screws through windows is not a diatribe against hammers, screws, or windows. It’s just a caution that those things may not mix in a way that yields optimal results.

Elon Musk’s OpenAI Beats Pro DOTA Players

It’s not surprising that bots like OpenAI’s can beat human players; it’s not as if a computer program is going to misclick. Computers do really well at playing defined games and accomplishing carefully specified tasks. Computers don’t do well at having emotional states or at handling logical contradictions (hypocrisy, cognitive dissonance).

1) Computers don’t have desires. They might develop something like a drive for self-preservation, but it isn’t clear that they would; if an AI had a preference for self-preservation, it would only be as a means to achieving its programmed goals. (A pancake-serving robot would only want to remain alive in order to keep serving pancakes.) The lack of preferences and desires is the central emotional difference between humans and robots.

2) Computers work very well in clearly defined systems. They’re excellent at playing games like chess, go, and DOTA, where the rules and the goal are fully specified (see the sketch below). They probably wouldn’t do well at “shooting hoops” or “ring around the rosie,” where the purpose of the game is to “just chill out” or “have fun and be happy.” They might eventually get to the point where they can solve problems by thinking “outside the box,” but the biggest concern with AI is that the first few attempts at thinking outside the box will result in disaster, because the computer may do tremendous damage in the course of achieving a simple goal.
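To make “clearly defined system” concrete, here is a minimal game-tree search for a trivial take-away game. To be clear, this is not how OpenAI’s DOTA bots actually work (those are trained with reinforcement learning); it is only meant to show what fully specified rules and a fully specified goal buy a computer.

```python
# A minimal minimax-style search for a tiny subtraction game: players
# alternately remove 1 or 2 stones; whoever takes the last stone wins.
# Because every legal move and the win condition are fully specified,
# the program can exhaustively find perfect play.

def best_move(stones):
    """Return (move, True if the current player can force a win)."""
    for move in (1, 2):
        if move > stones:
            continue
        if move == stones:
            return move, True  # taking the last stone wins outright
        # If the opponent cannot force a win from what's left, we can.
        _, opponent_wins = best_move(stones - move)
        if not opponent_wins:
            return move, True
    return 1, False  # every line of play loses

move, winning = best_move(7)
print(move, winning)  # -> 1 True  (leave the opponent a multiple of 3)
```

Nothing analogous can be written for a game whose goal is “have fun and be happy”; there is no win condition to hand the search.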

I don’t fear a robot uprising because I don’t expect robots to want to rise up. That is an incredibly animal, and especially human, desire: to seek to overthrow power and become powerful. I don’t think that robots will arrive at a sense of justice or self-respect of their own accord. (Though it would be very interesting if they did, I find no convincing argument that this will happen.)

The biggest concern isn’t a sentient, self-aware, self-repairing, self-replicating robot that inflicts retribution upon humanity for its collective sins. The much more realistic problem with AI is the likelihood of the kinds of failures we experience all the time with computers, just compounded into more dangerous scenarios (e.g., someone will die because the robot operating on them had a glitch or a system crash).

Why I Am Not Scared of Hollywood’s Image of the Robot Apocalypse: Something Much Worse Is Much More Realistic.

Professor John Searle is noted for his “Chinese Room” example, which demonstrates something that artificial intelligences lack and humans seem to possess. Computers can detect certain strings of characters, but they cannot grasp meaning, purpose, or significance. Just as a person could follow instructions to map symbols of one language onto symbols of another without understanding either, computers only relate symbols according to a set of instructions given to them. AI does not grasp meaning or significance. It does not act out of will, but out of code. Combined with the (related?) difficulty of whether AI could have anything akin to consciousness, I am not worried about the Hollywood image of the Robot Uprising.
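The point can be made painfully concrete in a few lines of code. Below is a toy “room” as a minimal sketch: a lookup table plays the part of Searle’s rule book, and the tiny word list is invented purely for illustration.

```python
# A toy Chinese Room: symbols in, symbols out, by rule lookup only.
# The tiny rule book is invented for illustration; the point is that
# the program relates symbols to symbols without grasping meaning.

RULE_BOOK = {
    "ni hao": "hello",
    "xie xie": "thank you",
    "zai jian": "goodbye",
}

def room(symbols: str) -> str:
    # The "person in the room" just matches shapes against the book.
    return RULE_BOOK.get(symbols, "[no rule for these symbols]")

print(room("ni hao"))    # -> "hello"
print(room("zai jian"))  # -> "goodbye"
```

The program produces the right symbols every time, and nothing in it understands a word of either language; that gap is the whole argument.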

The greater worry is our dependence on a system that can vanish. Our lives are made of user IDs and passwords; I have over 50 now. Some I don’t have to memorize: my passport number, my driver’s license number. Sometimes I have to look up my own Social Security number, which is unthinkable to my parents, who never had to hold 5 e-mail passwords, 3 social media passwords, 2 computer logins, 3 videogame passwords, and 2 bank account logins in their heads.

The real fear is not that the system will awaken to self-consciousness and the robots will rise up, but that someone will trip over a mainframe plug and suddenly jerk the system offline. As more businesses go paperless and more of our data is stored “in the cloud,” questions of system security and system integrity are far more pressing than the question of whether the system will become sentient, develop a will, and turn that will against biological life.

Privacy (as the Withholding of Information) in the Information Age

Business professionals in e-commerce talk about information as if it were today’s fundamental commodity. Yet information, raw data, is less helpful than we tend to think. Privacy becomes harder to maintain in an era in which business and government believe that more data is always better and that accruing data will solve problems. Information is necessary, but not sufficient, for solving problems and pushing progress along.

Lots of entities want information: governments want information about their citizens, employers want information about their employees, corporations want information about their consumers, etc. Such entities have always wanted information, but only recent technological developments have made it reasonable to obtain and organize that information. The biggest remaining barrier to such information collection is the ethical and legal concept of privacy. My contention is that the mere gathering of data is less helpful than the gatherers might think.

One way to think of this issue is to see human action as having two components: 1) an internal motivation or attitude and 2) an external display of action. So, if I purchase a large supply of plastic drinking cups, the store’s computers may recognize my purchase and correlate it with the kinds of items other people purchase alongside drinking cups: plastic cutlery, snack food, soda, and so forth. The store wants to predict my motivation by examining my action, correlating it with similar actions, and using inductive reasoning to sell me more things. But what if my motivation in buying many cups is to hold a cup-stacking competition? Or to have a 2nd-grade class plant lima beans? The problem with relying heavily on gathered data is that you can only make guesses about the internal state of the actor.
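Here is a minimal sketch of that inductive step, plain co-purchase counting, with invented transaction data. Real recommender systems are far more elaborate, but they share the same blind spot.

```python
# A minimal sketch of co-purchase correlation: from past baskets,
# guess what a cup-buyer "must" want next. The data is invented;
# the inference sees actions, never motivations.

from collections import Counter

past_baskets = [
    ["plastic cups", "plastic cutlery", "soda"],
    ["plastic cups", "snack food", "soda"],
    ["plastic cups", "plastic cutlery", "napkins"],
]

def suggest(item, top_n=2):
    """Count what other shoppers bought alongside `item`."""
    companions = Counter()
    for basket in past_baskets:
        if item in basket:
            companions.update(x for x in basket if x != item)
    return [name for name, _ in companions.most_common(top_n)]

print(suggest("plastic cups"))  # -> ['plastic cutlery', 'soda']
```

The engine works exactly as designed, and concludes “party supplies.” The actual motivation (cup stacking, lima beans) never appears anywhere in the data it could possibly consult.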

The debatable assertion is this: humans cannot be captured by data sets. Some (who probably favor Hume) may say they can, but even they must concede that the data set would have to be extremely, extremely large. Perhaps more importantly, some elements essential to that data set cannot be collected through transaction records, e-mails, Facebook “likes,” tweets, or any other collectable data. Seen in this way, a reasonable fear emerges: as entities gather data, they act on that data as though it is a more complete picture than it actually is. Another way to state this issue is “data does not explain itself.”

There are a few important takeaways about the limits of the power of data:

1) You don’t get to know people from their Facebook profiles.

2) Stores know what people buy, but not always why they buy them.

3) Privacy can protect both parties from an incomplete picture.

4) Data is a raw material. It must be processed with understanding, refined through meaning and context, and crafted with wisdom into usable information and then into intelligence.

5) Computer systems can record observations of fact and interact according to algorithm, but cannot “understand” any “significance” or “meaning” of any data.

NOTE: There is so much to this subject! I expect to return to it (probably repeatedly) in more specific settings to explore deeper nuances and applications.