The Scare of Abandonware

It’s nice to have law in a society to bring a sense of predictability. Clear and organized laws allow us to understand the consequences of our actions. Knowing the law lets us make choices based on the expected outcomes. However, there are a few areas of law where outcomes are not so obvious. Abandonware is an interesting case of 21st century law. Copyright law simply doesn’t outline what to do when a company publishes a game and then closes its doors. It’s scary for cautious lawyers to discuss because of that uncertainty. As always, this blog post is NOT legal advice– in fact, it’s mostly about why giving legal advice about abandonware is difficult.

How Games Get Abandoned

Abandonware isn’t entirely limited to software, but the differences in technology and industry norms and structure make it a far larger problem for software than any other media. It’s no surprise that the book, radio, television, film, and music industries have never needed a statute on abandoned works.

When game studios close, they are often bought by other, larger studios- or at least their IP assets are. However, sometimes the IP of a studio doesn’t get purchased – it just gets abandoned. Copyrights in the US last at least 70 years. Although courts have ruled that not every work has a recognized owner at the time of creation, courts have not definitively addressed the issue of abandoned works. (It is possible to officially declare a work abandoned and part of the public domain, but this is not automatic for IP that is simply left behind by a defunct company.)

Who Would Have The Right To Sue?

There are a few fundamentals that have to be in place for a case to even get seriously looked at by a judge. There must be an allegation of a violation of a law, for one thing. Additionally, the plaintiff must have “standing.” This means the plaintiff was harmed by the breaking of the law. A case must also be “ripe” (the alleged harm must have already occurred or be imminent, not merely speculated or predicted to occur sometime later), and the case cannot be “moot” (resolving the case must make an actual difference to the injured party).

In the case of abandonware, could these fundamentals be met? Sometimes revenue is still given to developers whose companies have closed shop, but it’s unclear how often this is the case.  In most cases, it seems that no one can claim to be damaged by the unauthorized distribution of the software, because no one can claim they lost money as a result. Further, any case would be moot because ceasing the distribution would not make any difference to a non-existent competitor.

Despite the unlikely odds of an abandonware suit even getting to trial, distributing abandonware still feels a little risky for two reasons. First, unlike trademarks, copyrights are not contingent on use in commerce, and unlike abandoned property there is no law describing how to treat abandoned works. Second, it’s an unexplored area of law, which means that there isn’t precedent either to argue in court or to consider when advising a client.

Who Gets the Loot of the IP License When a Company Dies in the Dungeons?

Despite the murkiness, some abandonware cases seem clearer than others. Some games from the 80s and 90s seem well and truly abandoned. However, if a copyright is assigned to a corporation and that corporation then goes defunct or is bought, it’s sometimes unclear who owns the copyright. Other games may carry a sort of tangential active ownership that could complicate a case. For an example of both of these complications, let’s consider a 1991 game that featured IP licensed to a game developer and a publisher (both now defunct): Eye of the Beholder.

Dungeons and Dragons was owned by TSR, Inc until that company went out of business and sold most of its D&D intellectual property assets to Wizards of the Coast (a company owned by the toy company Hasbro, Inc). Eye of the Beholder was a game made by Westwood Associates (bought by Electronic Arts and defunct since 2003), though the title screen clearly identifies it as an Advanced Dungeons and Dragons game. The game was published by Strategic Simulations, Inc (bought by Mindscape and defunct since at least 2011), who worked with TSR on dozens of licensed D&D games.

With Westwood and SSI now out of the picture, can Wizards of the Coast claim ownership of these 30-year-old games based on the use of its D&D mark in them? Wizards of the Coast would probably not prevail on a claim of direct ownership of these games. As far as I can tell, courts have not addressed a case in which a party bases a claim on IP that is inside another product. The closest cases involve the use of a person’s likeness in a game, but the plaintiffs don’t try to claim ownership over the entire product. It may be that the original license agreement puts the “D&D” IP out of the reach of claims by TSR, and therefore out of the reach of WotC.

Ideally, the licensing contract between TSR, Inc and Westwood Associates has a paragraph for just this kind of question (this is why it pays to draft contracts with the worst possibilities in mind, like your company going out of business). If a court faced the claim that WotC has rights in the distribution and sales of games featuring D&D settings and characters, I suspect* it would dismiss the claim on the basis of laches rather than address the tangled mess of IP licensing claims.

Conclusion: We Can Know The Risks, If Not the Outcomes

Abandonware seems to be technically illegal, but it also seems to be nearly unenforceable. That’s an uncomfortable place to be. It’s a strange state, and there are hardly any appropriate analogies that would help explain it. The best analogy might be a comparison to an old game that, despite being technically functional, won’t run on a current operating system. Abandonware’s legal challenge might be best described by its technical challenge.

 

*There is always a small risk of a surprise in court: A court could create the principle that when a party does not exist to protect a licensed IP, the licensor may step in and act as owner of that IP for some limited purpose. Some would call that “legislating from the bench.” The judge would call it “meeting the demands of justice in the face of technological development.”

The Race for Data: Consumer Privacy in a Red Shell

Mario Kart hasn’t outsold the standard Mario formula, but it has been the most successful adaptation of the characters. The lack of multiplayer wasn’t a big deal for games on the original 80’s Nintendo Entertainment System; just running to the right and jumping on boxes was good enough. As demand for multiplayer games grew, Mario Kart proved to be one of Nintendo’s best ideas. Racing games don’t need a lot of explanation, and getting to steer your favorite characters to the finish line made for hours of fun for family game night, birthday parties, and college dorms. Nintendo also made their fun additions to their racing game easy to understand: banana peels make your opponents lose control and crash, mushrooms provide a speed boost to help you catch up (especially useful after a crash), and getting hit by a turtle shell the size of your kart is never good. The weaponized shells come in a few colors, but the red shell is particularly powerful because it follows its target’s movements, making it nearly impossible to dodge.

So, when the minds of marketing, data science, and software development came together to create a way to track gameplay data and correlate it to advertising for each unique player, a popular video game weapon that followed a target seemed like a good fit for the name of the product. Maybe a representative from customer relations or ethics would have raised a concern about naming a product after something aggressive and destructive. That kind of name raises a red flag for some people, and it raises two red flags if it also shares a name with a known malicious virus. Unfortunately, it fell to the players to explain that secretly targeting customers to collect data is an unpopular choice.

 

Red Shell Discovered

Earlier this year, a few Steam users discovered a tracking program hidden inside some game software. The tracking program was called Red Shell. I have not found any indication that users were informed (at least explicitly) of the presence of this tracking software within the games that consumers purchased, downloaded, and installed. The stated purpose of Red Shell is to track user data that can be matched with marketing data to optimize marketing strategies. Despite the fact that the data collected from a user is called a “fingerprint,” developer Innervate is on record as believing that the clandestine program, which allows no opt-in (or even opt-out) decision, is GDPR compliant because it does not collect personally identifying information, just a broad mass of data associated with a user.

Software companies got a different kind of marketing feedback as outraged customers spoke out on forums and social media, attacked games with negative reviews, and called for boycotts against the offending games. I did not find any evidence that Red Shell is harmful or pernicious in any way, and most users seem to agree with that assessment. But actual, or even potential, harm does not seem to be the problem. Rather, the issue seems to be that the customers feel betrayed, deceived, and… well… played.

 

Lessons from the Wreckage

In Mario Kart, red shells cause your opponents to crash. In June of this year, the program Red Shell caused player trust to crash. Red Shell may be GDPR compliant, but the scandal now serves as an example of why mere technical compliance is not always enough.

I think Red Shell would have enjoyed reasonable success if players had been given the choice to opt in. Other companies use clear, voluntary methods to collect data from users, from surveys to system scans. I understand the appeal of “having all of the data,” and the appeal of letting computers do the bulk of the gathering and processing automatically. The efficiency, speed, and scale would be hard for humans to match. But computers don’t understand the values of trust, preference, and autonomy.
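
To make that concrete, here is a minimal sketch (in Python) of what an opt-in gate in front of gameplay telemetry might look like. The function names, settings file, and event format are all hypothetical rather than Red Shell’s or any real SDK’s design; the only point is that collection defaults to off until the player explicitly says yes.

```python
# Hypothetical sketch of opt-in telemetry. Names and structure are illustrative,
# not taken from Red Shell or any real analytics SDK.
import json
import uuid
from pathlib import Path

SETTINGS_FILE = Path("player_settings.json")  # hypothetical location of the player's choice


def has_opted_in() -> bool:
    """Return True only if the player has explicitly consented to telemetry."""
    if not SETTINGS_FILE.exists():
        return False  # no recorded choice means no collection
    settings = json.loads(SETTINGS_FILE.read_text())
    return settings.get("telemetry_opt_in") is True


def record_choice(opted_in: bool) -> None:
    """Persist the player's explicit yes/no answer to a clearly worded prompt."""
    SETTINGS_FILE.write_text(json.dumps({"telemetry_opt_in": opted_in}))


def send_gameplay_event(event_name: str, details: dict) -> None:
    """Queue an anonymized gameplay event, but only if the player opted in."""
    if not has_opted_in():
        return  # default is silence, not collection
    payload = {
        "session_id": str(uuid.uuid4()),  # per-session ID rather than a persistent fingerprint
        "event": event_name,
        "details": details,
    }
    print("queued for upload:", payload)  # stand-in for a real network call


if __name__ == "__main__":
    record_choice(True)  # imagine the player ticking a clearly labeled box in the options menu
    send_gameplay_event("race_finished", {"track": "rainbow_road", "position": 3})
```

The design choice that matters is the default: if there is no recorded answer, nothing is collected, and the player can change the answer at any time.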

Innervate lost sight of the real, ultimate reason for gathering player data in the first place: improve a developer’s bottom line through a better understanding of the player. By failing to connect empathy with the notion of “understanding,” they overlooked what they were losing in exchange for the increased efficiency and scale of their product. The effort to understand brand loyalty undermined the trust and loyalty to the brand. Data that is properly collected and carefully understood in the right context can be a powerful tool for better products and better service. But taking a shortcut around your goals to try to achieve them is just driving faster with no sense of direction.

 

Red Shell Takeaways:

ALWAYS remember that data is not an end in itself- think about WHY you want data.

Other things matter besides the data you think you need- consider the competing values.

Consider ways to get data that don’t interfere with other goals. Consider ways to get to your goals that don’t rely on the data you are chasing.

Don’t lose sight of your larger goals/objectives during your search for data; don’t let your race for data undermine your quest for success.

Popularizing Formats For Sitting At a Table and Having a Spirited Discussion

Mediation has a surprising amount in common with the tabletop game Dungeons and Dragons.

1) Most people know very little about either one.

2) People who have heard of it often think it’s a waste of time, and may deride those who support it.

3) Neither is promoted in mainstream culture.

4) The formats bear some similar appearance: Several people sit around a table. One person seems to be “in charge,” but really, that person is just helping the other people at the table actually make meaningful decisions by providing structure and clarity for the process.

5) Neither one has a final, decisive ending that declares a winner. Rather, the purpose for both activities is to have a mutually satisfying experience and outcome; everyone wants to walk away from the table feeling like it was a worthwhile investment of 3 hours (… or 5 hours… or 18 hours…).

6) The enemy that must be defeated is abstract in both cases. For D&D, it’s the… well, the Dungeons and Dragons that must be overcome (it’s extremely clear naming). In mediation, it’s the conflict itself that is the enemy– not the other person.

More people than ever are playing D&D- and even filling theaters to watch professionals play it. Can mediation find the same increased acceptance in our culture?

 

The Wizardry of Brand Management

D&D surged in popularity in the last few years. The owner of the game and the brand, Wizards of the Coast (WotC), has rebuilt and redesigned the rules and format several times since acquiring the brand from TSR in 1997. When launching the 5th edition of the game in 2014, WotC leveraged social media to demonstrate how the game worked. The 5th edition was easier to understand, easier to play, and easier to watch than any previous edition. These changes made it more inviting for new players and also made it much more of a spectator event, which fit with the use of streaming services like Twitch and YouTube. Enthusiasts started to publish their own gaming sessions online, effectively turning their gaming product into a TV show—sort of a strange inverse of how most children’s cartoons worked in the 80s and 90s to sell toys. Like so many video games that now comprise the esports corpus, D&D became a game that collected an avid fan base and consistent spectators to fill streams and theaters. Podcasts, streams, and live performances have introduced thousands of new players to the game, as well as rekindled the imaginations of those who have not rolled a twenty-sided die in decades.

Despite their broad similarities, mediation has not exactly kept pace with D&D’s surge in popularity. Notwithstanding the overwhelming differences in cost, time, and (arguably) effectiveness, litigation remains the gold standard for dispute resolution in matters of legal consequence in the US.

Courtroom drama television shows (and “procedurals” generally) have done well in the US. A regular program centered on mediation could easily do as well as any long-running legal procedural show. Wizards of the Coast brought D&D out of derision and obscurity (even dismissing alleged satanic affiliations) by making it comprehensible and accessible. They used every possible tool to present an alien and esoteric game structure in a way that was engaging and entertaining, while at the same time gently informing viewers who simply watched the process.

 

Two Obstacles To Mediation’s Popularity

There is a snag in the economics of promoting mediation: Wizards of the Coast is financially incentivized to promote their D&D product. A lot of wealthy people and companies are not necessarily incentivized to promote mediation as a primary form of dispute resolution. Trials can be incredibly expensive, and their complexity and cost often favor the side with more money to hire more experienced attorneys. Those with advantages of any kind, in any setting, are typically unwilling to give up those advantages. If the US legal system creates any advantage for those with power or wealth, it is easy to see why power and wealth would not be used to promote an alternative method of dispute resolution.

The other primary obstacle is the lack of cohesive ownership over mediation. D&D is a gaming product owned by a single company, and so decisions surrounding its brand management are made by a single entity. Mediation is a broad structure of dispute resolution, not owned by any particular body; indeed, it is not the kind of thing that is subject to trademark or patent protection. There are trade groups and individual specialists who would like to see mediation increase in popularity, but there is no single entity with resources and authority over mediation, and the relationship is not comparable to that of a company with its product. The lack of a trademark or ownership makes branding extremely difficult. Wizards of the Coast is able to manage D&D carefully, shutting down counterfeit products and distinguishing itself in the gaming market; no one can do the same for mediation.

 

The Cultural Boost for Competitive over the Cooperative

If popularity is about brand management, mediation seems condemned to obscurity because that brand can’t be effectively managed.

But how did litigation get popular without a trademark and a livestream? Perhaps the adversarial attitudes in litigation fit naturally with a competitive culture. Litigation so often becomes about beating the other side, rather than beating the conflict itself. Mediation is most successful when each side sees the obstacle as the conflict itself, and everyone works together to defeat that problem—not to defeat each other.

Despite the epithet of “rules lawyer” to describe many D&D players, a society that played more cooperative tabletop games would probably be less litigious. Taking a few hours to learn to work with someone who has different personal objectives from your own is an unusual activity in our culture, but learning to listen and cooperate might have value in an increasingly interconnected and networked society.

Evil Vines Choking Out Unenumerated Protections (An Afterthought on Legislating for Changing Technologies)

Legislation always faces a problem of enforcement. That problem can take many shapes: lower courts or police may refuse to enforce the law, citizens may refuse to obey the law en masse, or crafty schemers may look for loopholes and technicalities so they effectively break the law without penalty. There are multiple laws, cases, opinions, and other legal indications that children merit special and particular protection online and in digital interactions. However, there is no law specifically forbidding inflicting digital violence on a child’s avatar in a game until the child pays non-digital money, and I’m almost surprised it took so long for someone to find that opportunity. I think Penny Arcade misunderstands the problem. The problem is that all of those legal efforts to protect children could never cover every possible way that someone might try to exploit a child in a digital setting. When someone wants to exploit people for money, they only worry about the law in three ways: not getting caught, not getting tried, and not getting convicted.

This kind of example raises concerns not just in the video game industry, but across industries affected by the new General Data Protection Regulation. It would be unfairly cynical to even hypothesize that every company is nefarious, of course. A good many companies have a genuine desire to uphold the GDPR rights of their users, and their task is to work toward official compliance with the GDPR requirements– a few will even go beyond that minimum and take further measures for privacy and security. Notwithstanding, some controllers and processors still want to exploit their users, and their task is now to figure out how to sneak over, around, or through the GDPR.

 

In Both Overcooked And The GDPR, Execution Matters More Than Ingredients

I deliberately avoided playing Overcooked for a long time because so many reviews joked about the fights it causes with friends. Now that I’ve played it, I barely understand why it’s such a divisive experience for so many people. The game is charming and delightfully fun. Players work together in kitchens filled with obstacles (food and tables often move during the round, forcing players to adapt) to prepare ingredients and assemble meals for a hungry restaurant– though the diners are sometimes floating on lava and sometimes… the diners are penguins. The game is about coordinating and communicating as you adapt to changes within the kitchen. Maybe the reason so many people throw rage fits during this game is that they are not good at coordinating an effort and communicating effectively. In any case, the game isn’t about food so much as it’s about kitchens (especially in restaurants). So the game doesn’t focus so much on the ingredients as it teaches the importance of working together in chaotic situations.

People are focusing  a lot on the ingredients of the new EU data privacy law– particularly the consumer protection rights enumerated in it. However, there is very little talk about the bulk of the law, which is aimed at the effort to coordinate the enforcement and monitoring mechanisms that will try to secure those consumer rights. The rights listed in the GDPR are great ingredients– but as Overcooked teaches, it takes both execution and ingredients to make a good meal.

Supervisory Authority: How We Get From Ingredients to Meal

I’ve read a lot of articles about the General Data Protection Regulation, and I notice two common points in almost all of them: 1) the GDPR lists data privacy rights for consumers, 2) this is a positive thing for consumers. However, after reading the entire law, I think this is a gross oversimplification. The most obvious point that should be added is the overwhelming portion of the statute that is devoted to discussing “Supervisory Authorities.” The GDPR may list a lot of consumer rights, but it also specifically details how these rights are to be enforced and maintained. This law prescribes a coordinated effort between controllers, processors, supervisory authorities, and the EU Board.

As described in Article 51(1), a supervisory authority is a public authority “responsible for monitoring the application of this Regulation, in order to protect the fundamental rights and freedoms” that the GDPR lists. Each EU member state is required to “provide for” such an authority. I can only speculate that this would look like a small, specialized government agency or board. This supervisory authority is required to work with the various companies that hold and process data (“controllers” and “processors” in the GDPR) to ensure compliance and security. The supervisory authority is responsible for certifications, codes of conduct, answering and investigating consumer complaints, monitoring data breaches, and other components of a comprehensive data privacy program. The supervisory authority must be constantly and actively ensuring that the rights in the GDPR are made real.

If the supervisory authority can’t coordinate the effort with the controllers and processors, the rights in the GDPR are just delicious ingredients that were forgotten about and burned up on the stove.

Tooth And Tail: Lessons in Planning With Realistic Expectations

Tooth and Tail is simple. It has to be simple because the game designers had a very challenging goal: make a Real-Time Strategy game that is reasonably playable on a console. Real-Time Strategy games are notorious for needing high-speed and complex inputs (professional Starcraft players’ fingers perform over 400 actions per minute) that are simply not possible with the constraints of a console controller (even with all of the buttons they’ve added after Nintendo produced the perfect game controller in 1990). But the designers were smart: they looked realistically at the constraints of the system, and they crafted the game to fit those constraints. The result is a playable, enjoyable game about a Soviet-revolution-inspired rodent uprising on a farm. The designers of in-house corporate programs and databases need to learn the same realism about how their programs will actually be used.

I. Lesson in Project Design: Accept the Probabilities of Disaster so you can Plan for Prevention; Don’t Plan for Immortality and Invulnerability. (#dontbeateen)

In the digital age, there is an increased focus on preventing and eliminating problems/errors. The promised outcomes of flawless perfection are enticing, but the realities of inevitable problems require more effort be put into managing problems and recovering from disasters.

Computers amplify the speed and scale of what people can do. This makes it easier for people to do more, and to do more, faster. This includes making mistakes bigger. Years after a British woman got 15 minutes of fame for accidentally ordering 500kg of chicken wings, Samsung accidentally made a $105 billion ghost.

Samsung Securities Co (a financial services company owned by conglomerate Samsung Group) tried to pay a dividend to their employees, but accidentally gave the employees shares instead. The 1,000 WON dividend became a 1,000 SHARE distribution, creating over $100 billion in new shares. Then some employees immediately sold those shares. There were a lot of safety measures that failed in this story. The program should have been able to calculate that this order totaled over 1 trillion WON, more than 30 times the entire company value. A second human should have checked over the work for simple, obvious errors when there is a potential for this level of damage (anything at a company-wide level for a publicly-traded international corporation would certainly qualify). Several departments should have reviewed the work (compliance, risk, accounting, finance, legal—almost anyone!). Samsung’s own internal compliance should have also prevented the sale of the ghost shares.
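
To see how simple the missing safety measure could have been, here is a rough sketch of a pre-execution sanity check. It is written in Python with made-up numbers, names, and thresholds; it is not a description of Samsung Securities’ actual systems, only an illustration of a macro-level check that routes an implausibly large order to a human.

```python
# Illustrative sketch only: thresholds, names, and numbers are hypothetical,
# not a description of Samsung Securities' actual systems.

MARKET_CAP_KRW = 3_500_000_000_000   # assumed market value of the whole firm, in won
SHARE_PRICE_KRW = 35_000             # assumed price per share, in won


class NeedsHumanReview(Exception):
    """Raised when an order is too large to execute without a second set of eyes."""


def validate_distribution(num_employees: int, amount_per_employee: int, unit: str) -> int:
    """Sanity-check a company-wide distribution before it executes.

    `unit` is either "won" (a cash dividend) or "shares" (a stock distribution).
    """
    total_units = num_employees * amount_per_employee
    total_value_krw = total_units if unit == "won" else total_units * SHARE_PRICE_KRW

    # Macro-level check: does the order exceed a sane fraction of the entire company?
    if total_value_krw > 0.01 * MARKET_CAP_KRW:
        raise NeedsHumanReview(
            f"Order worth {total_value_krw:,} KRW exceeds 1% of market cap; "
            "hold for compliance and risk sign-off."
        )
    return total_units


if __name__ == "__main__":
    # Intended entry: 1,000 won per employee. The typo: 1,000 shares per employee.
    print(validate_distribution(2_000, 1_000, unit="won"))      # passes quietly
    try:
        validate_distribution(2_000, 1_000, unit="shares")      # blocked for review
    except NeedsHumanReview as err:
        print("blocked:", err)
```

The check knows nothing about intent; it only knows that a number this large should never leave the building without a human looking at it first.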

II. A Lesson in Categorical (Or Macro) Errors: Some Mistakes are Annoying, Others Are Fatal. Design to Catch and Prevent, Not Headline and Damage Control. (#dontbeaceleb)

Mistakes happen a lot when computers are involved. Sometimes it’s the user, sometimes it’s a problem in the code. But when a user catches a problem, they can assess the problem in a broader context and determine just how bad the mistake is. A bigger mistake is just more obvious to a human than to a computer.

Many years ago, a friend of mine got on a flight and found someone else sitting in his designated seat. Not wanting to cause trouble, he simply took the empty seat next to his designated one and prepared for the flight.  As the crew prepared for taxi and takeoff, a flight attendant welcomed passengers to their non-stop service to their destination city. Upon hearing this announcement, the woman next to my friend hurriedly gathered her belongings and fled the plane.

She wasn’t in the wrong seat. She was on the wrong plane.

Computer programs don’t intuitively differentiate between the severity of errors: the wrong plane and the wrong seat are just two errors if you’ve never flown and don’t have a broad concept of travel or the context of moving around a world. To a computer, being in the right seat is still pretty good, just like executing a financial order with the correct number is pretty good – even if the number is in the wrong field or tied to the wrong variable. What humans easily grasp, computers are often unlikely to infer. The right detail at a micro-level cannot remedy a catastrophic error at a macro-level.

User errors are inevitable. Programming errors are likely. The more we rely on computers, programs, and apps for the things that allow our lives to function, the more likely it is that our lives will be disrupted by programmer or user errors.

III. The Solution: Make The Programs Flexible, and Make Problems Fixable.

Tooth and Tail’s success is rooted in the realism of its game designers, who sacrificed dreams of a more complex game (that would have been unplayable) for the right game that fit the actual constraints and experience of the player. Designing with the actual user’s experience in mind—with special consideration for what can go wrong—is more important for project designers and programmers every day.

There is an increasing drive to try to use computers to prevent any errors, mistakes, or problems. However, these solutions only make problems worse because they decrease flexibility in and around the program. The solution is to move in the opposite direction: programs need to play less of a role in trying to self-regulate and self-repair, while users and programmers take a larger role in guiding and overseeing the programs.

But wouldn’t this much red-tape bureaucracy be time-consuming? Wouldn’t it be inefficient to invest so much effort in a simple dividend payment? It would take time and resources, yes—but efficiency measurement is relative to scope (among other factors): it certainly appears inefficient if 6 people spend 10 minutes each to look at the same work and find no error. Here, we would conclude that a full hour of productivity was wasted. However, if 6 people took 10 minutes each and found a problem that would have cost 1,000 hours of productivity had it not been discovered, we conclude that we have a net gain of 999 hours of productivity.

Although problems like these cannot be entirely prevented or eliminated, they can be contained and managed. If a person is on the wrong plane, they can quickly determine the outcome of their choice and work on a solution. People will still board the wrong plane from time to time, but they don’t have to end up in the wrong city as a result. Similarly, employees will make occasional typos or errors in their accounting and payroll from time to time, but that doesn’t mean that financial markets have to be rocked as a result.

Computers Are Not Problem Solvers- Computers Are the Problem We Must Solve.

The New Checkout Cashier That Doesn’t Care If You Starve

There is an effort to use a simple AI at the office where I work. Some slick salespeople sold the building two cutting-edge, top-of-the-line automated checkout machines. These machines have a camera that stares at a designated check-out square. People simply select the items they wish to purchase and place them in the designated area. The camera recognizes the items, registers the purchases, and the person then swipes their card and completes the purchase process. However, the camera sometimes does not recognize the item, and there’s no other method for buying the item when this happens. I leave my snack or drink by the incredibly expensive and completely useless machine. Betrayed by technology and the salespeople who sold the devices to the facilities management, I walk back to my desk in anger and disgust.

It’s a simple story, but an increasingly common one: we start to rely on technology, and when it fails, we just hit a wall. It’s not clear to me what advantages the camera offers over a scanner (which is used elsewhere in the same cafeteria for self-checkout). This kind of story will be more common as more people rely on smart homes, smart fridges, smart dishwashers, smart alarm clocks, etc. The “smartness” behind each of these is rudimentary AI- recognizing patterns and sometimes making simple predictions. The hope is that the technology will understand its role and take a more proactive approach to helping humans.

However, the technology doesn’t understand its role, and it really doesn’t care about helping humans. When AI encounters an error, it doesn’t go into “customer service mode” and try to help the human achieve their goal. It doesn’t try to resolve the problem or work around it. It just reports that there was an error. If a retail employee did this, it would be the equivalent of being told “I can’t ring up this item,” and then the employee just walking off to the break room. Most people wouldn’t return to a store that had that level of customer service. People born before 1965 would probably even complain to the manager or local community newspaper.
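
For contrast, here is a hypothetical sketch (in Python; the function names and fallback steps are mine, not the vendor’s) of a checkout flow that treats a recognition failure as the start of a recovery path rather than the end of the transaction.

```python
# Hypothetical sketch: a checkout flow that degrades gracefully when the camera
# can't identify an item. Function names and fallback steps are illustrative only.
from typing import Optional


def recognize_with_camera(item_image: bytes) -> Optional[str]:
    """Stand-in for a vision model; returns a product code, or None when unsure."""
    return None  # simulate the failure case described above


def scan_barcode() -> Optional[str]:
    """Fallback 1: ask the customer to scan the item's barcode."""
    return input("Recognition failed. Scan the barcode (or press Enter to skip): ") or None


def keyed_entry() -> Optional[str]:
    """Fallback 2: let the customer type the code printed on the shelf label."""
    return input("Type the product code from the shelf label (or press Enter to skip): ") or None


def checkout(item_image: bytes) -> str:
    attempts = (lambda: recognize_with_camera(item_image), scan_barcode, keyed_entry)
    for attempt in attempts:
        code = attempt()
        if code:
            return f"Charged for product {code}. Thank you!"
    # Final fallback: don't strand the customer; summon a human.
    return "We couldn't identify the item. An attendant has been notified to help you."


if __name__ == "__main__":
    print(checkout(item_image=b"...pixels..."))
```

The vision model is still allowed to fail; the difference is that the failure hands the customer to the next-best path instead of handing them an error message.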

These problems can be resolved, but the fixes are rarely designed into the technology at release. I’ve had this problem with the checkout machines at work about 7 times over 7 months (I don’t even try to use them more than about once a week), and I am aware of no effort to improve the situation. Because the designers probably never use the machines, there’s a good chance no one in a position to fix the problem is aware of the problem.

More Dangerous Places to Put AI: Cars and Financial Markets

The fundamental problems for AI are annoying and disappointing when they deny us snacks or try to sell us shoes that we already bought. But these problems are amplified from “annoying” to “tragic” and “disappointing” to “catastrophic” when they manifest in vehicles and financial markets. If our AI checkout machine doesn’t care if people can purchase food, what else are we failing to get AI to care about in other applications?

AI is the newest technology, which means it is subject to all of the failures of previous technology (power outages, code errors, hardware breakdowns) and also to new failures of its own (AI-specific problems that sometimes actively resist resolution).

None of this is anti-technology- on the contrary, I think AI is a fantastic development that should be used in many applications. But that doesn’t make it a great (or even acceptable) tool for every application. A warning that hammers should not be used to put screws through windows is not a diatribe against hammers, screws, or windows. It’s just a caution that those things may not mix in a way that will yield optimal results.