Privacy problems are age-old. The extravagant Sun King, Louis XIV of France, popularized the envelope, using it to protect his letters from the prying eyes of chambermaids and shopkeepers. In the colonial era, adhesive envelopes and patterned linings helped hide the contents of commonly intercepted mail. Along with increased regulation, these efforts created more friction against snooping and made privacy more tangible.
Centuries later, invisible, untouchable, and omnipresent information about us now spreads across databases, from internet browsers to doctors’ offices. Just as the envelope was a design solution intended to prevent people from reading each other’s mail, the creators of data systems have turned to design to solve privacy challenges.
Politicians, however, have turned to regulation. Many regulatory proposals have focused on suppressing dark patterns: design tricks that push users to do things they didn’t intend to, like subscribing to newsletters or paying for extra services. In April, senators Mark Warner of Virginia and Deb Fischer of Nebraska introduced a bill to ban many of these features, such as LinkedIn’s “Add connection” button (which harvests email addresses and grows LinkedIn’s member base) and Venmo’s public-by-default settings. In July, another senator, Josh Hawley of Missouri, introduced legislation to ban “addictive” and “deceptive” features like Instagram and Facebook’s infinite scroll and YouTube’s autoplay.
The term “dark patterns” usefully draws attention to maliciously intentioned organizations that hoover up user data, counter to their users’ needs. Most of these regulatory proposals, however, fail to recognize that dark patterns are only a subset of opinionated design, which guides users toward a goal in a particular way. Most good design is opinionated. Whether a particular design is good or bad depends entirely on whether one agrees with the goal, not on the techniques employed. For the sake of users, politicians, researchers, and technology companies alike must remember that design cannot be reduced to binary categorizations of dark or light; there is nuance.
For example, consider designs that require multiple difficult-to-find clicks. They can make it exceedingly hard to cancel a subscription, but they can also better protect people from online threats like phishing and malware: research shows that when malicious sites are easier to reach, people visit them and get hacked in droves. Spam filtering is also opinionated, serving beneficial goals at both the individual and the societal level. Filters protect individuals from scammy content and disincentivize the production of scammy material in the first place. If, however, an app store were to filter out all competing apps, people would cry foul on antitrust grounds. Same techniques, very different results.
There is more nuance here than simply assessing whether a feature pushes a user toward a certain outcome. There is a constant trade-off between user empowerment and user ease. How can companies inform users of important services (signing up for health care, paying student loans) without unintentionally creating barriers by overwhelming them with information? And how might we do this while taking the unquantifiable, context-dependent messiness of culture and societal inequities into account?
By banning features without considering the context in which they are used, we may inadvertently limit the designer’s toolbox for creating privacy-protecting design. Hawley has proposed enforcing “conspicuous pop-ups to a user not less than once every 30 minutes that the user spends on those platforms.” But this will only provoke warning fatigue and mindless clickthroughs. We know that people quickly learn to click through such pop-ups without registering their message, making them useless, annoying, and an excellent way to detract from communication about other matters like security. Rather than focus solely on whether a design pattern is good or evil, we should examine whether outcomes meet user privacy needs in context. We need to measure user success: Does the outcome a user gets match what they wanted to achieve? Does this help people live the way they want?
Even more ambitiously, we should measure people’s satisfaction and happiness with a system in both the short and the long term. Do they have appropriate options? Can they use those options with the proper amount of friction? Sometimes friction is appropriate. Putting a warning in front of a dangerous action (like deleting your email account or transferring a large amount of money) helps users pay appropriate heed to the risks. Making it too easy to factory-reset a phone or delete an account can be worse than making it too hard, leading to users accidentally losing important data like baby pictures and love letters.
There are dedicated engineering roles for people who estimate the likelihood of threats and vulnerabilities, and for people who build reliable systems and reduce the risk of security breaches. Similarly, we need more designers creating privacy-enhancing features in our products and services, which means they need both the know-how and the motivation. Designers outside of industry who audit or expose behavioral vulnerabilities and, more importantly, develop design patterns that support respectful, privacy-protective design can both improve the state of the art and offer nuance to policymakers and regulators seeking to address design through law. There are traces of this work already: Signal, Duck Duck Go, Purdue University, Carnegie Mellon’s CyLab, EFF, Apple, Mozilla, Google, and Simply Secure are just a few places where these roles are prevalent.
Still, there are more bridges to build with policymakers and regulators. We need designers both inside and outside of industry to ask nuanced data privacy questions, create prototypes, measure outcomes, and build frameworks to address these challenges. With this understanding of good practice and stronger ties to Congress, regulators will be well-armed to address bad practice.
Perhaps it would be easier to design a time machine. Like a low-budget Bill and Ted’s Excellent Adventure, we could leap back to 17th-century Versailles, grab Louis XIV off his golden chamberpot, and set him loose on the internet against busybodies, surveillers, and scammers. Until we figure that out, privacy-minded designers need to get resourceful and be prolific. Sometimes that means using so-called dark patterns to design for the common good.