The erosion of privacy in the Internet era

September-October 2009

Latanya Sweeney

There is a pitched battle going on in cyberspace that pits an organized criminal ecosystem of “phishers,” “money-mules,” and “cashiers” against a jumbled array of private “take-down” firms, official domain-name registrars, and Internet service providers. As Tyler Moore, a postdoctoral fellow at Harvard’s Center for Research on Computation and Society, explains in this exclusive Harvard Magazine web extra, the bad guys take over personal computers not for their information, but for their processing power, using “botnets” to stage “fast-flux” attacks that conceal their identity even as they steal the keys to their victims’ bank accounts.

Imagine if you waved to someone and, without your knowledge, a high-resolution camera took a photograph of your hand, capturing your fingerprints. You might be upset. Or—if you were visiting Disneyland, where they already make an image of your fingerprint to save you from waiting in a long line—you might find the novelty of the technology, and the immediate benefits…gratifying. The ambivalence we sometimes feel about new technologies that reveal identifiable personal information balances threats to privacy against incremental advantages. Indisputably, the trends toward miniaturization and mass-market deployment of cameras, recording devices, low-power sensors, and medical monitors of all kinds—when combined with the ability to digitally collect, store, retrieve, classify, and sort very large amounts of information—offer many benefits, but also threaten civil liberties and expectations of personal privacy. George Orwell’s vision in 1984 of a future in which the government has the power to record everything seems not so farfetched. “But even Orwell did not imagine that the sensors would be things that everybody would have,” says McKay professor of computer science Harry Lewis. “He foresaw the government putting the cameras on the lampposts—which we have. He didn’t foresee the 14-year-old girl snapping pictures on the T. Or the fact that flash drives that are given away as party favors could carry crucial data on everybody in the country.”

It’s a Smaller World

Information technology changes the accessibility and presentation of information. Lewis gives talks on the subject of privacy to alumni groups in private homes, and often begins with an example that puts his hosts on the hot seat. He projects a Google Earth view of the house, then shows the website Zillow’s assessment of how much it is worth, how many bedrooms and bathrooms and square feet it has. Then he goes to fundrace.huffingtonpost.com, an interface to the Federal Election Commission’s campaign-contributions database. “Such information has always been a matter of public record,” says Lewis, “but it used to be that you had to go somewhere and give the exact name and address and they would give you back the one piece of data. Now you can just mouse over your neighborhood and little windows pop up and show how much money all the neighbors have given.” In the 02138 zip code, you can see “all the Harvard faculty members who gave more than $1,000 to Barack Obama,” for example. “This seems very invasive,” says Lewis, “but in fact it is the opposite of an invasion of privacy: it is something that our elected representatives decided should be public.”

Technology has forced people to rethink the public/private distinction. “Now it turns out that there is private, public, and really, really public,” Lewis says. “We’ve effectively said that anyone in an Internet café in Nairobi should be able to see how much our house is worth.” Lewis has been blogging about such issues on the website www.bitsbook.com, a companion to Blown to Bits: Your Life, Liberty, and Happiness after the Digital Explosion, the 2008 book of which he is a coauthor. “We think because we have a word for privacy that it is something we can put our arms around,” he says. “But it’s not.”

One of the best attempts to define the full range of  privacy concerns at their intersection with new technologies, “A Taxonomy of Privacy,” appeared in the University of Pennsylvania Law Review in 2006. Its author, Daniel Solove, now a professor at George Washington University Law School, identified 16 privacy harms modulated by new technologies, including: information collection by surveillance; aggregation of information; insecurity of information; and disclosure, exposure, distortion, and increased accessibility of information.

That privacy would be a concern of the legal profession is not surprising. What is surprising is that computer scientists have been in the vanguard of those seeking ways to protect privacy, partly because they are often the first to recognize privacy problems engendered by new technologies and partly because the solutions themselves are sometimes technological. At Harvard, the Center for Research on Computation and Society (CRCS) has become a focal point for such inquiry. CRCS, which brings computer scientists together with colleagues from other schools and academic disciplines, was founded to develop new ideas and technologies for addressing some of society’s most vexing problems, and prides itself on a forward-looking, integrative approach. Privacy and security have been a particular focus during the past few years.

Database linking offers one such area of concern. If you tell Latanya Sweeney, A.L.B. ’95, nothing about yourself except your birth date and five-digit zip code, she’ll tell you your name. If you are under the age of 30 and tell her where you were born, she can correctly predict eight or nine digits of your nine-digit Social Security number. “The main reason privacy is a growing problem is that disk storage is so cheap,” says the visiting professor of computer science, technology, and policy at CRCS. “People can collect data and never throw anything away. Policies on data sharing are not very good, and the result is that data tend to flow around and get linked to other data.” 

Sweeney became interested in privacy issues while earning her doctorate at MIT in the mid 1990s. Massachusetts had recently made “anonymized” medical information available. Such data are invaluable for research, for setting up early infectious-disease detection systems, and other public-health uses. “There was a belief at the time that if you removed explicit identifiers—name, address, and Social Security number—you could just give the data away,” she recalls. That dogma was shattered when Sweeney produced a dramatic proof to the contrary. 

The medical data that had been made available included minimal demographic information: zip code, birth date, and gender, in addition to the diagnosis. So Sweeney went to the Cambridge City Hall and for $25 purchased a voter list on two diskettes: 54,000 names. By linking the demographic information in the voter database to the demographic information in the publicly available medical records, Sweeney found that in most cases she could narrow the demographic data down to a single person, and so restore the patient’s name to the record. She tried this data-linking technique for then-governor William F. Weld ’66, J.D.’70. Only six people in Cambridge shared his birthday. Just three of them were men. And he was the only one who lived in the right zip code. Sweeney had reidentified someone in a putatively anonymous database of private medical information. The system had worked, yet data had leaked. Newspaper coverage of her testimony to the state legislature about what she had discovered ultimately brought a visit from the State Police. “That was my introduction to policy,” she says with a laugh. (She was recently named to the privacy and security seat of the Health Information Technology policy committee in the Obama administration.)

Later, she proved that her results were not unique to Cambridge. Fully 87 percent of the United States population is uniquely identified by date of birth, five-digit zip code, and gender, she says: “So if I know only those three things about you, I can identify you by name 87 percent of the time. Pretty cool.” In fact, Sweeney’s ability to identify anyone is close to 100 percent for most U.S. zip codes—but there are some interesting exceptions. On the west side of Chicago, in the most populated zip code in the United States, there are more than 100,000 residents. Surely that should provide some anonymity. “For younger people, that’s true,” she says, “but if you are older, you really stand out.” Another zip code skews the opposite way: it is on the Stony Brook campus of the State University of New York and includes only dormitories. “Here is a tiny population,” she says, pulling up a graphic on her computer. “Only 5,000 people.” But because they are all college students of about the same age, “they are so homogenous…that I still can’t figure out who is who.”
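To make the linkage concrete, here is a minimal sketch of the technique as described above, assuming two hypothetical CSV files that share the three quasi-identifier fields: a public voter roll and a “de-identified” medical extract. The file and column names are illustrative assumptions, not details from Sweeney’s study.

```python
import csv

# Hypothetical inputs, for illustration only:
#   voters.csv:  name, zip, birth_date, sex          (public voter roll)
#   medical.csv: zip, birth_date, sex, diagnosis     ("de-identified" records)

def load(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def quasi_id(row):
    # The quasi-identifier shared by both datasets.
    return (row["zip"], row["birth_date"], row["sex"])

def reidentify(voters, medical):
    """Attach a name to every 'anonymous' medical record whose
    (zip, birth_date, sex) combination matches exactly one voter."""
    index = {}
    for v in voters:
        index.setdefault(quasi_id(v), []).append(v["name"])
    hits = []
    for record in medical:
        names = index.get(quasi_id(record), [])
        if len(names) == 1:  # unique in the voter roll: re-identified
            hits.append((names[0], record["diagnosis"]))
    return hits

if __name__ == "__main__":
    matches = reidentify(load("voters.csv"), load("medical.csv"))
    print(f"re-identified {len(matches)} records")
```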

A potentially even more serious privacy crisis looms in the way Social Security numbers (SSNs) are assigned, Sweeney says. “We are entering a situation where a huge number of people could tell me just their date of birth and hometown, and I can predict their SSN. Why is this a problem? Because in order to apply for a credit card, the key things I need are your name, your date of birth, your address, and your SSN. Who is the population at risk? Young people on Facebook.”

Facebook asks for your date of birth and hometown, two pieces of information that most young people include on their pages simply because they want their friends to wish them a happy birthday. The problem is that SSNs have never been issued randomly—the first three digits are a state code, the second two are assigned by region within state—and the process is described on a public website of the Social Security Administration. Starting in 1980, when the Internal Revenue Service began requiring that children have SSNs to be claimed as dependents on their parents’ tax returns, the numbers started being assigned at birth. Thus, if you know a person’s date and location of birth, it becomes increasingly simple to predict the SSN.
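A rough back-of-envelope sketch of why this matters: if the leading digits of an SSN are constrained by birth state and by which group numbers were being issued around the birth date, the space an attacker must search shrinks from a billion possibilities to a comparative handful. The counts below are illustrative assumptions, not the Social Security Administration’s actual allocation tables.

```python
# Illustrative arithmetic only; the per-state and per-date counts are
# assumptions, not real Social Security Administration allocation data.

AREA_NUMBERS_PER_STATE = 5       # assumed: 3-digit area numbers used by a given state
GROUPS_ACTIVE_NEAR_BIRTH = 3     # assumed: 2-digit group numbers issued around a birth date
SERIALS_PER_GROUP = 10_000       # the 4-digit serial portion

blind_search_space = 10 ** 9     # all possible 9-digit SSNs
informed_search_space = AREA_NUMBERS_PER_STATE * GROUPS_ACTIVE_NEAR_BIRTH * SERIALS_PER_GROUP

print(f"guessing blind:                {blind_search_space:,} candidates")
print(f"knowing birth state and date:  {informed_search_space:,} candidates")
print(f"reduction:                     ~{blind_search_space // informed_search_space:,}x")
```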

One way or another, says Sweeney, someone is going to exploit this privacy crisis, and it “is either going to become a disaster or we’ll circumvent it.” (Canada and New Zealand, she notes, may have similar problems.) “But there are many easy remedies,” she adds. She has proposed random assignment of SSNs from a central repository. She has also devised solutions for setting up public-health surveillance systems that don’t reveal personal information, but still work as early-warning systems for infectious-disease transmission or bioterror attacks. 

Sweeney believes that technological approaches to privacy problems are often better than legislative solutions, because “you don’t lose the benefits of the technology.” One of her current projects, for example, aims to make sure that technologies like photographic fingerprint capture are implemented in such a way that personal privacy is maintained and individuals’ rights aren’t exposed to abuse.

Scientists have long been excited by the possibilities of using biometric information such as fingerprints, palmprints, or iris scans for positive identification: people could use them to open their cars or their homes. But just how private are fingerprints? With a grant from the National Institutes of Justice, Sweeney and her students have shown that inexpensive digital cameras are already good enough to capture fingertip friction-ridge information at a range of two to three feet, and image resolution and capture speed are improving all the time, even as the cost of the technology keeps dropping. As a result, because it is contactless and very cheap, photographic fingerprint capture could become “the dominant way that prints are captured in a lot of public spaces,” Sweeney explains. That means fingerprint databases are everywhere, and “you don’t have any control over the use of those prints, if somebody wanted to make a false print, or track you. It is like walking around with your Social Security number on your forehead, to an extent. It is a little different because it isn’t linked to your credit report or your credit card”—but it does not require a tremendous leap of imagination to picture a world where credit cards require fingerprint verification. 

Sweeney began working with fingerprints because of concerns that, given the huge numbers of fingerprints in linked databases, there would be false positive matches to the FBI’s crime database. “To the extent that fingerprint matching has been successful, it might be because only criminals are fingerprinted and criminals tend to repeat crimes,” she says. But she was “ridiculed a lot by law enforcement for making those statements,” until the Madrid train bombings in 2004. When a print at the scene was falsely matched by the FBI to a lawyer in California, it became clear that the science of fingerprint matching needed to be studied more deeply. (Palmprints ultimately may have a better chance at providing a unique match.) Furthermore, Sweeney points out, “What if someone advocated replacing Social Security numbers with fingerprints? If something goes horribly wrong with my number, I can get a new one. I can’t really get new fingerprints.” 

Tyler Moore, left, and Allan Friedman

A Legal Privacy Patchwork

As the Facebook/SSN interaction and the ability to capture fingerprints with digital photography illustrate, social changes mediated by technology alter the context in which privacy is protected. But privacy laws have not kept up. The last burst of widespread public concern about privacy came in the 1970s, when minicomputers and mainframes predominated. The government was the main customer, and fear that the government would know everything about its citizens led to the passage of the Privacy Act of 1974. That law set the standard on fair information practices for ensuing legislation in Europe and Canada—but in the United States, the law was limited to circumscribing what information the government could collect; it didn’t apply to commercial enterprises like credit-card companies. No one imagined today’s situation, when you can be tracked by your cell phone, your laptop, or another wireless device. As for ATM transactions and credit-card purchases, Sweeney says “pretty much everything is being recorded on some database somewhere.”

The result is that even the 1974 law has been undermined, says CRCS postdoctoral fellow Allan Friedman, because it “does not address the government buying information from private actors. This is a massive loophole, because private actors are much better at gathering information anyway.”

As new privacy concerns surfaced in American life, legislators responded with a finger-in-the-dike mentality, a “patchwork” response, Friedman continues. “The great example of this is that for almost 10 years, your video-rental records had stronger privacy protection than either your financial or your medical records.” The video-rental records law—passed in 1988 after a newspaper revealed Supreme Court nominee Robert Bork’s rentals—was so narrowly crafted that most people think it doesn’t even apply to Netflix. “Bork didn’t have much to hide,” Friedman says, “but clearly enough people in Congress did.” Medical records were protected under the Health Insurance Portability and Accountability Act in 1996, but financial records weren’t protected until the Gramm-Leach-Bliley Act of 1999. (Student records are protected by the Family Educational Rights and Privacy Act, passed in 1974, while the Children’s Online Privacy Protection Act, passed in 1998, prohibits the online collection of personal information from children under the age of 13.) “Legally,” Friedman concludes, “privacy in this country is a mishmash based on the common-law tradition. We don’t have a blanket regulation to grant us protection,” as Europe does.

The End of Anonymity

Friedman co-taught a new undergraduate course on the subject of privacy last year; it covered topics ranging from public policy and research ethics to wiretapping and database anonymity. “If there is a unified way to think about what digital systems have done to privacy,” he says, it is that they collapse contexts: social, spatial, temporal, and financial. “If I pay my credit-card bill late, I understand the idea that it will affect a future credit-card decision,” he explains. “But I don’t want to live in a society where I have to think, ‘Well, if I use my card in this establishment, that will change my creditworthiness in the future’”—a reference to a recent New York Times Magazine story, “What Does Your Credit-Card Company Know about You?” It reported that a Canadian credit-card issuer had discovered that people who used their card in a particular pool hall in Montreal, for example, had a 47 percent chance of missing four payments during the subsequent 12 months, whereas people who bought birdseed or anti-scuff felt pads for the legs of their furniture almost never missed payments. These disaggregated bits of information turn out to be better predictors of creditworthiness than traditional measures, but their use raises concerns, Friedman points out: “We don’t know how our information is being used to make decisions about us.”

Take the case of someone with a venereal disease who doesn’t want the people in his social network to know. “If I go to the hospital and the nurse who sees me happens to live down the street,” says Friedman, “maybe I don’t want her peeking at my medical records.” That particular threat has always been there in charts, he notes, but problems like this scale up dramatically with online systems. Now the nurse could check the records of everyone on her street during a coffee break. He cites a related example: “Massachusetts has a single State Police records system and there have been tens of thousands of lookups for Tom Brady and other local sports stars.” Unlike celebrities, ordinary people have not had to worry about such invasions of privacy in the past, but now computers can be used to find needles in haystacks—virtually every time. There are nearly seven billion people on the planet: a big number for a human brain, but a small number for a computer to scan. “John Smith is fairly safe,” says Friedman, “unless you know something critical about John Smith, and then all of a sudden, it is easy to find him.”

Digital systems have virtually eliminated a simple privacy that many people take for granted in daily life: the idea that there can be anonymity in a crowd. Computer scientists often refer to a corollary of this idea: security through obscurity. “If you live in a house, you might leave your door unlocked,” Friedman says. “The chances that someone is going to try your front door are fairly small. But I think you have to lock your door if you live in an apartment building. What digital systems do is allow someone to pry and test things very cheaply. And they can test a lot of doors.”

He notes that computers running the first version of Windows XP will be discovered and hacked, on average, in less than four minutes, enabling the criminal to take control of the system without the owner’s consent or knowledge (see online Extra at www.harvardmag.com/extras). Botnets—networks of machines that have been taken over—find vulnerable systems through brute force, by testing every address on the Internet, a sobering measure of the scale of such attacks. (Another measure: the CEO of AT&T recently testified before Congress that Internet crime costs an estimated $1 trillion annually. That is clearly an overestimate, says Friedman, but nobody knows how much Internet crime actually does cost, because there are no disclosure requirements for online losses, even in the banking industry.)
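The arithmetic behind “testing every address on the Internet” helps explain why the four-minute figure is plausible; the botnet size and per-bot probing rate below are illustrative assumptions, not measured values.

```python
# Rough, assumption-based arithmetic for a brute-force sweep of the IPv4 space.

ipv4_addresses = 2 ** 32            # about 4.3 billion possible addresses
bots = 10_000                       # assumed botnet size
probes_per_bot_per_second = 1_000   # assumed modest probing rate per machine

seconds = ipv4_addresses / (bots * probes_per_bot_per_second)
print(f"one full sweep of IPv4 takes about {seconds / 60:.1f} minutes")  # roughly 7 minutes
```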

The durability of data represents another kind of contextual collapse. “Knowing whether something is harmful now versus whether it will be harmful in the future is tricky,” Friedman notes. “A canonical example occurred in the 1930s, when intellectuals in some circles might have been expected to attend socialist gatherings. Twenty years later,” during the McCarthy era, “this was a bad piece of information to have floating around.” Friedman wonders what will happen when young bloggers with outspoken opinions today start running for political office. How will their earlier words be used against them? Will they be allowed to change their minds?

Because personal information is everywhere, inevitably it leaks. Friedman cites the research of former CRCS fellow Simson Garfinkel, now an associate of the School of Engineering and Applied Sciences and associate professor at the Naval Postgraduate School, who reported in 2003 that fully one-third of 1,000 used hard drives he had purchased on eBay and at swap meets still contained sensitive financial information. One that had been part of an ATM machine was loaded with thousands of credit-card numbers, as was another that a supermarket had used to transmit credit-card payments to its bank. Neither had been properly “wiped” of its data. 

Data insecurity is not just accidental, however. Most Web-based data transmitted over wireless networks is sent “in the clear,” unencrypted. Anyone using the same network can intercept and read it. (Google is the only major Web-based e-mail provider that offers encryption, but as of this writing, users must hunt for the option to turn it on.) Harry Lewis smiled at the naiveté of the question when asked what software the laptop used to write this article would need to intercept e-mails or other information at a Starbucks, for example. “Your computer is all set up to do it, and there are a million free ‘packet sniffers’ you can download to make it easy,” he said. And the risk that somebody might detect this illegal surveillance? “Zero, unless somebody looks at your screen and sees what you are doing,” because the packet sniffers passively record airborne data, giving out no signals of their presence.

Civil libertarians are more concerned that the government can easily access electronic communications because the data are centralized, passing through a relatively few servers owned by companies that can legally be forced to allow surveillance without public disclosure. Noting that the conversation tends to end whenever privacy is pitted against national-security interests, Friedman nevertheless asks, “Do we want to live in a society where the government can—regardless of whether they use the power or not—have access to all of our communications? So that they can, if they feel the need, drill down and find us?”

Harry Lewis

Social Changes

Paralleling changes in the way digital systems compromise our security are the evolving social changes in attitudes toward privacy. How much do we really value it? As Lewis points out, “We’ll give away data on our purchasing habits for a 10-cent discount on a bag of potato chips.” But mostly, he says, “people don’t really know what they want. They’ll say one thing and then do something else.”

Noting young people’s willingness to post all kinds of personal information on social networking sites such as Facebook—including photographs that might compromise them later—some commentators have wondered if there has been a generational shift in attitudes towards privacy. In “Say Everything,” a February 2007 New York Magazine article, author Emily Nussbaum noted: 

Younger people…are the only ones for whom it seems to have sunk in that the idea of a truly private life is already an illusion. Every street in New York has a surveillance camera. Each time you swipe your debit card at Duane Reade or use your MetroCard, that transaction is tracked. Your employer owns your e-mails. The NSA owns your phone calls. Your life is being lived in public whether you choose to acknowledge it or not…. So it may be time to consider the possibility that young people who behave as if privacy doesn’t exist are actually the sane people, not the insane ones.

Some bloggers, noting that our hunter-gatherer ancestors would have lived communally, have even suggested that privacy may be an anomalous notion, a relatively recent historical invention that might again disappear. “My response to that,” says Lewis, “is that, yes, it happened during the same few years in history that are associated with the whole development of individual rights, the empowerment of individuals, and the rights of the individual against government authorities. That is a notion that is tied up, I think, with the notion of a right to privacy. So it is worrisome to me.”

Nor is it the case that young people don’t care about privacy, says danah boyd, a fellow at the Law School’s Berkman Center for Internet and Society who studies how youth engage with social media. “Young people care deeply about privacy, but it is a question of control, not what information gets out there,” she explains. “For a lot of teenagers, the home has never been a private place. They feel they have more control on a service like Facebook or MySpace than they do at home.”

She calls this not a generational difference, but a life-stage difference. Adults, boyd says, understand context in terms of physical space. They may go out to a pub on Friday night with friends, but not with their boss. For young people, online contexts come just as naturally, and many, she has found, actually share their social network passwords with other friends as a token of trust or intimacy (hence the analogy to a safe space like a pub).

Teens do realize that someone other than their friends may access this personal information. “They understand the collapse of social context, but may decide that status among their peers is more important,” she notes. “But do they understand that things like birth dates can be used by entities beyond their visibility? No. Most of them are barely aware that they have a Social Security number. But should they be the ones trying to figure this out, or do we really need to rethink our privacy structures around our identity information and our financial information?

“My guess,” boyd continues, “is that the kinds of systems we have set up—which assume a certain kind of obscurity of basic data—won’t hold going into the future. We need to rethink how we do identity assessment for credit cards and bank accounts and all of that, and then to try to convince people not to give out their birth dates.”

Friedman agrees that financial information needs to be handled differently. Why, he asks, is a credit record always open for a new line of credit by default, enabling fraud to happen at any time? “Is it because the company that maintains the record gets a fee for each credit check?” (Security freezes on a person’s credit report are put in place only ex post facto in cases of identity theft at the request of the victim.) Friedman believes that the best way to fight widespread distribution and dissemination of personal information is with better transparency, because that affords individuals and policymakers a better understanding of the risks involved.

“You don’t necessarily want to massively restrict information-sharing, because a lot of it is voluntary and beneficial,” he explains. Privacy, in the simplest of terms, is about context of information sharing, rather than control of information sharing: “It is about allowing me to determine what kind of environment I am in, allowing me to feel confident in expressing myself in that domain, without having it spill over into another. That encompasses everything from giving my credit-card number to a company—and expecting them to use it securely and for the intended purpose only—to Facebook and people learning not to put drunk pictures of themselves online.” Some of this will have to be done through user empowerment—giving users better tools—and some through regulation. “We do need to revisit the Privacy Act of 1974,” he says. “We do need to have more information about who has information about us and who is buying that information, even if we don’t have control.”

There is always the possibility that we will decide as a society not to support privacy. Harry Lewis believes that would be society’s loss. “I think ultimately what you lose is the development of individual identity,” he says. “The more we are constantly exposed from a very young age to peer and other social pressure for our slightly aberrant behaviors, the more we tend to force ourselves, or have our parents force us, into social conformity. So the loss of privacy is kind of a regressive force. Lots of social progress has been made because a few people tried things under circumstances where they could control who knew about them, and then those communities expanded, and those new things became generally accepted, often not without a fight. With the loss of privacy, there is some threat to that spirit of human progress through social experimentation.”

Jonathan Shaw ’89 is managing editor of this magazine.

Privacy in an AI Era: How Do We Protect Our Personal Information?

A new report analyzes the risks of AI and offers potential solutions. 

The AI boom, including the advent of large language models (LLMs) and their associated chatbots, poses new challenges for privacy. Is our personal information part of a model’s training data? Are our prompts being shared with law enforcement? Will chatbots connect diverse threads from our online lives and output them to anyone? 

To better understand these threats and to wrestle with potential solutions, Jennifer King, privacy and data policy fellow at the Stanford University Institute for Human-Centered Artificial Intelligence (Stanford HAI), and Caroline Meinhardt, Stanford HAI’s policy research manager, published a white paper titled “Rethinking Privacy in the AI Era: Policy Provocations for a Data-Centric World.” Here, King describes their main findings.

What kinds of risks do we face, as our data is being bought and sold and used by AI systems?

First, AI systems pose many of the same privacy risks we’ve been facing during the past decades of internet commercialization and mostly unrestrained data collection. The difference is the scale: AI systems are so data-hungry and opaque that we have even less control over what information about us is collected, what it is used for, and how we might correct or remove such personal information. Today, it is basically impossible for people using online products or services to escape systematic digital surveillance across most facets of life—and AI may make matters even worse.

Second, there's the risk of others using our data and AI tools for anti-social purposes. For example, generative AI tools trained with data scraped from the internet may memorize personal information about people, as well as relational data about their family and friends. This data helps enable spear-phishing—the deliberate targeting of people for purposes of identity theft or fraud. Already, bad actors are using AI voice cloning to impersonate people and then extort them over good old-fashioned phones.

Third, we’re seeing data such as a resume or photograph that we’ve shared or posted for one purpose being repurposed for training AI systems, often without our knowledge or consent and sometimes with direct civil rights implications.

Predictive systems are being used to help screen candidates and help employers decide whom to interview for open jobs. However, there have been instances where the AI used to help with selecting candidates has been biased. For example, Amazon famously built its own AI hiring screening tool only to discover that it was biased against female candidates.

Another example involves the use of facial recognition to identify and apprehend people who have committed crimes. It’s easy to think, “It's good to have a tool like facial recognition because it'll catch the bad guys.” But instead, because of the bias inherent in the data used to train existing facial recognition algorithms, we're seeing numerous false arrests of black men. The algorithms simply misidentify them.

Have we become so numb to the idea that companies are taking all our data that it’s now too late to do anything?

I’m an optimist. There's certainly a lot of data that's been collected about all of us, but that doesn't mean we can't still create a much stronger regulatory system that requires users to opt in to their data being collected or forces companies to delete data when it’s being misused.

Currently, practically any place you go online, your movement across different websites is being tracked. And if you're using a mobile app and you have GPS enabled on your phone, your location data is being collected. This default is the result of the industry convincing the Federal Trade Commission about 20 years ago that if we switched from opt-out to opt-in data collection, we'd never have a commercial internet. At this point I think we've established the utility of the internet. I don't think companies need that excuse for collecting people’s data.  

In my view, when I’m browsing online, my data should not be collected unless or until I make some affirmative choice, like signing up for the service or creating an account. And even then, my data shouldn’t be considered public unless I’ve agreed to share it.

Ten years ago, most people thought about data privacy in terms of online shopping. They thought, “I don't know if I care if these companies know what I buy and what I'm looking for, because sometimes it's helpful.” But now we've seen companies shift to this ubiquitous data collection that trains AI systems, which can have major impact across society, especially our civil rights. I don’t think it’s too late to roll things back. These default rules and practices aren’t etched in stone.

As a general approach to data privacy protection, why isn’t it enough to pass data minimization and purpose limitation regulations that say companies can only gather the data they need for a limited purpose? 

These types of rules are critical and necessary. They play a key role in the European privacy law [the GDPR] and in the California equivalent [the CCPA] and are an important part of the federally proposed privacy law [the ADPPA]. But I’m concerned about the way regulators end up operationalizing these rules.

For example, how does a regulator make the assessment that a company has collected too much information for the purpose for which it wants to use it? In some instances, it could be clear that a company completely overreached by collecting data it didn’t need. But it’s a more difficult question when companies (think Amazon or Google) can realistically say that they do a lot of different things, meaning they can justify collecting a lot of data. It's not an insurmountable problem with these rules, but it’s a real issue.

Your white paper identifies several possible solutions to the data privacy problems posed by AI. First, you propose a shift from opt-out to opt-in data sharing, which could be made more seamless using software. How would that work?

I would argue that the default should be that our data is not collected unless we affirmatively ask for it to be collected. There have been a few movements and tech solutions in that direction.

One is Apple’s App Tracking Transparency (Apple ATT), which Apple launched in 2021 to address concerns about how much user data was being collected by third-party apps. Now, when iPhone users download a new app, Apple’s iOS system asks if they want to allow the app to track them across other apps and websites. Marketing industry reports estimate that 80% to 90% of people presented with that choice say no.  

Another option is for web browsers to have a built-in opt-out signal, such as Global Privacy Control, that prevents the placement of cookies by third parties or the sale of individuals’ data without the need to check a box. Currently, the California Consumer Privacy Act (CCPA) provides that browsers may include this capability, but it has not been mandatory. And while some browsers (Firefox and Brave, for example) have a built-in opt-out signal, the big browser companies (such as Microsoft Edge, Apple’s Safari, and Google Chrome) do not. Interestingly though, a California legislator recently proposed a change to the CCPA that would require all browser makers to respect third-party opt-out signals. This is exactly what we need so that data is not collected by every actor possible and every place you go.
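As a minimal sketch of what honoring such a signal could look like on the server side, the snippet below assumes the opt-out preference arrives as the Sec-GPC: 1 request header described in the Global Privacy Control proposal; the handler and its return values are hypothetical stand-ins, not any particular site’s implementation.

```python
# Minimal sketch: treating a Global Privacy Control signal as a standing
# do-not-sell/do-not-share opt-out. Assumes the browser sends "Sec-GPC: 1";
# handle_request and its return strings are hypothetical placeholders.

def gpc_opt_out(headers: dict) -> bool:
    """Return True if the request carries a GPC opt-out signal."""
    return headers.get("Sec-GPC", "").strip() == "1"

def handle_request(headers: dict) -> str:
    if gpc_opt_out(headers):
        # No third-party cookies, no sale or sharing of this visitor's data.
        return "serve page without third-party trackers"
    return "serve page with default, consent-dependent behavior"

if __name__ == "__main__":
    print(handle_request({"Sec-GPC": "1"}))
    print(handle_request({}))
```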

You also propose taking a supply chain approach to data privacy. What do you envision that would mean?

When I’m talking about the data supply chain, I’m talking about the ways that AI systems raise issues on the data input side and the data output side. On the input side I’m referring to the training data piece, which is where we worry about whether an individual’s personal information is being scraped from the internet and included in a system’s training data. In turn, the presence of our personal information in the training set potentially has an influence on the output side. For example, a generative AI system might have memorized my personally identifiable information and provide it as output. Or, a generative AI system could reveal something about me that is based on an inference from multiple data points that aren’t otherwise known or connected and are unrelated to any personally identifiable information in the training dataset.

At present, we depend on the AI companies to remove personal information from their training data or to set guardrails that prevent personal information from coming out on the output side. And that’s not really an acceptable situation, because we are dependent on them choosing to do the right thing.
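As a crude illustration of what “removing personal information from training data” involves on the input side, here is a minimal redaction pass applied to text before it enters a corpus. The regular expressions are simplified assumptions; real pipelines use far more sophisticated PII detection and still miss things, which is part of King’s point about having to depend on companies to do this voluntarily.

```python
import re

# Simplified, assumption-based patterns for a few obvious U.S.-style identifiers.
# Real PII detection is far more involved; this only shows the shape of the step.

PATTERNS = {
    "[SSN]":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace matched identifiers with placeholders before the text is used for training."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

if __name__ == "__main__":
    sample = "Reach Jane at jane.doe@example.com or 617-555-0123; SSN 123-45-6789."
    print(scrub(sample))
    # -> Reach Jane at [EMAIL] or [PHONE]; SSN [SSN].
```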

Regulating AI requires paying specific attention to the entire supply chain for the data piece—not just to protect our privacy, but also to avoid bias and improve AI models. Unfortunately, some of the discussions that we've had about regulating AI in the United States haven't been dealing with the data at all. We’ve been focused on transparency requirements around the purpose of companies’ algorithmic systems. Even the AI Act in Europe, which already has the GDPR as a privacy baseline, didn’t take a broad look at the data ecosystem that feeds AI. It was only mentioned in the context of high-risk AI systems. So, this is an area where there is a lot of work to do if we’re going to have any sense that our personal information is protected from inclusion in AI systems, including very large systems such as foundation models.  

You note in your report that the focus on individual privacy rights is too limited and we need to consider collective solutions. What do you mean?

If we want to give people more control over their data in a context where huge amounts of data are being generated and collected, it’s clear to me that doubling down on individual rights isn't sufficient.

In California where we have a data privacy law, most of us don’t even know what rights we do have, let alone the time to figure out how to exercise them. And if we did want to exercise them, we’d have to make individual requests to every company we’ve interacted with to demand that they not sell our personal information—requests that we’d have to make every two years, given that these “do not sell” opt-outs are not permanent.  

This all points toward the need for a collective solution so that the public has enough leverage to negotiate for their data rights at scale. To me, the concept of a data intermediary makes the most sense. It involves delegating the negotiating power over your data rights to a collective that does the work for you, which gives consumers more leverage.

We're already seeing data intermediaries take shape in some business-to-business contexts and they can take various forms, such as a data steward, trust, cooperative, collaborative, or commons. Implementing these in the consumer space would be more challenging, but I don't think it's impossible by any means.

Read the full white paper, “Rethinking Privacy in the AI Era: Policy Provocations for a Data-Centric World.”

Why protecting privacy is a losing game today—and how to change the game

Cameron F. Kerry, Ann R. and Andrew H. Tisch Distinguished Visiting Fellow, Governance Studies, Center for Technology Innovation (@cam_kerry)

July 12, 2018

Recent congressional hearings and data breaches have prompted more legislators and business leaders to say the time for broad federal privacy legislation has come. Cameron Kerry presents the case for adoption of a baseline framework to protect consumer privacy in the U.S.

Kerry explores a growing gap between existing laws and an information Big Bang that is eroding trust. He suggests that recent privacy bills have not been ambitious enough, and points to the Obama administration’s Consumer Privacy Bill of Rights as a blueprint for future legislation. Kerry considers ways to improve that proposal, including an overarching “golden rule of privacy” to ensure people can trust that data about them is handled in ways consistent with their interests and the circumstances in which it was collected.

Table of Contents

  • Introduction: Game change?
  • How current law is falling behind
  • Shaping laws capable of keeping up

Introduction: Game change?

There is a classic episode of the show “I Love Lucy” in which Lucy goes to work wrapping candies on an assembly line. The line keeps speeding up with the candies coming closer together and, as they keep getting farther and farther behind, Lucy and her sidekick Ethel scramble harder and harder to keep up. “I think we’re fighting a losing game,” Lucy says.

This is where we are with data privacy in America today. More and more data about each of us is being generated faster and faster from more and more devices, and we can’t keep up. It’s a losing game both for individuals and for our legal system. If we don’t change the rules of the game soon, it will turn into a losing game for our economy and society.

The Cambridge Analytica drama has been the latest in a series of eruptions that have caught people’s attention in ways that a steady stream of data breaches and misuses of data have not.

The first of these shocks was the Snowden revelations in 2013. These made for long-running and headline-grabbing stories that shined light on the amount of information about us that can end up in unexpected places. The disclosures also raised awareness of how much can be learned from such data (“we kill people based on metadata,” former NSA and CIA Director Michael Hayden said).

The aftershocks were felt not only by the government, but also by American companies, especially those whose names and logos showed up in Snowden news stories. They faced suspicion from customers at home and market resistance from customers overseas. To rebuild trust, they pushed to disclose more about the volume of surveillance demands and for changes in surveillance laws. Apple, Microsoft, and Yahoo all engaged in public legal battles with the U.S. government.

Then came last year’s Equifax breach that compromised identity information of almost 146 million Americans. It was not bigger than some of the lengthy roster of data breaches that preceded it, but it hit harder because it rippled through the financial system and affected individual consumers who never did business with Equifax directly but nevertheless had to deal with the impact of its credit scores on economic life. For these people, the breach was another demonstration of how much important data about them moves around without their control, but with an impact on their lives.

Now the Cambridge Analytica stories have unleashed even more intense public attention, complete with live network TV cut-ins to Mark Zuckerberg’s congressional testimony. Not only were many of the people whose data was collected surprised that a company they never heard of got so much personal information, but the Cambridge Analytica story touches on all the controversies roiling around the role of social media in the cataclysm of the 2016 presidential election. Facebook estimates that Cambridge Analytica was able to leverage its “academic” research into data on some 87 million Americans (while before the 2016 election Cambridge Analytica’s CEO Alexander Nix boasted of having profiles with 5,000 data points on 220 million Americans). With over two billion Facebook users worldwide, a lot of people have a stake in this issue and, like the Snowden stories, it is getting intense attention around the globe, as demonstrated by Mark Zuckerberg taking his legislative testimony on the road to the European Parliament.

The Snowden stories forced substantive changes to surveillance with enactment of U.S. legislation curtailing telephone metadata collection and increased transparency and safeguards in intelligence collection. Will all the hearings and public attention on Equifax and Cambridge Analytica bring analogous changes to the commercial sector in America?

I certainly hope so. I led the Obama administration task force that developed the “Consumer Privacy Bill of Rights” issued by the White House in 2012 with support from both businesses and privacy advocates, and then drafted legislation to put this bill of rights into law. The legislative proposal issued after I left the government did not get much traction, so this initiative remains unfinished business.

The Cambridge Analytica stories have spawned fresh calls for some federal privacy legislation from members of Congress in both parties, editorial boards, and commentators. With their marquee Zuckerberg hearings behind them, senators and congressmen are moving on to think about what to do next. Some have already introduced bills and others are thinking about what privacy proposals might look like. The op-eds and Twitter threads on what to do have flowed. Various groups in Washington have been convening to develop proposals for legislation.

This time, proposals may land on more fertile ground. The chair of the Senate Commerce Committee, John Thune (R-SD) said “many of my colleagues on both sides of the aisle have been willing to defer to tech companies’ efforts to regulate themselves, but this may be changing.” A number of companies have been increasingly open to a discussion of a basic federal privacy law. Most notably, Zuckerberg told CNN “I’m not sure we shouldn’t be regulated,” and Apple’s Tim Cook expressed his emphatic belief that self-regulation is no longer viable.

This is not just about damage control or accommodation to “techlash” and consumer frustration. For a while now, events have been changing the way that business interests view the prospect of federal privacy legislation. An increasing spread of state legislation on net neutrality, drones, educational technology, license plate readers, and other subjects and, especially, broad new legislation in California pre-empting a ballot initiative have made the possibility of a single set of federal rules across all 50 states look attractive. For multinational companies that have spent two years gearing up for compliance with the new data protection law that has now taken effect in the EU, dealing with a comprehensive U.S. law no longer looks as daunting. And more companies are seeing value in a common baseline that can provide people with reassurance about how their data is handled and protected against outliers and outlaws.

This change in the corporate sector opens the possibility that these interests can converge with those of privacy advocates in comprehensive federal legislation that provides effective protections for consumers. Trade-offs to get consistent federal rules that preempt some strong state laws and remedies will be difficult, but with a strong enough federal baseline, action can be achievable.

How current law is falling behind

Snowden, Equifax, and Cambridge Analytica provide three conspicuous reasons to take action. There are really quintillions of reasons. That’s how fast IBM estimates we are generating digital information, quintillions of bytes of data every day—a quintillion is a 1 followed by 18 zeros. This explosion is generated by the doubling of computer processing power every 18-24 months that has driven growth in information technology throughout the computer age, now compounded by the billions of devices that collect and transmit data, storage devices and data centers that make it cheaper and easier to keep the data from these devices, greater bandwidth to move that data faster, and more powerful and sophisticated software to extract information from this mass of data. All this is both enabled and magnified by the singularity of network effects—the value that is added by being connected to others in a network—in ways we are still learning.

This information Big Bang is doubling the volume of digital information in the world every two years. The data explosion that has put privacy and security in the spotlight will accelerate. Futurists and business forecasters debate just how many tens of billions of devices will be connected in the coming decades, but the order of magnitude is unmistakable—and staggering in its impact on the quantity and speed of bits of information moving around the globe. The pace of change is dizzying, and it will get even faster—far more dizzying than Lucy’s assembly line.

Most recent proposals for privacy legislation aim at slices of the issues this explosion presents. The Equifax breach produced legislation aimed at data brokers. Responses to the role of Facebook and Twitter in public debate have focused on political ad disclosure, what to do about bots, or limits to online tracking for ads. Most state legislation has targeted specific topics like use of data from ed-tech products, access to social media accounts by employers, and privacy protections from drones and license-plate readers. Facebook’s simplification and expansion of its privacy controls and recent federal privacy bills in reaction to events focus on increasing transparency and consumer choice. So does the newly enacted California Consumer Privacy Act.

Measures like these double down on the existing American privacy regime. The trouble is, this system cannot keep pace with the explosion of digital information, and the pervasiveness of this information has undermined key premises of these laws in ways that are increasingly glaring. Our current laws were designed to address collection and storage of structured data by government, business, and other organizations and are busting at the seams in a world where we are all connected and constantly sharing. It is time for a more comprehensive and ambitious approach. We need to think bigger, or we will continue to play a losing game.

Our existing laws developed as a series of responses to specific concerns, a checkerboard of federal and state laws, common law jurisprudence, and public and private enforcement that has built up over more than a century. It began with the famous Harvard Law Review article by (later) Justice Louis Brandeis and his law partner Samuel Warren in 1890 that provided a foundation for case law and state statutes for much of the 20th Century, much of which addressed the impact of mass media on individuals who wanted, as Warren and Brandeis put it, “to be let alone.” The advent of mainframe computers saw the first data privacy laws adopted in 1974 to address the power of information in the hands of big institutions like banks and government: the federal Fair Credit Reporting Act that gives us access to information on credit reports and the Privacy Act that governs federal agencies. Today, our checkerboard of privacy and data security laws covers data that concerns people the most. These include health data, genetic information, student records and information pertaining to children in general, financial information, and electronic communications (with differing rules for telecommunications carriers, cable providers, and emails).

Outside of these specific sectors is not a completely lawless zone. With Alabama adopting a law last April, all 50 states now have laws requiring notification of data breaches (with variations in who has to be notified, how quickly, and in what circumstances). By making organizations focus on personal data and how they protect it, reinforced by exposure to public and private enforcement litigation, these laws have had a significant impact on privacy and security practices. In addition, since 2003, the Federal Trade Commission—under both Republican and Democratic majorities—has used its enforcement authority to regulate unfair and deceptive commercial practices and to police unreasonable privacy and information security practices. This enforcement, mirrored by many state attorneys general, has relied primarily on deceptiveness, based on failures to live up to privacy policies and other privacy promises.

These levers of enforcement in specific cases, as well as public exposure, can be powerful tools to protect privacy. But, in a world of technology that operates on a massive scale moving fast and doing things because one can, reacting to particular abuses after-the-fact does not provide enough guardrails.

As the data universe keeps expanding, more and more of it falls outside the various specific laws on the books. This includes most of the data we generate through such widespread uses as web searches, social media, e-commerce, and smartphone apps. The changes come faster than legislation or regulatory rules can adapt, and they erase the sectoral boundaries that have defined our privacy laws. Take my smart watch, for one example: data it generates about my heart rate and activity is covered by the Health Insurance Portability and Accountability Act (HIPAA) if it is shared with my doctor, but not when it goes to fitness apps like Strava (where I can compare my performance with my peers). Either way, it is the same data, just as sensitive to me and just as much of a risk in the wrong hands.

It makes little sense that protection of data should depend entirely on who happens to hold it. This arbitrariness will spread as more and more connected devices are embedded in everything from clothing to cars to home appliances to street furniture. Add to that striking changes in patterns of business integration and innovation—traditional telephone providers like Verizon and AT&T are entering entertainment, while startups launch into the provinces of financial institutions like currency trading and credit, and all kinds of enterprises compete for space in the autonomous vehicle ecosystem—and the sectoral boundaries that have defined U.S. privacy protection cease to make any sense.

Putting so much data into so many hands also is changing the nature of information that is protected as private. To most people, “personal information” means information like social security numbers, account numbers, and other information that is unique to them. U.S. privacy laws reflect this conception by aiming at “personally identifiable information,” but data scientists have repeatedly demonstrated that this focus can be too narrow. The aggregation and correlation of data from various sources make it increasingly possible to link supposedly anonymous information to specific individuals and to infer characteristics and information about them. The result is that today, a widening range of data has the potential to be personal information, i.e. to identify us uniquely. Few laws or regulations address this new reality.
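To make that concrete, here is a minimal sketch in Python (the records and field names are entirely invented for illustration) of the kind of linkage data scientists have demonstrated: a dataset stripped of names can still be re-identified by joining it with a public record on ordinary quasi-identifiers such as ZIP code, birth date, and sex.

```python
# Hypothetical illustration: re-identifying "anonymous" records by linking
# them to a public dataset on shared quasi-identifiers.
anonymized_health_records = [
    {"zip": "02138", "birth_date": "1970-07-31", "sex": "F", "diagnosis": "asthma"},
]
public_voter_roll = [
    {"name": "Jane Doe", "zip": "02138", "birth_date": "1970-07-31", "sex": "F"},
    {"name": "John Roe", "zip": "02139", "birth_date": "1985-01-02", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_date", "sex")

def reidentify(anonymous_rows, identified_rows):
    """Pair each 'anonymous' row with any named row sharing all quasi-identifiers."""
    matches = []
    for anon in anonymous_rows:
        key = tuple(anon[field] for field in QUASI_IDENTIFIERS)
        for person in identified_rows:
            if tuple(person[field] for field in QUASI_IDENTIFIERS) == key:
                matches.append((person["name"], anon["diagnosis"]))
    return matches

print(reidentify(anonymized_health_records, public_voter_roll))
# [('Jane Doe', 'asthma')] -- the record without a name now names a person.
```

Nothing in the “anonymous” dataset is a name or an account number, yet the combination of unremarkable attributes is enough to single someone out.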

Nowadays, almost every aspect of our lives is in the hands of some third party somewhere. This challenges judgments about “expectations of privacy” that have been a major premise for defining the scope of privacy protection. These judgments present binary choices: if private information is somehow public or in the hands of a third party, people often are deemed to have no expectation of privacy. This is particularly true when it comes to government access to information—emails, for example, are nominally less protected under our laws once they have been stored 180 days or more, and articles and activities in plain sight are considered categorically available to government authorities. But the concept also gets applied to commercial data in terms and conditions of service and to scraping of information on public websites, for two examples.

As more devices and sensors are deployed in the environments we pass through as we carry on our days, privacy will become impossible if we are deemed to have surrendered it simply by going about the world or sharing information with any other person. Plenty of people have said privacy is dead, starting most famously with Sun Microsystems’ Scott McNealy back in the 20th century (“you have zero privacy … get over it”) and echoed by a chorus of despairing writers since then. Without normative rules to provide a more constant anchor than shifting expectations, true privacy actually could be dead or dying.

The Supreme Court in its recent Carpenter decision recognized how constant streams of data about us change the ways that privacy should be protected. In holding that law-enforcement acquisition of cell phone location records requires a warrant, the Court considered the “detailed, encyclopedic, and effortlessly compiled” information available from cell service location records and “the seismic shifts in digital technology” that made these records available, and concluded that people do not necessarily surrender their privacy interests simply because a third party collects the data they generate or because they engage in behavior that can be observed publicly. While there was disagreement among the justices as to the sources of privacy norms, two of the dissenters, Justices Alito and Gorsuch, pointed to “expectations of privacy” as vulnerable because they can erode or be defined away.

How this landmark privacy decision affects a wide variety of digital evidence will play out in criminal cases and not in the commercial sector. Nonetheless, the opinions in the case point to a need for a broader set of norms to protect privacy in settings that have been thought to make information public. Privacy can endure, but it needs a more enduring foundation.

Our existing laws also rely heavily on notice and consent—the privacy notices and privacy policies that we encounter online or receive from credit card companies and medical providers, and the boxes we check or forms we sign. These declarations are what provide the basis for the FTC to find deceptive acts and practices when companies fail to do what they said. This system follows the model of informed consent in medical care and human subject research, where consent is often asked for in person, and was imported into internet privacy in the 1990s. The notion of U.S. policy then was to foster growth of the internet by avoiding regulation and promoting a “market resolution” in which individuals would be informed about what data is collected and how it would be processed, and could make choices on this basis.

Maybe informed consent was practical two decades ago, but it is a fantasy today. In a constant stream of online interactions, especially on the small screens that now account for the majority of usage, it is unrealistic to read through privacy policies. And people simply don’t.

It is not simply that particular privacy policies “suck,” as Senator John Kennedy (R-LA) put it in the Facebook hearings. Zeynep Tufekci is right that these disclosures are obscure and complex. Some forms of notice are necessary and attention to user experience can help, but the problem will persist no matter how well designed disclosures are. I can attest that writing a simple privacy policy is challenging, because these documents are legally enforceable and need to explain a variety of data uses; you can be simple and say too little or you can be complete but too complex. These notices have some useful function as a statement of policy against which regulators, journalists, privacy advocates, and even companies themselves can measure performance, but they are functionally useless for most people, and we rely on them to do too much.

At the end of the day, it is simply too much to read through even the plainest English privacy notice, and being familiar with the terms and conditions or privacy settings for all the services we use is out of the question. The recent flood of emails about privacy policies and consent forms that came with the EU General Data Protection Regulation has offered new controls over what data is collected or communicated, but how much has it really added to people’s understanding? Wall Street Journal reporter Joanna Stern attempted to analyze all the notices she received (enough paper printed out to stretch more than the length of a football field), but resorted to scanning for a few specific issues. In today’s world of constant connections, solutions that focus on increasing transparency and consumer choice are an incomplete response to current privacy challenges.

Moreover, individual choice becomes utterly meaningless as increasingly automated data collection leaves no opportunity for any real notice, much less individual consent. We don’t get asked for consent to the terms of surveillance cameras on the streets or “beacons” in stores that pick up cell phone identifiers, and house guests aren’t generally asked if they agree to homeowners’ smart speakers picking up their speech. At best, a sign may be posted somewhere announcing that these devices are in place. As devices and sensors increasingly are deployed throughout the environments we pass through, some after-the-fact access and control can play a role, but old-fashioned notice and choice become impossible.

Ultimately, the familiar approaches ask too much of individual consumers. As the President’s Council of Advisors on Science and Technology found in a 2014 report on big data, “the conceptual problem with notice and choice is that it fundamentally places the burden of privacy protection on the individual,” resulting in an unequal bargain, “a kind of market failure.”

This is an impossible burden that creates an enormous disparity of information between the individual and the companies they deal with. As Frank Pasquale ardently dissects in The Black Box Society, we know very little about how the businesses that collect our data operate. There is no practical way even a reasonably sophisticated person can get their arms around the data that they generate and what that data says about them. After all, making sense of the expanding data universe is what data scientists do. Post-docs and Ph.D.s at MIT (where I am a visiting scholar at the Media Lab) as well as tens of thousands of data researchers like them in academia and business are constantly discovering new information that can be learned from data about people and new ways that businesses can—or do—use that information. How can the rest of us who are far from being data scientists hope to keep up?

As a result, the businesses that use the data know far more than we do about what our data consists of and what their algorithms say about us. Add this vast gulf in knowledge and power to the absence of any real give-and-take in our constant exchanges of information, and you have businesses able by and large to set the terms on which they collect and share this data.

This is not a “market resolution” that works. The Pew Research Center has tracked online trust and attitudes toward the internet and companies online. When Pew probed with surveys and focus groups in 2016, it found that “while many Americans are willing to share personal information in exchange for tangible benefits, they are often cautious about disclosing their information and frequently unhappy about what happens to that information once companies have collected it.” Many people are “uncertain, resigned, and annoyed.” There is a growing body of survey research in the same vein. Uncertainty, resignation, and annoyance hardly make a recipe for a healthy and sustainable marketplace, for trusted brands, or for consent of the governed.

Consider the example of the journalist Julia Angwin. She spent a year trying to live without leaving digital traces, which she described in her book “Dragnet Nation.” Among other things, she avoided paying by credit card and established a fake identity to get a card for when she couldn’t avoid using one; searched hard to find encrypted cloud services for most email; adopted burner phones that she turned off when not in use and used very little; and opted for paid subscription services in place of ad-supported ones. More than a practical guide to protecting one’s data privacy, her year of living anonymously was an extended piece of performance art demonstrating how much digital surveillance reveals about our lives and how hard it is to avoid. The average person should not have to go to such obsessive lengths to ensure that their identities or other information they want to keep private stays private. We need a fair game.

Shaping laws capable of keeping up

As policymakers consider how the rules might change, the Consumer Privacy Bill of Rights we developed in the Obama administration has taken on new life as a model. The Los Angeles Times, The Economist, and The New York Times all pointed to this bill of rights in urging Congress to act on comprehensive privacy legislation, and the last of these said “there is no need to start from scratch …” Our 2012 proposal needs adapting to changes in technology and politics, but it provides a starting point for today’s policy discussion because of the wide input it got and the widely accepted principles it drew on.

The bill of rights articulated seven basic principles that should be legally enforceable by the Federal Trade Commission: individual control, transparency, respect for the context in which the data was obtained, access and accuracy, focused collection, security, and accountability. These broad principles are rooted in longstanding and globally accepted “fair information practice principles.” To reflect today’s world of billions of devices interconnected through networks everywhere, though, they are intended to move away from static privacy notices and consent forms to a more dynamic framework, less focused on collection and processing and more on how people are protected in the ways their data is handled. Not a checklist, but a toolbox. This principles-based approach was meant to be interpreted and fleshed out through codes of conduct and case-by-case FTC enforcement—iterative evolution, much the way both common law and information technology developed.

The other comprehensive model that is getting attention is the EU’s newly effective General Data Protection Regulation. For those in the privacy world, this has been the dominant issue ever since it was approved two years ago, but even so, it was striking to hear “the GDPR” tossed around as a running topic of congressional questions for Mark Zuckerberg. The imminence of this law, its application to Facebook and many other American multinational companies, and its contrast with U.S. law made GDPR a hot topic. It has many people wondering why the U.S. does not have a similar law, and some saying the U.S. should follow the EU model.

I dealt with the EU law from the time it was in draft form, while I led U.S. government engagement with the EU on privacy issues alongside developing our own proposal. Its interaction with U.S. law and commerce has been part of my life as an official, a writer and speaker on privacy issues, and a lawyer ever since. There’s a lot of good in it, but it is not the right model for America.

What is good about the EU law? First of all, it is a law—one set of rules that applies to all personal data across the EU. Its focus on individual data rights in theory puts human beings at the center of privacy practices, and the process of complying with its detailed requirements has forced companies to take a close look at what data they are collecting, what they use it for, and how they keep it and share it—which has proved to be no small task. Although the EU regulation is rigid in numerous respects, it can be more subtle than is apparent at first glance. Most notably, its requirement that consent be explicit and freely given is often presented in summary reports as prohibiting collecting any personal data without consent; in fact, the regulation allows other grounds for collecting data and one effect of the strict definition of consent is to put more emphasis on these other grounds. How some of these subtleties play out will depend on how 40 different regulators across the EU apply the law, though. European advocacy groups were already pursuing claims against “les GAFAM” (Google, Amazon, Facebook, Apple, Microsoft) as the regulation went into effect.

The EU law has its origins in the same fair information practice principles as the Consumer Privacy Bill of Rights. But the EU law takes a much more prescriptive and process-oriented approach, spelling out how companies must manage privacy and keep records and including a “right to be forgotten” and other requirements hard to square with our First Amendment. Perhaps more significantly, it may not prove adaptable to artificial intelligence and new technologies like autonomous vehicles that need to aggregate masses of data for machine learning and smart infrastructure. Strict limits on the purposes of data use and retention may inhibit analytical leaps and beneficial new uses of information. A rule requiring human explanation of significant algorithmic decisions will shed light on algorithms and help prevent unfair discrimination but also may curb development of artificial intelligence. These provisions reflect a distrust of technology that is not universal in Europe but is a strong undercurrent of its political culture.

We need an American answer—a more common law approach adaptable to changes in technology—to enable data-driven knowledge and innovation while laying out guardrails to protect privacy. The Consumer Privacy Bill of Rights offers a blueprint for such an approach.

Sure, it needs work, but that’s what the give-and-take of legislating is about. Its language on transparency came out sounding too much like notice-and-consent, for example. Its proposal for fleshing out the application of the bill of rights had a mixed record of consensus results in trial efforts led by the Commerce Department.

It also got some important things right. In particular, the “respect for context” principle is an important conceptual leap. It says that people “have a right to expect that companies will collect, use, and disclose personal data in ways that are consistent with the context in which consumers provide the data.” This breaks from the formalities of privacy notices, consent boxes, and structured data and focuses instead on respect for the individual. Its emphasis on the interactions between an individual and a company and the circumstances of the data collection and use derives from the insight of information technology thinker Helen Nissenbaum. To assess privacy interests, “it is crucial to know the context—who is gathering the information, who is analyzing it, who is disseminating and to whom, the nature of the information, the relationships among the various parties, and even larger institutional and social circumstances.”

Context is complicated—our draft legislation listed 11 different non-exclusive factors to assess context. But that is in practice the way we share information and form expectations about how that information will be handled and about our trust in the handler. We bare our souls and our bodies to complete strangers to get medical care, with the understanding that this information will be handled with great care and shared with strangers only to the extent needed to provide care. We share location information with ride-sharing and navigation apps with the understanding that it enables them to function, but Waze ran into resistance when that functionality required a location setting of “always on.” Danny Weitzner, co-architect of the Privacy Bill of Rights, recently discussed how the respect for context principle “would have prohibited [Cambridge Analytica] from unilaterally repurposing research data for political purposes” because it establishes a right “not to be surprised by how one’s personal data is used.” The Supreme Court’s Carpenter decision opens up expectations of privacy in information held by third parties to variations based on the context.

The Consumer Privacy Bill of Rights does not provide any detailed prescription as to how the context principle and other principles should apply in particular circumstances. Instead, the proposal left such application to case-by-case adjudication by the FTC and development of best practices, standards, and codes of conduct by organizations outside of government, with incentives to vet these with the FTC or to use internal review boards similar to those used for human subject research in academic and medical settings. This approach was based on the belief that the pace of technological change and the enormous variety of circumstances involved need more adaptive decisionmaking than current approaches to legislation and government regulations allow. It may be that baseline legislation will need more robust mandates for standards than the Consumer Privacy Bill of Rights contemplated, but any such mandates should be consistent with the deeply embedded preference for voluntary, collaboratively developed, and consensus-based standards that has been a hallmark of U.S. standards development.

In hindsight, the proposal could use a lodestar to guide the application of its principles—a simple golden rule for privacy: that companies should put the interests of the people whom the data is about ahead of their own. In some measure, such a general rule would bring privacy protection back to first principles: some of the sources of law that Louis Brandeis and Samuel Warren referred to in their famous law review article were cases in which the receipt of confidential information or trade secrets led to judicial imposition of a trust or duty of confidentiality. Acting as a trustee carries the obligation to act in the interests of the beneficiaries and to avoid self-dealing.

A Golden Rule of Privacy that incorporates a similar obligation for one entrusted with personal information draws on several similar strands of the privacy debate. Privacy policies often express companies’ intention to be “good stewards of data;” the good steward also is supposed to act in the interests of the principal and avoid self-dealing. A more contemporary law review parallel is Yale law professor Jack Balkin’s concept of “information fiduciaries,” which got some attention during the Zuckerberg hearing when Senator Brian Schatz (D-HI) asked Zuckerberg to comment on it. The Golden Rule of Privacy would import the essential duty without importing fiduciary law wholesale. It also resonates with principles of “respect for the individual,” “beneficence,” and “justice” in ethical standards for human subject research that influence emerging ethical frameworks for privacy and data use. Another thread came in Justice Gorsuch’s Carpenter dissent defending property law as a basis for privacy interests: he suggested that entrusting someone with digital information may be a modern equivalent of a “bailment” under classic property law, which imposes duties on the bailee. And it bears some resemblance to the GDPR concept of “legitimate interest,” which permits the processing of personal data based on a legitimate interest of the processor, provided that this interest is not outweighed by the rights and interests of the subject of the data.

The fundamental need for baseline privacy legislation in America is to ensure that individuals can trust that data about them will be used, stored, and shared in ways that are consistent with their interests and the circumstances in which it was collected. This should hold regardless of how the data is collected, who receives it, or the uses it is put to. If it is personal data, it should have enduring protection.

Such trust is an essential building block of a sustainable digital world. It is what enables the sharing of data for socially or economically beneficial uses without putting human beings at risk. By now, it should be clear that trust is betrayed too often, whether by intentional actors like Cambridge Analytica or Russian “Fancy Bears,” or by bros in cubes inculcated with an imperative to “deploy or die.”

Trust needs a stronger foundation that provides people with consistent assurance that data about them will be handled fairly and consistently with their interests. Baseline principles would provide a guide to all businesses and guard against overreach, outliers, and outlaws. They would also tell the world that American companies are bound by a widely-accepted set of privacy principles and build a foundation for privacy and security practices that evolve with technology.

Resigned but discontented consumers are saying to each other, “I think we’re playing a losing game.” If the rules don’t change, they may quit playing.

The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Learning Mind

3 Ways the Invasion of Privacy Takes Place Today through Current Technology

  • Post author: Valerie Soleil, B.A., LL.B.
  • Post published: November 12, 2017

You may not be aware of it, but the invasion of privacy takes place every day in the modern world, and current technologies have made it possible.

Imagine there were people watching and listening to you every moment of every day: from when you wake up in the morning, during your commute to work, through the hours from 9 to 5, and even at night as you enjoy a few drinks with friends. Such a total invasion of privacy sounds paranoid, right? Well, maybe not.

First of all, consider that the smartphone in your pocket is equipped with a GPS tracker, a camera, and a microphone. Given the current unstable security climate of the internet, it is at least plausible that someone could hack into your device. As a result, they could use it not only to track your whereabouts but also to peer into the immediate environment around you.

Luckily, this type of Orwellian situation isn’t the everyday norm. But it does speak volumes about the extent to which the invasion of privacy is taking place thanks, in large part, to the technologies of our time.

Surveillance devices: stranger than fiction

Using 1984, the seminal work of George Orwell, as a standard, how exactly does the technology of our current era compare to the mass surveillance of the nightmarish dystopia imagined in the novel?

In the book, the government watches its citizens using “telescreens” which are installed in every apartment. People are also encouraged through fear and oppression to report any suspected wrongdoing to party officials.

Thankfully, this isn’t the case in most societies today, but it’s interesting to note that surveillance devices are abundant in a large number of households. Two key differences are worth noting. In the novel, the surveillance tools are forced upon ordinary people by the government.

Today, the devices that erode privacy report back not to governments but to corporations, and they are installed in homes willingly, with no law requiring them.

Spies in our homes

A good example of our willingness to invite corporations into our homes is the rollout of the Amazon Echo and the Google Home. These devices, while built for the innocuous and utilitarian purpose of being home assistants, are ears that can record, store, and report sound heard in their environment, including conversations.

What would be the purpose of such surveillance? Since companies such as Amazon and Google make a large part of their revenue from advertising, it stands to reason that data gathered in this way (and in many other legitimate ways) could be used to serve users tailored advertising based on their behavioral preferences.

If that sounds a little too farfetched, consider this: a recent bug in a particular Google Home Mini caused the device to record audio in its environment continuously, without its owner’s consent.

Technology that knows you

If it’s not audio, there is also video and other optics to consider – after all, every laptop and mobile device is fitted with a camera.

A scenario in which this could be used to ill effect was played out to egregious proportions in a recent episode of the dystopian sci-fi series Black Mirror, in which a teenager is blackmailed with illegally obtained video footage into carrying out a series of crimes. It blows the lid off our technological anxieties about being hacked, and the seeming inevitability of it.

Closer to reality, facial recognition apps mean that your identity can be matched against online social media profiles and determined without your willingness or consent. From there, it’s a small leap to imagine having all of your details and data on display whenever a camera catches a glimpse of your face out in public.

The invasion of privacy: what awaits us in the future?

With the ever-increasing profusion of technology, the future of privacy is looking uncertain at best. The internet has thrown a broad net of connection over the entire planet, but as people grow closer, our privacy recedes.

The most telling evidence of this was perhaps the global surveillance scandal uncovered by Edward Snowden in 2013. A traitor to some and a martyr to others, Snowden showed the extent to which technology can be abused to track everyday citizens. He emphasized how it can be done under the guise of seemingly well-intentioned ends such as heightened national security and liberty.

This paradox – ensuring that a country’s population remains free and safe by means of increased surveillance and invasion of privacy – leaves a slew of questions in its wake. And as the world’s appetite for and obsession with technology grow at a startling rate, it raises the question: will privacy hold a place in our future?

Technology, Privacy, and the Person Essay

Introduction

The advancement of information technology is one of the greatest achievements in human history. It has significantly shaped the manner in which people communicate and share information around the world. At the click of a mouse, it is possible to access and share information with friends, workmates, employees, and the general public.

Nevertheless, the internet has redefined information technology and taken it to another level (Castells 45). These developments have led to the emergence of privacy and security issues, especially in the 21st century, with many people arguing that massive use of the internet puts users at risk of having their private information accessed and shared without their consent.

It is increasingly clear that the use of the internet is a threat to personal privacy and has to be addressed to mitigate cyber-related crimes. This essay explores the ways in which technology affects human privacy and how internet users and victims respond to these challenges.

The internet is a global network that connects computers, computer programs, and enormous amounts of information. This connectivity permits the sharing and transmission of information regardless of one’s location on the global map. As a result, the internet is regarded as a resource center for a wide range of people (Castells 45).

It has become a space where people deposit and share private information, sometimes becoming victims of identity theft and other privacy-related offenses. It is, however, important to underline that the internet is largely unregulated on an international scale; thus, national internet laws have limited authority over internet activities in other countries (Levmore and Nussbaum 22).

Although the internet was initially used for communication within organizations, institutions, and businesses, it has found massive application among individuals around the world. Besides computers, internet usage has been facilitated by the emergence of internet-enabled mobile phones (Castells 45).

People around the world are able to search the internet, post personal information, and view other people’s profiles. Additionally, the beginning of the 21st century has witnessed a rise in the use of social networks such as Facebook, Twitter, and MySpace. Most of these sites allow users to create personal accounts under which they store personal information. Even though the accounts are usually managed by individual users, the privacy of such information has always remained a major point of concern among internet experts.

By posting personal information online, many people become vulnerable to cybercriminals. While many believe that their social network accounts are managed solely by themselves, it is possible for others to access private information, including physical addresses and other contact details (Dixon and Gellman 1).

This makes it possible for them to be stalked online and puts their private lives at risk. Unless internet users are vigilant and customize their settings to prevent strangers from viewing their profiles, social networks allow friends and other users to access their private data.

Who doesn’t need privacy? In fact, privacy is so crucial that most legal systems around the world recognize the need for a secure and private life. This applies equally to information technology and its ability to facilitate the flow of information. Privacy allows internet users to have a personal space that is free from external interference (Kendrick 20). It encompasses a wide range of dimensions of personal information that can reveal an individual’s identity.

These include, but are not limited to, the privacy of personal data, physical privacy, behavioral privacy, and the privacy of communications (Dixon and Gellman 1). Every person has a claim to control their personal information in order to limit public access to it. With current technological advancements, there is extensive online data surveillance that threatens the privacy of online data.

Threats to private data are common, especially when the internet is used for communication. For example, data transmitted online is subject to a wide range of risks, including delivery to an unintended person or organization, access by unintended parties, alteration of content during transmission, and denial by the recipient of having received the information (Kendrick 21).

Another transmission threat is the existence of transaction trails, which record the internet activities an individual performs. These may include records of sent and received emails, visited sites, and transactions performed using other internet gadgets and services. Exploitation of these trails can reveal important private information that could be used against a person criminally (Dixon and Gellman 9).

In addition, threats to personal identity are a major challenge for internet users, as they are amplified by technological advancements (Finklea 1). Such threats revolve around available personal information that reveals contact details. An example of this risk, common in the United States, is identity theft, in which people masquerade under other identities to perform transactions and countless other malpractices (Dixon and Gellman 10). Unsolicited communication is also common when using the internet to transmit information.

Internet users are likely to receive bulk mail from unknown people when using the web for communication (Smith and Kutais 4). A common implication of such mail is that it wastes the recipient’s time and may turn out to be offensive, depending on the content. Correspondingly, emails sent across the internet may identify the sender to the recipient without the former’s knowledge. This creates an identity-related threat that could be used against the sender.

Counter Measures

It is clear that the internet exposes users to enormous threats, most of which are identity-related. The question we need to ask is how these threats can be avoided. Notably, the security of personal information is essential and has to be emphasized at all levels, both personal and corporate. Among the ways IT experts respond to personal data threats is the encryption of sensitive records. Encryption protects information being transmitted by encoding it so that only the intended recipient, who holds the key or password, can decrypt it (Kendrick 162).
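As a rough illustration of this idea (a sketch for this essay, not drawn from the cited sources), the following Python snippet uses the third-party cryptography package, which would need to be installed separately, to encrypt a message with a shared key so that only a recipient holding that key can read it:

```python
# Minimal sketch of symmetric encryption: the shared key acts as the
# "code or password"; anyone intercepting the ciphertext cannot read it.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # shared secretly with the intended recipient
sender = Fernet(key)

ciphertext = sender.encrypt(b"account number: 1234-5678")  # what travels online
plaintext = Fernet(key).decrypt(ciphertext)                # recipient recovers it

assert plaintext == b"account number: 1234-5678"
```

Real systems typically combine such symmetric ciphers with public-key exchange (as TLS does for web traffic), but the principle is the same: without the key, intercepted data is unreadable.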

Besides encrypting private information, web users are discouraged from posting personal information online, to avoid being preyed on by cybercriminals (Smith and Kutais 4). In cases where such information is posted, it is important to customize access settings in order to limit public access to it.

The use of firewalls and reputable antivirus software is another way in which most people respond to secure their privacy. Firewalls can detect and block unwanted network connections, even if they do not reveal what specific information is being targeted. Lastly, stringent legislation should be formulated and implemented to guard against cybercrime across the globe (Levmore and Nussbaum 22).

The internet is undoubtedly the leading means of communication realized through technological advancement in the world. Having peaked in the 21st century, it has become a basic communication toolkit for institutions and individuals alike. While these momentous achievements engulf the world of communication, internet privacy has emerged as a costly threat to all internet users. These threats revolve around personal information, and every effort has to be made to protect the privacy of internet users.

Works Cited

Castells, Manuel. The Rise of the Network Society. Hoboken, New Jersey: John Wiley and Sons, 2009. Print.

Dixon, Pam, and Robert Gellman. Online Privacy: A Reference Handbook. Santa Barbara, California: ABC-CLIO, 2011. Print.

Finklea, Kristin. Identity Theft: Trends and Issues. Congressional Research Service. Philadelphia: DIANE Publishing, 2009. Print.

Kendrick, Rupert. Cyber Risks for Business Professionals. NYC: IT Governance Ltd, 2010. Print.

Levmore, Saul, and Martha Nussbaum. The Offensive Internet: Privacy, Speech, and Reputation. Cambridge, Massachusetts: Harvard University Press, 2011. Print.

Smith, Marcia S., and B. G. Kutais. Spam and Internet Privacy. Hauppauge, New York: Nova Publishers, 2007. Print.

IvyPanda. (2018, October 12). Technology, Privacy, and the Person. https://ivypanda.com/essays/technology-privacy-and-the-person/

"Technology, Privacy, and the Person." IvyPanda , 12 Oct. 2018, ivypanda.com/essays/technology-privacy-and-the-person/.

IvyPanda . (2018) 'Technology, Privacy, and the Person'. 12 October.

IvyPanda . 2018. "Technology, Privacy, and the Person." October 12, 2018. https://ivypanda.com/essays/technology-privacy-and-the-person/.

1. IvyPanda . "Technology, Privacy, and the Person." October 12, 2018. https://ivypanda.com/essays/technology-privacy-and-the-person/.

Bibliography

IvyPanda . "Technology, Privacy, and the Person." October 12, 2018. https://ivypanda.com/essays/technology-privacy-and-the-person/.

The Battle for Digital Privacy Is Reshaping the Internet

As Apple and Google enact privacy changes, businesses are grappling with the fallout, Madison Avenue is fighting back and Facebook has cried foul.

By Brian X. Chen

SAN FRANCISCO — Apple introduced a pop-up window for iPhones in April that asks people for their permission to be tracked by different apps.

Google recently outlined plans to disable a tracking technology in its Chrome web browser.

And Facebook said last month that hundreds of its engineers were working on a new method of showing ads without relying on people’s personal data.

The developments may seem like technical tinkering, but they were connected to something bigger: an intensifying battle over the future of the internet. The struggle has entangled tech titans, upended Madison Avenue and disrupted small businesses. And it heralds a profound shift in how people’s personal information may be used online, with sweeping implications for the ways that businesses make money digitally.

At the center of the tussle is what has been the internet’s lifeblood: advertising.

More than 20 years ago, the internet drove an upheaval in the advertising industry. It eviscerated newspapers and magazines that had relied on selling classified and print ads, and threatened to dethrone television advertising as the prime way for marketers to reach large audiences.

Instead, brands splashed their ads across websites, with their promotions often tailored to people’s specific interests. Those digital ads powered the growth of Facebook, Google and Twitter, which offered their search and social networking services to people without charge. But in exchange, people were tracked from site to site by technologies such as “cookies,” and their personal data was used to target them with relevant marketing.
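Here is a minimal sketch (hypothetical code, not any ad network’s actual implementation) of how such cross-site tracking works in principle: an ad server whose invisible “pixel” is embedded on many sites assigns a browser a random ID the first time it sees it, and reads the same ID back on every later request, whichever site the request comes from.

```python
# Hypothetical third-party tracking server: one cookie ID ties together
# visits to unrelated sites that all embed the same tracking pixel.
from http.cookies import SimpleCookie
from uuid import uuid4

def handle_pixel_request(cookie_header: str, referring_site: str):
    """Simulate an ad server receiving a request for its tracking pixel."""
    jar = SimpleCookie(cookie_header)
    if "uid" in jar:
        uid = jar["uid"].value           # returning browser: reuse its ID
    else:
        uid = uuid4().hex                # first visit: assign a random ID
    print(f"log: browser {uid} seen on {referring_site}")
    set_cookie = f"uid={uid}; Domain=ads.example; Path=/"
    return uid, set_cookie

# The same browser visiting two unrelated sites is recognized as one profile.
uid1, cookie = handle_pixel_request("", "news-site.example")
uid2, _ = handle_pixel_request(cookie.split(";")[0], "shopping-site.example")
assert uid1 == uid2
```

The tracking technology Google plans to disable in Chrome is the third-party cookie used in just this way.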

Now that system, which ballooned into a $350 billion digital ad industry, is being dismantled. Driven by online privacy fears, Apple and Google have started revamping the rules around online data collection. Apple, citing the mantra of privacy, has rolled out tools that block marketers from tracking people. Google, which depends on digital ads, is trying to have it both ways by reinventing the system so it can continue aiming ads at people without exploiting access to their personal data.
