Data Privacy Legislation: Lessons Learned from Five Proposals

Maria Carnovale, PhD
April 7, 2020

Consumers are concerned about the online privacy of their personal data. They routinely navigate a sea of “terms and conditions” and “consents to use,” yet according to the Harvard Business Review, most consumers are largely unaware of the data they give up to the many service providers they interact with. With each new data breach, most consumers conclude that companies should be held more accountable for their data practices, and they rely on government oversight to fill that role.


Privacy legislation has received much attention in the 116th Congress (2019-2020). Multiple bills have been proposed to update the current legal framework, which dates back to the late 1990s and early 2000s, a time when social media platforms were still on the rise. Five of these bills are still being discussed in Congress; even if none makes it to the finish line, they offer a few lessons for the current debate on privacy regulation:

  • Overall, there is political consensus on expanding the set of protected information, but the level of protection granted to behavioral and de-identified data remains a source of dissent. Balancing privacy and economic welfare will guide the discussion on how those two data classes are treated.
  • Rapid technological development is a challenge for policymaking: as new uses of technology proliferate, policymakers are constantly playing catch-up. To escape this cycle, some bills are changing the approach: instead of regulating inputs, e.g., what data can or cannot be used, they are focusing on outputs, e.g., what uses or effects are acceptable. This shift has the potential to make regulation resilient to technological change.
  • In response to criticism of the Federal Trade Commission (FTC)’s ineffectiveness in discouraging misuse of private information, some bills aim to sharpen the FTC’s teeth. Others propose to create an entirely new agency in charge of data privacy. This difference reflects disagreement over the appropriate regulatory approach: refining the current “notice-and-consent” model focused on consumer choice versus directly limiting business practices around personal data use. Political pressure from companies and civil society will determine which approach prevails.
  • Data breaches have become a common occurrence. Yet beyond requiring damages in cases of data malpractice, no proposed measure helps the victims of breaches address or recover from the spill of their personal information. Such measures should be developed by shifting the conversation away from an exclusive focus on firms’ accountability and toward treating breaches as a widespread economic shock.

Inputs versus outputs: information that is personal.

Earlier definitions of personal information covered by data privacy laws tended to include only a limited set of information. Identifiers such as names, Social Security numbers, and addresses belong to that category, as they can be used to easily identify or impersonate an individual. Biometric records such as fingerprints are also considered private information, as is information on education, financial transactions, and medical and criminal history.

Race, gender, religious beliefs, sexual orientation, and genetic information fall into the class of especially sensitive data because of their potential to create conscious or unconscious bias and discrimination. These five bills concur that users’ consent should be required before such data is collected or used and that extra auditing and security measures are necessary to avoid breaches.

Most of these proposals, however, cast a wider net. Eshoo’s is the most comprehensive. Like Moran’s and Wyden’s, this proposal defines as private any information that is “reasonably linkable to a specific consumer or device,” unless explicitly excluded, making protection of digital information the default. But Eshoo also adds “the means to behavioral personalization”: any collection and data analysis tool used to predict or influence individual behavior and to personalize content.

This behavioral information is extensively used in online marketing and describes almost every user-platform interaction when, for instance, shopping on Amazon, watching a movie on Netflix, or scrolling through YouTube videos. Under this proposal, these business practices would require express user consent.

Behavioral targeting is under widespread scrutiny for manipulating users and impeding their freedom of choice while creating “bubbles” that degrade social cohesion. To a smaller degree, this concern is likewise addressed in other proposals. DelBene’s expands the scope of protection to browsing and search histories and any other interaction with a website or an app, as does Gillibrand’s, which also includes commercial information such as products or services purchased or considered for purchase.

Yet, by explicitly naming the types of information covered, Gillibrand’s and DelBene’s legislation will need to catch up with each new technological innovation. Eshoo’s proposal, which instead specifies the use of that data—its output—rather than the data itself, would cast a net that morphs with technological development, creating a regulatory framework resilient to the pace of innovation. For instance, DelBene explicitly considers the content of wire, oral, and electronic conversations as personal information, something that would fall under Moran’s, Wyden’s, and Eshoo’s “reasonably linkable to a specific consumer or device” definition of covered information.

Wyden’s and Gillibrand’s proposals also grant coverage to inferred information. With the booming growth of social media and other online platforms, data have become cheaper to gather. Powerful data analysis tools are also widely available. It is now possible to infer sensitive aspects of people’s lives, such as gender, sexuality, age, race, and political affiliation, from seemingly innocuous and public pieces of information, like Facebook “likes” patterns.


Eshoo’s, on the other hand, explicitly excludes inferences, but it is the only proposal that includes de-identified personal information. Such an inclusion reflects a concern that most de-identification methods leave data easily re-identifiable. In the era of powerful predictive statistics, any information is personal information.

Consumer choice and the FTC’s role.

The FTC currently enforces and investigates most privacy legislation. Yet the agency has been amply criticized for its alleged ineffectiveness in protecting individual data privacy and in discouraging misuse of private information. For instance, the FTC heavily fined YouTube for its violations of the Children’s Online Privacy Protection Act; in response to this FTC action, however, YouTube shifted its new compliance requirements, and therefore responsibilities, onto content creators. Similarly, Facebook’s FTC-approved settlement over data abuses in the 2016 Cambridge Analytica scandal drew severe public scrutiny. Even Rohit Chopra, an FTC Commissioner, was disappointed with the outcome. “The settlement’s $5 billion penalty makes for a good headline, but the terms and conditions, including blanket immunity for Facebook executives and no real restraints on Facebook’s business model, do not fix the core problems that led to these violations,” he wrote in a statement.

In response, most proposals strengthen the FTC’s oversight powers. Moran’s is the most moderate. It requires user consent for the collection and use of user data, with an exception for data used under “permissible purposes.” These purposes include, for example, compliance with the law and the delivery of firms’ primary products, but also any kind of research or firm operations, including marketing and advertising. Interestingly, Moran lists access to, correction of, and deletion of one’s own personal information as rights retained by the individual. However, firms can deny follow-up action if a request is “unfounded or frivolous,” among other reasons, placing significant conditions on the exercise of consumers’ rights.

DelBene’s strengthens requirements for firms’ annual privacy audits and for transparency over data management policies. Users would also be entitled to opt in or out of data collection, storage, and usage (including sale) at any time and to be properly informed on how to do so. To decrease the burden of opting out, Wyden’s proposal sets up a centralized “Do Not Track” website, maintained by the FTC, as a one-stop online platform for consumers to opt out of data sharing by any private business.

A new agency to regulate business practices on data privacy.

Gillibrand also subjects data collection and use to consumer consent and includes prohibitions on “pay-for-privacy provisions” and “take-it-or-leave-it” terms of service. However, she suggests that a novel federal agency take on the task of regulating, monitoring, and enforcing firms’ data practices, a clear sign of the prominence of data in the current social and business environment.

Eshoo’s data protection regime would also be headed by a new federal agency and would still be based on explicit consent for data collection, storage, and behavioral personalization. Yet the proposal severely limits companies’ data collection practices. Independently of consent, companies would have to articulate a reasonable need for collecting private information and could collect only data necessary for their primary business. The proposal also requires disposal of information that is no longer needed.


The consumer choice regulatory model, based on user “notice-and-consent,” has been amply criticized for creating ubiquitous and burdensome terms of service that consumers blindly accept to access the desired service. Assessing firms’ compliance with the accepted terms of service is a hard task for the average consumer, who is ultimately left disempowered and vulnerable.

As such, a willingness to take data privacy out of consumers’ hands and into government policies that directly limit business data practices is slowly seeping into policymaking. Eshoo’s proposal clearly goes in that direction, outlining a series of individual rights with respect to data. New consumer privileges appear alongside the traditional rights to access, correct, and delete personal information. Consumers could avoid manipulation of their individual decisions through the right to opt out of behavioral personalization. They would also retain the right to human review of automated decisions, preventing automated systems from reinforcing existing inequalities in access to services and opportunities.

By characterizing data protection as an individual right, as opposed to a choice, Eshoo’s bill shifts the burden onto firms. However, reframing privacy as an individual right also means it cannot be ceded in exchange for free services or other economic benefits. This approach risks angering both service providers, whose data use would be severely limited, and consumers who have grown accustomed to ubiquitous access to free or low-cost online services.

What is missing: social impacts of data breaches.

In the last few years we have witnessed large and repeated data breaches: Equifax in 2017, Google in 2018, and Facebook in 2019 are only the tip of the iceberg of a less visible but nevertheless substantial list. Each of these incidents calls into question societal trust in the many private firms to which consumers entrust their private information.

Most of the bills discussed here address the issue. They attempt to decrease the probability of breaches by imposing management practices that increase the safety of data collection, sharing, and storage. For instance, DelBene’s requires third-party data security certification, while Wyden’s and Gillibrand’s scale up penalties for faulty security measures and audits. Eshoo’s and Moran’s require firms to delete personal information no longer in use, thus limiting the spill of information in the event of a breach.

Yet the policy debate has focused on trying to prevent breaches while avoiding the conversation on how to mitigate their ramifications. What is missing is a focus on the victims of such inevitable breaches rather than on the perpetrators. Consumers can collect damages from firms that have infringed on data security requirements, but even the best data practices will never be completely safe.

None of these bills, nor the public debate, has suggested measures to minimize the spread of lost data or to recover from the economic and psychological impact of consumer health records, financial information, or private conversations floating around somewhere on the internet. The solution will require shifting the privacy debate away from firms’ accountability—which is nonetheless still an important piece of the puzzle—and towards thinking of data breaches as a widespread economic shock to collectively insure against.


This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. Please distribute widely but give credit to Duke SciPol.org and the original author, linking back to this page if possible.