Algorithmic Accountability Act of 2019 (S 1108, 116th Congress)

Policy Details

Originating Entity
Last Action
Referred to Committee on Commerce, Science, and Transportation
Date of Last Action
Apr 10 2019
Congressional Session
116th Congress
Date Introduced
Apr 10 2019
Publication Date
Apr 26 2019
Date Made Public
Apr 10 2019

SciPol Summary

In response to growing concern over algorithmic bias – the systematic discriminatory outcomes produced by software decision-making – Senators Wyden (D-OR) and Booker (D-NJ) introduced the “Algorithmic Accountability Act of 2019” in the Senate in April 2019. Representative Clarke (D-NY-9) introduced a companion bill in the House of Representatives.

If enacted, the Algorithmic Accountability Act would task the Federal Trade Commission (FTC) with regulating the use of decision-making algorithms by a select group of companies: those that make over $50 million annually, gather personal information (PI) from over one million users or devices, or serve as data brokers (i.e., collect, sell, or trade user data as a core part of their business).

Within two years of enactment, companies would need to conduct impact assessments of high-risk automated decision systems – systems that make decisions, or facilitate human decision-making, that can “alter the legal rights of consumers or otherwise significantly impact consumers.” These assessments would need to:

  • Detail the algorithm’s purpose and design, the data it uses, and if and how it is trained; 
  • Elaborate on the algorithm’s benefits and drawbacks, detailing how the algorithm limits the amount of PI collected, how long it retains PI and the resulting decisions, the ability people have to see and dispute its decisions, and who can see its decisions; and
  • Assess how the algorithm could present bias, be used in a discriminatory manner, or undermine the privacy or security of consumers’ PI and enumerate the safeguards in place to combat these issues.

Within this two-year timeframe, companies would also need to conduct regular data protection impact assessments for both their high-risk automated decision systems and their high-risk information systems – systems that use sensitive consumer PI (such as race, gender, or sexual orientation), pose a risk to the privacy or security of consumer PI, monitor a large physical space accessible to the public, or meet any other criteria set by the FTC. These data protection impact assessments would evaluate how the system keeps consumer PI private and secure. Companies would have to consult with independent external auditors, “if reasonably possible,” when conducting impact assessments.

If, as a result of these assessments, companies find issues with their algorithms, they would be required to fix the problems in a timely manner. Any violation would be considered an unfair or deceptive act and prosecuted under the Federal Trade Commission Act.

As predictive decision-making algorithms have become cheaper and more scalable to deploy in commercial software, they have come to have a larger impact on consumers’ lives. These algorithms look for correlations within historical datasets to make decisions – such as granting credit or admitting students – aiming to maximize the accuracy of their predictions across all users. However, in optimizing for overall accuracy, they can disproportionately disenfranchise subgroups of the population.
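For intuition, the minimal sketch below (hypothetical numbers and group labels, not drawn from the bill or any real dataset) illustrates how a model that looks strong on overall accuracy can nonetheless be far less accurate for a smaller subgroup – the kind of disparity an impact assessment is meant to surface.

```python
# Hypothetical illustration: overall accuracy can mask poor subgroup accuracy.
# Simulated outcomes for 1,000 applicants: 900 in a majority group, 100 in a
# minority group. Each record is (group, prediction_was_correct).
records = (
    [("majority", True)] * 855 + [("majority", False)] * 45
    + [("minority", True)] * 60 + [("minority", False)] * 40
)

def accuracy(rows):
    """Share of records where the model's prediction was correct."""
    return sum(correct for _, correct in rows) / len(rows)

overall = accuracy(records)                                      # 0.915
majority = accuracy([r for r in records if r[0] == "majority"])  # 0.95
minority = accuracy([r for r in records if r[0] == "minority"])  # 0.60

print(f"overall accuracy:  {overall:.3f}")
print(f"majority accuracy: {majority:.3f}")
print(f"minority accuracy: {minority:.3f}")
```

In this toy example, a headline accuracy of 91.5% hides a 60% accuracy for the minority group, because errors concentrated in a small subgroup barely move the overall average.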

Recent events have highlighted the pervasiveness of bias in predictive algorithms. The U.S. Department of Housing and Urban Development (HUD) sued Facebook over discriminatory housing advertising, alleging that advertisers could choose which Facebook users saw housing ads based on personal traits such as race or national origin. Researchers have likewise demonstrated the potential for algorithmic bias with respect to race and gender.

As Senator Booker explains: “The discrimination that my family faced in 1969 can be significantly harder to detect in 2019: houses that you never know are for sale, job opportunities that never present themselves, and financing that you never become aware of — all due to biased algorithms. This bill… [is] a key step toward ensuring more accountability from the entities using software to make decisions that can change lives.”

However, critics note that the proposed bill holds decision-making algorithms to a higher standard than human decision makers, that it would require impact assessments for every incremental software update, and that it targets only large firms even though small firms can cause similar harm to their users.

SciPol Summary authored by