Algorithmic Accountability Act targets bias in AI decision-making

SUMMARY:

U.S. lawmakers have introduced a bill that would require companies to monitor and fix computer algorithms that produce inaccurate, unfair, biased or discriminatory decisions. It may not pass in today’s gridlocked political environment, but it will be back.

As large organizations increasingly turn to artificial intelligence to inform the most important decisions affecting Americans’ lives, such as whether they can buy a home, get a job or a loan, or even whether they go to jail, there has been growing concern among government regulators that these algorithms too often rely on biased assumptions or data that can end up reinforcing discrimination against women and people of color.

These worries are apparently not without merit. The Department of Housing and Urban Development recently charged Facebook with violating the Fair Housing Act by allowing advertisers to discriminate based on race, religion and disability status. Last year, Reuters reported that Amazon shut down an automated recruiting tool that was biased against women.

Sen. Ron Wyden, D-Ore., Sen. Cory Booker, D-N.J., and Rep. Yvette D. Clarke, D-N.Y., last week introduced the Algorithmic Accountability Act, which would require companies to study their computer algorithms for inaccurate, unfair, biased or discriminatory decisions affecting Americans, and to take corrective action in a timely manner if such issues are identified.

In addition, the bill would require those companies to audit all processes involving sensitive data (including personally identifiable, biometric, and genetic information), not just those using machine learning, for privacy and security risks. The bill would place regulatory power in the hands of the U.S. Federal Trade Commission, the agency in charge of consumer protection and antitrust regulation.

Sen. Booker, a 2020 presidential candidate, drew on personal experience to make the case for such a law:

“50 years ago my parents encountered a practice called ‘real estate steering’ where black couples were steered away from certain neighborhoods in New Jersey. With the help of local advocates and the backing of federal legislation they prevailed. However, the discrimination that my family faced in 1969 can be significantly harder to detect in 2019: houses that you never know are for sale, job opportunities that never present themselves, and financing that you never become aware of—all due to biased algorithms.

“This bill requires companies to regularly evaluate their tools for accuracy, fairness, bias, and discrimination. It’s a key step toward ensuring more accountability from the entities using software to make decisions that can change lives.”

What the Algorithmic Accountability Act would do

The new legislation would:

  • Authorize the Federal Trade Commission (FTC) to create regulations requiring companies under its jurisdiction to conduct impact assessments of highly sensitive automated decision systems. This requirement would apply to both new and existing systems.
  • Require companies to assess their use of automated decision systems, including training data, for impacts on accuracy, fairness, bias, discrimination, privacy and security.
  • Require companies to evaluate how their information systems protect the privacy and security of consumers’ personal information.
  • Require companies to correct any issues they discover during the impact assessments.

The rules would apply to companies with annual revenue above $50 million, as well as to data brokers and businesses that hold data on more than a million consumers. The bill is endorsed by tech and civil rights groups, including Data for Black Lives, the Center on Privacy and Technology at Georgetown Law and the National Hispanic Media Coalition.
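
The bill itself does not spell out how these impact assessments should be performed; the FTC would write the detailed rules. Purely as an illustration of the kind of check an assessment might include, here is a minimal sketch that compares approval rates across demographic groups in a log of automated decisions. The data, group labels, and the four-fifths threshold mentioned in the comments are assumptions made for the example, not requirements drawn from the Act.

    from collections import defaultdict

    def selection_rates(records):
        """Per-group approval rates from a log of automated decisions.

        Each record is a dict like {"group": "A", "approved": True}.
        """
        totals = defaultdict(int)
        approved = defaultdict(int)
        for r in records:
            totals[r["group"]] += 1
            if r["approved"]:
                approved[r["group"]] += 1
        return {g: approved[g] / totals[g] for g in totals}

    def disparate_impact_ratio(rates):
        """Lowest group approval rate divided by the highest.

        A ratio well below 1.0 (for example, under the informal
        'four-fifths' rule of thumb of 0.8) flags the system for review.
        """
        return min(rates.values()) / max(rates.values())

    if __name__ == "__main__":
        # Hypothetical decision log from an automated lending model.
        decisions = [
            {"group": "A", "approved": True},
            {"group": "A", "approved": True},
            {"group": "A", "approved": False},
            {"group": "B", "approved": True},
            {"group": "B", "approved": False},
            {"group": "B", "approved": False},
        ]
        rates = selection_rates(decisions)
        print("Approval rate by group:", rates)
        print("Disparate impact ratio:", round(disparate_impact_ratio(rates), 2))

A real assessment under the bill would reach much further, covering training data, accuracy, privacy and security as well, but the underlying idea of measuring how outcomes differ across groups, and acting when they do, is the same.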

My Take

The new bill is a reflection of how far the reputation of Big Tech has plummeted with consumers and lawmakers in the wake of seemingly endless data breaches and the continuing Facebook and Google scandals.

Although the legislation stands virtually no chance of becoming law in the current environment of a divided Congress in an election year, lawmakers are signaling that the 2020 U.S. presidential campaign is likely to be the first in which reining in Big Tech becomes an important issue. And the attention isn’t likely to go away.

Regulation is coming, as more and more policymakers discover just how powerful AI really is. The UK, France, Australia, and others have all recently drafted or passed similar legislation to hold tech companies accountable for their algorithms, but the U.S., alongside China, is the world leader in AI, so it has an opportunity to shape how the technology develops.

For now, Big Tech seems to be pushing back. Daniel Castro, vice president of the Information Technology & Innovation Foundation, a Washington-based non-profit that represents Big Tech, said:

“To hold algorithms to a higher standard than human decisions implies that automated decisions are inherently less trustworthy or more dangerous than human ones, which is not the case. This would only serve to stigmatize and discourage AI use, which could reduce its beneficial social and economic impacts.”

Good luck with that argument now that the bloom is definitely off Big Tech’s rose.

Image credit - Image sourced via Pixabay
