April 10, 2018

Algorithmic Impact Assessments

There’s a new report from New York University’s AI Now Institute, giving recommendations for algorithmic impact assessments (PDF). Worth reading for anyone who is or should be interested in criteria for automated decision systems. As the researchers say:

AIAs will not solve all of the problems that automated decision systems might raise, but they do provide an important mechanism to inform the public and to engage policymakers and researchers in productive conversation. With this in mind, AIAs are designed to achieve four key policy goals:

  1. Respect the public’s right to know which systems impact their lives by publicly listing and describing automated decision systems that significantly affect individuals and communities;
  2. Increase public agencies’ internal expertise and capacity to evaluate the systems they build or procure, so they can anticipate issues that might raise concerns, such as disparate impacts or due process violations;
  3. Ensure greater accountability of automated decision systems by providing a meaningful and ongoing opportunity for external researchers to review, audit, and assess these systems using methods that allow them to identify and detect problems; and
  4. Ensure that the public has a meaningful opportunity to respond to and, if necessary, dispute the use of a given system or an agency’s approach to algorithmic accountability.

(via Harkanwal Singh)


Thomas Lumley (@tslumley) is Professor of Biostatistics at the University of Auckland. His research interests include semiparametric models, survey sampling, statistical computing, foundations of statistics, and whatever methodological problems his medical collaborators come up with. He also blogs at Biased and Inefficient.