Setting Standards: AI & Algorithmic Accountability
By Ariella Brown | Posted 2017-04-21
As artificial intelligence becomes a mainstream tool and algorithms form the basis for business decisions, companies need principles to guide their progress.
At a recent World Economic Forum meeting, Ginni Rometty, IBM's CEO, spoke about the responsibilities that accompany the introduction of powerful technologies, including determining how they are used. She shared three core principles used at IBM that she recommends to businesses that are deploying artificial intelligence (AI) systems. They are:
Purpose: Rometty said IBM's AI systems will revolve around augmenting rather than replacing human intelligence. The systems are there "in the service of humankind" and to "extend human capability."
Transparency: Rometty discussed the need to be transparent with everyone about using these systems. She said the company is committed to detailing the purposes for which it will develop and deploy AI.
Skills: She affirmed that the company would help its employees obtain the skills needed to use this technology, so there will be jobs for them.
The substance of Rometty's recommendations is reflected in the principles advanced by the nonprofit Association for Computing Machinery US Public Policy Council, though the terms differ slightly. Here are the association's "Principles for Algorithmic Transparency and Accountability":
1. Awareness: Owners, designers, builders, users and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation and use—and the potential harm those biases could cause to individuals and society.
2. Access and redress: Regulators should encourage organizations to adopt mechanisms that enable questioning and redress for individuals and groups that have been adversely affected by decisions based on algorithms.
3. Accountability: Organizations should be held responsible for decisions that are made based on the algorithms they use, even if it's not possible to explain in detail how the algorithms produce their results.
4. Explanation: Institutions that use algorithmic decision-making should provide explanations about the procedures followed by the algorithm, as well as the specific decisions made. This is particularly important in public policy contexts.
5. Data Provenance: A description of the way the training data was collected should be maintained by the algorithm builders. In addition, it should be accompanied by an exploration of the potential biases induced by the human or algorithmic data-gathering process.
Public scrutiny of the data will provide maximum opportunities to make corrections. However, concerns over privacy, protecting trade secrets, or revelation of analytics that might allow malicious actors to game the system can justify restricting access to qualified, authorized individuals.
6. Auditability: Models, algorithms, data and decisions should be recorded so they can be audited in cases where harm is suspected.
7. Validation and Testing: Organizations should use rigorous methods to validate their models and document those methods and results. They should routinely perform tests to assess and determine whether the model generates discriminatory harm. Institutions are encouraged to make the results of such tests public.
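Principle 5's data-provenance record can be sketched as a simple structured document kept alongside the model. The ACM statement does not define a schema; the fields below are illustrative assumptions only:

```python
from dataclasses import dataclass, field, asdict
from typing import List
import json

@dataclass
class DataProvenanceRecord:
    """Hypothetical record of how a training dataset was collected,
    in the spirit of ACM Principle 5 (fields are assumptions)."""
    dataset_name: str
    collection_method: str      # e.g., intake forms, web scrape, purchased
    collection_period: str
    known_biases: List[str] = field(default_factory=list)
    access_policy: str = "qualified, authorized individuals only"

# Example entry for a hypothetical lending dataset
record = DataProvenanceRecord(
    dataset_name="loan_applications_2016",
    collection_method="branch intake forms, digitized",
    collection_period="2016-01 to 2016-12",
    known_biases=["underrepresents applicants from rural branches"],
)
print(json.dumps(asdict(record), indent=2))
```

Keeping the record as structured data rather than free text makes it straightforward to surface the known-biases field during the audits principle 6 calls for.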
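Principle 7's call for routine discrimination testing can be made concrete with a standard fairness metric. The ACM statement prescribes no particular method; as one hypothetical illustration, the widely used "four-fifths rule" compares positive-decision rates between a protected group and a reference group:

```python
from collections import Counter

def selection_rates(decisions):
    """Per-group rate of positive decisions.

    decisions: list of (group, approved) pairs, approved being a bool.
    """
    totals = Counter(group for group, _ in decisions)
    approvals = Counter(group for group, ok in decisions if ok)
    return {group: approvals[group] / totals[group] for group in totals}

def disparate_impact_ratio(decisions, protected, reference):
    """Ratio of the protected group's selection rate to the reference
    group's. Values below 0.8 fail the common four-fifths rule."""
    rates = selection_rates(decisions)
    return rates[protected] / rates[reference]

# Hypothetical audit log: (group, loan approved?)
audit_log = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

ratio = disparate_impact_ratio(audit_log, protected="B", reference="A")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.25 / 0.75 = 0.33, flags for review
```

A test like this only detects one kind of disparity; rigorous validation would combine several metrics and, per the ACM principles, document and publish the methods and results.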
Stuart Shapiro, chair of the Association for Computing Machinery's U.S. Public Policy Council, explained that the policy statement grew out of extensive discussions on the question of algorithmic accountability, including invitation-only workshops with participants from academia, industry and nongovernmental organizations (NGOs) that laid the groundwork.
Now that the policy statement is set, Shapiro said that the association "hopes to bring it to the attention of any organization—whether government or business—that operates these kinds of systems," as guidance in "developing policies to govern these types of systems."
The plan is to get the "policy statement to people and organizations where we think they might prove informative and useful." That could include Congressional staffers or "NGOs of various types or industry groups."
The hope is that informed partnerships between nonprofit organizations and businesses will keep the development of AI and other advanced technologies on track in the service of humanity.