New research out of the United Kingdom and Switzerland suggests there may be a mathematical solution for helping regulators and businesses police the biases of artificial intelligence systems.
Researchers from the University of Warwick, Imperial College London, EPFL – Lausanne, and the strategy firm Sciteb Ltd began their research partnership conscious that the use of artificial intelligence (AI) systems is likely to increase in coming years, creating unique challenges and opportunities.
Unlike humans, who can act as moral filters, AI systems use algorithms to collect, analyze, sort, and store data, which can create moral skews that disproportionately affect diverse populations, especially in industries like insurance; some of these decisions could also expose companies to financial penalties for violating regulatory standards.
Seeking to address these problems, the researchers created the "Unethical Optimization Principle," which provides a formula for estimating the impact of AI decisions.
According to Professor Robert MacKay of the Mathematics Institute of the University of Warwick, "Optimization can be expected to choose disproportionately many unethical strategies." The goal of the research, he says, is "to re-think the way AI operates in very large strategy spaces, so that unethical outcomes are explicitly rejected in the optimization/learning process."

The full details of the research are outlined in the paper, "An unethical optimization principle", published in Royal Society Open Science on July 1, 2020.
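MacKay's point can be illustrated with a toy Monte Carlo simulation. The sketch below is not the researchers' formula; it is a hypothetical setup with made-up parameters (strategy counts, return distributions, and the "edge" given to unethical strategies are all assumptions) that shows why a naive return-maximizer picks an unethical strategy far more often than the small fraction such strategies make up.

```python
import random

def simulate(n_strategies=1000, unethical_frac=0.02, unethical_edge=1.0,
             n_trials=500, seed=0):
    """Toy illustration of the 'unethical optimization principle'.

    A small fraction of strategies is labeled unethical and given a
    slightly higher expected return. A naive optimizer that simply
    picks the highest-return strategy ends up choosing an unethical
    one much more often than unethical_frac alone would suggest.
    All parameters here are illustrative, not taken from the paper.
    """
    rng = random.Random(seed)
    n_unethical = int(n_strategies * unethical_frac)
    unethical_wins = 0
    for _ in range(n_trials):
        best_return = float("-inf")
        best_is_unethical = False
        for i in range(n_strategies):
            # First n_unethical strategies get a small mean-return edge.
            mean = unethical_edge if i < n_unethical else 0.0
            ret = rng.gauss(mean, 1.0)
            if ret > best_return:
                best_return = ret
                best_is_unethical = i < n_unethical
        unethical_wins += best_is_unethical
    return unethical_wins / n_trials

if __name__ == "__main__":
    # Only 2% of strategies are unethical, yet the optimizer selects
    # one far more often than 2% of the time.
    print(simulate())
```

The disproportion grows with the size of the strategy space, which is why the researchers argue that unethical outcomes must be rejected explicitly inside the optimization process rather than filtered afterward.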
Source: Good News Network