Can Algorithmic Auditing Prevent Bias in AI?

Johannes Leon Kirnberger
6 min read · May 18, 2021

An assessment of an emerging regulatory practice.

The digital revolution started out with the great promise of democratisation of knowledge, freedom of expression and communication, and neutrality. Algorithms were hailed as unbiased and efficient means to increase productivity and make decisions more transparent. Technology leaders predicted that “soon everyone on Earth will be connected” and that the digital revolution would “improve a wide range of inefficient markets, systems, and behaviours, in both the most and least advanced societies”. Most importantly to the technologists, this revolution was happening in a largely ungoverned space.

In recent years, it has become increasingly accepted that machine learning algorithms show bias and discriminatory behaviour due to both technical and human causes. On the technical side, bias in the training data of an algorithm as well as bias in the algorithmic design itself can produce unfair and biased outcomes. On the human side, bias can be introduced at various points along the algorithmic supply chain, as humans often display a variety of cognitive and behavioural biases.
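To make the technical side concrete, one simple check an auditor might run on a model's decisions is a disparate impact ratio, comparing favourable-outcome rates across demographic groups. The sketch below is a minimal Python illustration of that idea; the data, the column names, and the 80% rule-of-thumb threshold are assumptions made for the example, not requirements drawn from any framework discussed here.

```python
# Minimal sketch of a disparate impact check an audit might include.
# The dataset, column names, and 0.8 threshold are illustrative assumptions.
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of favourable-outcome rates between the least- and most-favoured groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Hypothetical audit sample: one row per applicant, with the model's decision.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   1],
})

ratio = disparate_impact_ratio(decisions, "group", "approved")
print(f"Disparate impact ratio: {ratio:.2f}")
# The "four-fifths rule" from US employment practice is often borrowed as a rough flag.
if ratio < 0.8:
    print("Potential adverse impact - flag for deeper review.")
```

A single ratio like this is only a starting point: a real audit would look at several fairness metrics, confidence intervals, and the context in which the model is deployed.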

Policy makers have recognised the risks that algorithms and the digital transformation pose to society at large and have introduced legislation for data governance. The European GDPR and the California Consumer Privacy Act implement data privacy regulations in order to safeguard the private information of internet users. One of the key components in the ex-post enforcement of these regulations is the tool of algorithmic auditing.

Existing Legislation

The key idea behind algorithmic auditing is to reduce the “black box” effect of algorithms, increasing transparency for the public and building trust among companies and consumers alike. Successful algorithmic audits would ensure that an algorithm

Is transparent to end users
Is used in a socially acceptable way
Does not produce unintended consequences
Is not being used deceptively
Shows no signs of bias in its design
Reports how it arrives at its recommendation

Many industry insiders are convinced that algorithmic auditing legislation is only a matter of time. Deloitte, one of the Big Four auditing firms, asserts that “algorithmic challenges, along with data privacy issues, have created the need for the role of algorithm auditors”. As early as 2011, the US Federal Reserve required banking organisations to audit their AI models in order to prevent financial instability. In 2020, Democratic lawmakers proposed the Consumer Online Privacy Rights Act (COPRA), which would require companies to assess annually the impact of their AI algorithms, defined as “computational processes that a covered entity uses to make a decision or facilitate human decision-making”. COPRA follows the proposed Algorithmic Accountability Act of 2019, which would require companies to “audit their algorithms for bias and discrimination”.

In the European Union, the General Data Protection Regulation already restricts solely automated decision-making based on users’ personal data where it has legal or similarly significant effects, and in combination with the proposed Artificial Intelligence Act it positions the EU to enact the first binding legislation on algorithmic auditing.

Implementation

The first implementation option, heavily favoured by technology companies, is internal auditing. Researchers at Google and the Partnership on AI recently released the paper Closing the AI Accountability Gap, in which they propose internal audits that begin upstream of development and run through the entire design phase of an algorithm. This would allow companies to protect internal processes and confidential information such as source code, which is amongst their most valuable intellectual property. It would also enable them to intervene early in the development process of an algorithm.

Naturally, internal audits often lack independence and objectivity, which even Google acknowledges in its paper. This is why many policy experts favour independent, third-party external audits. The so-called Big Four auditing firms (PwC, KPMG, Deloitte, and EY) already have algorithmic risk practices in place and can be expected to enter a (European) auditing market aggressively. A few specialised consultancies for algorithmic auditing are already operating, such as BNH.ai, a boutique AI-centred law firm, and O’Neil Risk Consulting & Algorithmic Auditing (ORCAA). While technologically challenging, algorithmic auditing does not fundamentally differ from traditional auditing, which is why a European audit market with a high concentration of the Big Four (70–90%) and a small share of highly specialised or SME-focused auditing firms can be expected.

Third-party audits have themselves recently come under increasing pressure, due to the collapse of Wirecard and the associated shortcomings in supervision. In the European Parliament, the ECON committee has called for revised audit rules, the break-up of the Big Four oligopoly, and the introduction of co-auditing. The Legal Affairs Committee asked for European-level rules to hold auditing companies accountable for gross negligence. While supreme audit institutions, the independent national oversight bodies, have not yet appeared on the agenda, private auditing companies can expect both regulatory and political pressure as they enter the lucrative algorithmic auditing market.

Impact and Limitations

Algorithmic auditing has the potential to have a lasting, positive impact on society. AI algorithms are eroding fundamental human rights such as privacy, freedom of expression, and the right to equality and non-discrimination. Algorithmic auditing could draw red lines, impose strict obligations on high-risk AI systems, and secure social progress and the fundamental rights of citizens. According to the Ranking Digital Rights Index, no single digital platform “offered anything close to an adequate explanation of how its algorithms work”. Auditing those algorithms could force platform providers to increase transparency, commit to human rights, and be held accountable for bias, discrimination, and harm.

At the same time, it is important to note that algorithmic auditing faces some significant limitations. The lack of industry standards or official regulations has resulted in companies being accused of using algorithmic audits as a PR exercise. When HireVue, a hiring software company, hired ORCAA to assess its hiring algorithm for bias and fairness, it was widely criticised for misrepresenting the audit. Due to technical and time constraints, ORCAA focused on only a single use case. Nevertheless, HireVue stated that its entire software would “work as advertised with regard to fairness and bias”.

In addition to misrepresentation or overly narrow scope, algorithmic auditing remains extremely complex, even for experienced auditors. For example, the GDPR’s Right to Explanation might be almost impossible to enforce in real-life situations, as an algorithm’s complexity combined with the ever-changing nature of machine learning might make it impossible to explain the outcome of a black box. Algorithms like Google’s search engine or Facebook’s timeline recommendation algorithm often factor in many thousands of variables, and even if an algorithm’s decision could be retraced, it might prove difficult to explain it to a non-technical audience. This does not call into question the usefulness of algorithmic auditing in general, but it shows the technological realities that future auditors will face.
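One standard partial work-around, mentioned here purely as an illustration rather than as anything mandated by the rules above, is to fit a small, interpretable surrogate model to the black box's own predictions and explain the surrogate instead. The sketch below uses scikit-learn with synthetic data and placeholder models; for a real system with thousands of features and continuous retraining, the fidelity gap between surrogate and original would be far larger than in this toy case.

```python
# Illustrative sketch: explaining a black-box model via a shallow surrogate tree.
# Models and data are placeholders, not any platform's actual system.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in for an opaque production model.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Train a shallow tree to mimic the black box's outputs (not the ground truth).
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the simple explanation agrees with the real model.
fidelity = surrogate.score(X, black_box.predict(X))
print(f"Surrogate agrees with the black box on {fidelity:.0%} of inputs")
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(20)]))
```

The surrogate's rules are readable, but they are only an approximation of the black box, which is precisely the trade-off future auditors will have to communicate to regulators and the public.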

Conclusion

The negative impacts of algorithms, from unfair credit scores to discriminatory law enforcement practices, have been widely recognised and call for regulatory intervention in order to protect citizens from what data scientist Cathy O’Neil has called Weapons of Math Destruction. The European Union, building on the GDPR and the proposed Artificial Intelligence Act, is uniquely positioned to take leadership in the ex-post enforcement of data governance regulations by introducing mandatory algorithmic audits. While technology companies prefer internal audits of their algorithmic intellectual property, third-party external audits will be imperative to ensure independence and objectivity. Algorithmic auditing has the potential to achieve a profound positive impact for society, but it still faces significant technological and political challenges. Perhaps most importantly, the development and auditing of AI algorithms will need to incorporate a wide range of diverse stakeholders and experts, to prevent the introduction of bias into a system that aims to eradicate it in the first place.

Johannes Leon Kirnberger

AI & Sustainability at OECD | GPAI | The Future Society | Columbia University https://johanneskirnberger.com