Dharshi Harindra

The role of human intervention in fair and inclusive AI

Updated: Aug 26, 2021

For all the jobs that AI seeks to replace, a truly fair and inclusive AI ecosystem should entail recruitment of large groups of people from a diverse range of backgrounds, education levels, and disciplines.

With the exponential growth in the development and uptake of AI decision-making tools, the race is on to develop trusted frameworks, governance, and regulation to ensure these systems remain fair, transparent, and free of entrenched bias.


Fairness in AI featured in the key theme “Tech for Good” at this year’s Davos Agenda.


Within days of the Biden administration coming to power, the agenda to reverse the Trump world order began in earnest, including the appointment of diverse leaders across policy, science, and technology fields. Addressing bias in technology is high on its agenda.


The stage has been set.


In the discourse on how to achieve fair and unbiased AI, much is said about the need to ensure, as a minimum, greater diversity amongst the developers and engineers responsible for creating artificial intelligence products. It is imperative that those working on AI tools from the ground up are alive to biased datasets or criteria for algorithmic decision-making, and able to eliminate them.

But if datasets themselves ultimately reflect existing biases that could lead to any form of discrimination or unfairness to particular groups in society, how do we address this with technology?


When the very basis for developing a technology product in these contexts is to reduce or eliminate the need for humans at the other end, and when the point of AI decision-making tools is to use pure, accurate, factual datasets that already exist, could it be that we actually need more human intervention rather than less when creating AI products, to make sure they are fair and unbiased?


In a recent article, Sudhi Sinha noted, “when AI takes center stage in decision-making — especially when it comes to services or access to resources for the population at large — ensuring inclusiveness is equally important to fairness. For this to happen, the needs and the context of all sections of society should be included in the logic tree. The datasets may not always support this, but human intervention has to drive inclusiveness.” (Sinha)


Begin with the end in mind


Arguably, not only do we need to make sure that biased datasets are removed, but we should begin with the end in mind and take active steps to build inclusive algorithms. This means creators being very clear about the inclusive, fair outcomes they want to achieve, and embedding those outcomes into the algorithms at the outset.


End users need to be able to rely on the technology to provide not only accurate results, but decisions that positively assist with achieving inclusion and eliminating bias. This is particularly important when it comes to automated decision-making in areas such as recruitment and, as Sinha notes above, those which affect services or access to resources.
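What might “checking fairness at the outset” look like in practice? The sketch below is one minimal, illustrative check, not a prescribed method: it compares selection rates across groups for a hypothetical automated recruitment screen and flags the result for human review when the gap is large. The group labels, sample data, and the 0.8 threshold (borrowed from the informal “four-fifths” rule of thumb) are all assumptions made for the example.

```python
# Hypothetical sketch: screening an automated recruitment model's outcomes
# for group disparity before relying on its decisions. Names, data, and the
# 0.8 threshold are illustrative assumptions, not a standard for any system.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, was_selected) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {group: sel / tot for group, (sel, tot) in counts.items()}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest selection rate across groups."""
    return min(rates.values()) / max(rates.values())

# Toy outcomes from a hypothetical screening model
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

rates = selection_rates(decisions)
ratio = disparate_impact_ratio(rates)
print(rates, ratio)

if ratio < 0.8:  # illustrative threshold: large gap, escalate to a person
    print("Selection rates diverge across groups: human review needed.")
```

A check like this does not make a system fair on its own; its value is that it forces a human decision point whenever the automated outcomes diverge across groups, rather than letting the disparity pass silently into production.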


The recent violent unrest caused by white supremacists at the Capitol building in Washington DC, compared with the handling and reporting of peaceful protests by the Black Lives Matter movement following the police brutality that led to the death of George Floyd, highlights the disparity in society. If that disparity is used as a baseline for decision-making, it will inevitably produce undesirable results. As Ruby Hamad so aptly describes in her book “White Tears Brown Scars”,


“racism is not so much embedded in the fabric of our society as it is the fabric.”


The same could surely be said of other under-represented or poorly represented groups, the very groups we have had to enact anti-discrimination laws to protect.


Generally, those who already benefit from the white, male, privileged status quo (and who balk at the idea of positive discrimination) may see human intervention that makes policy-type judgments as a step too far. In her thought-provoking book “Invisible Women: Exposing data bias in a world designed for men”, Caroline Criado Perez, through pages of revealing data and equally revealing data gaps, concludes that “certain men, who have grown up in a culture saturated by male voices and male faces, fear what they see as women taking power and public space that is rightfully theirs. This fear will not dissipate until we fill in that cultural gender data gap, and, as a consequence, men no longer grow up seeing the public sphere as their rightful domain”. To my mind, harnessing technology and the momentum behind AI to embed fair, inclusive, and unbiased outcomes can be a scalable way to redress these imbalances.


It can also be a transparent way to redress them. There is no need for AI to be a “black box” solution. By building transparency into decision-making and maintaining regular oversight, we can keep monitoring, questioning, and updating the bases for decisions over time.
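As an illustration of the kind of record that makes such oversight possible, here is a minimal, hypothetical sketch of an audit trail kept alongside each automated decision. The field names, example values, and the JSON-lines file are assumptions chosen for the example, not a reference design; the point is simply that each decision carries its inputs, version, and rationale so it can be questioned and revisited later.

```python
# Hypothetical sketch: append one auditable record per automated decision,
# so the basis for each decision can be reviewed, questioned, and updated.
# Field names, values, and the JSON-lines format are illustrative choices.

import json
from datetime import datetime, timezone

def record_decision(log_path, inputs, decision, model_version, rationale):
    """Append one decision record as a JSON line to the audit log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # which model/ruleset produced this
        "inputs": inputs,                # the features the decision relied on
        "decision": decision,            # the automated outcome
        "rationale": rationale,          # plain-language contributing factors
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Example usage with made-up values
record_decision(
    "decisions.jsonl",
    inputs={"years_experience": 4, "role": "analyst"},
    decision="advance_to_interview",
    model_version="2021-08-screening-v3",
    rationale=["relevant experience", "skills match"],
)
```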
