The growing role of internet technologies in human livelihoods has an ambiguous impact on notions of human equality and social status. Despite their immense promise for digital equality, this article posits that the current use of internet technologies is more likely to exacerbate the inequality gap and to neglect socially marginalised groups. In particular, its core claim is that internet technologies, and Artificial Intelligence (AI) above all, are likely to widen inequality gaps because of existing socio-cultural barriers in the physical sphere.
All Users Are Equal, but Some Users Are More Equal
Despite the promise of emerging technologies for innovation and financial prosperity, not all social groups enjoy their digital advantages equally. First, not all social groups can access these technologies at all, owing to inadequate connectivity and infrastructure in rural communities and poorer regions. Indeed, the literature suggests that low- and middle-income countries are more susceptible to the negative implications of AI and are unlikely to receive the same tremendous benefits as developed countries.
The World Economic Forum illustrates this point through several case studies in the Global South. In Kenya, for instance, data-driven lending for farming has exposed the heightened vulnerability of marginalised groups: as the report notes, rural farmers are less likely to have internet access and are therefore liable to be excluded from credit-scoring algorithms altogether. In Indonesia, screening algorithms may be biased against applicants whose educational history does not include elite universities, owing to historical socio-economic factors. Consequently, without addressing the socio-economic barriers facing disadvantaged groups, algorithms are likely to digitise and perpetuate the inequality gap.
Furthermore, according to a McKinsey report, the automation of traditional human occupations will substantially disrupt employment in developing countries. Compared with the Western world, AI is likely to automate a far larger share of positions and replace them with machines, as seen in several Southeast Asian countries. By earlier estimates, AI could displace more than 50% of all work activities in Indonesia, the Philippines, Thailand, and Malaysia, activities that account for billions of dollars in wages.
As regards potential benefits, a PwC report examined the global impact of AI across regions and again found substantial gaps between developed and developing countries. As the report states: ‘All geographic regions of the global economy will experience economic benefits from AI. North America and China stand to see the biggest economic gains with AI enhancing GDP by 26.1% and 14.5%… Latin America and other less developed markets are expected to lag behind somewhat, though despite lower uptake of AI themselves are still expected to see GDP gains of approximately 5% of GDP in 2030’.
In line with this empirical evidence, the literature points to the emergence of ‘data colonization’, whereby predominantly white, Western regions enjoy the benefits of emerging technologies while developing countries are denied the same advantages. Some posit that the exclusion of the Global South from the AI discourse exposes the potentially discriminatory impact of these technologies. A notable policy paper by Research ICT Africa describes how African individuals are excluded from the ethical discourse on AI and thereby deterred from accessing its essential benefits. As the paper explains, this digital exclusion has several causes, including improper data regulation, lack of transparency, and denied access to government data.
Notably, given the relatively early stage of AI algorithms, the costs of producing and accessing applications are likely to remain unaffordable for developing economies. Paradoxically, the goal of increasing diversity and inclusion through AI is, to some extent, serving the opposite end. In practice, Western data ethicists decide the parameters of responsible AI and its ethical codes of conduct, while vulnerable populations may be unable to access internet technologies in the first place. This article posits that AI will continue to aggravate the digital divide as long as it is controlled by one segment of the population. Ironically, those who stand to benefit most from AI, even in its nascent phase, have neither digital education nor realistic access to it. The transformative power of technology is therefore questionable when the existing barriers to its use reflect not a digital phenomenon but ingrained socio-cultural patterns of inequality.
This digital exclusion and the factors aligned with it call for a broader socio-cultural analysis, which exceeds the scope of this article. Briefly, the cultural alienation of the Global South from the digital sphere stems from direct physical factors, including improper infrastructure, historical discrimination, and the politicisation of data, such as increased censorship in autocratic regimes. Yet it also derives from an indirect factor: Western scholarship dominates the AI discourse and thereby shapes its opportunities and barriers for non-Western actors. This point is crucial to the argument above, as it highlights the indirect moral responsibility of data ethicists and practitioners to change current AI narratives and to widen the ethnic scope of its participants. Consequently, data inclusivity should be addressed at the infrastructural level, as well as through a shift in Western intellectual culture towards more diverse and representative voices.
Algorithmically Discriminated?
Beyond the societal and economic barriers to digital access, AI technologies are likely to amplify social inequalities, especially among vulnerable and marginalised groups, because of historically biased datasets. Since the current algorithmic phase still depends on human judgment and training, existing social and historical biases may be reflected in algorithmic decision-making. This has been exemplified in several discriminatory practices against Latino and African American borrowers by data-driven credit systems.
Likewise, the HR tech sector presents an ambiguous challenge for the creation of a fairer society. On the one hand, AI technologies have the potential to increase talent diversity and productivity in the workforce, to speed up recruitment, and to reduce the human biases of hiring managers. Nevertheless, the risk of discrimination and stigmatisation persists: AI is not fully autonomous, and unrepresentative, false, or biased datasets can lead to biased results. These pitfalls were evident in the notorious Amazon case, in which an AI-driven recruiting tool, trained on historically biased data, discriminated against female candidates. The underlying mechanism is illustrated in the sketch below.
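The following minimal, synthetic sketch, in no way a reconstruction of the Amazon system, shows how a classifier trained on historically biased hiring labels reproduces that bias even when the protected attribute itself is dropped, because a correlated proxy feature leaks it back in. All variable names and distributions here are illustrative assumptions.

```python
# A synthetic sketch (not the Amazon system): a model trained on historically
# biased hiring labels relearns the bias through a proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# 'gender' is 0 (men) or 1 (women); skill is identically distributed for both.
gender = rng.integers(0, 2, n)
skill = rng.normal(0, 1, n)

# Historical labels: equally skilled women were hired less often.
# This is the bias we would not want a model to learn.
hired = (skill + 0.8 * (gender == 0) + rng.normal(0, 1, n)) > 1.0

# Even if 'gender' is excluded from training, a feature correlated with it
# (e.g. keywords such as "women's chess club" on a CV) leaks it back in.
proxy = gender + rng.normal(0, 0.3, n)

X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

# Selection rates by group: the model recommends men far more often,
# despite identical skill distributions.
for g, name in [(0, "men"), (1, "women")]:
    print(f"{name}: historical rate {hired[gender == g].mean():.2f}, "
          f"model rate {pred[gender == g].mean():.2f}")
```

The point of the sketch is that removing the sensitive column is not enough: as long as the historical labels encode the bias, any feature correlated with group membership allows the model to relearn it.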
The risk of human bias in algorithmic output raises the broader question of whether the current algorithmic phase of AI can truly redress historical injustice. While current applications may not yet show the full potential of AI to address the digital divide, two observations are worth making. On the one hand, the hi-tech sector offers innovative data science solutions for correcting data biases in HR tech (Garr & Jackson, 2019; Houser, 2019), alongside pioneering research methodologies for identifying automated discriminatory patterns, such as the ‘Conditional Demographic Disparity’ measure proposed by Wachter, Mittelstadt, and Russell, sketched below.
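For readers unfamiliar with the measure, the following sketch gives one plain-Python reading of the Conditional Demographic Disparity idea: demographic disparity (a group's share among rejected candidates minus its share among accepted ones) is computed within strata of a legitimate explanatory attribute and then averaged, weighted by stratum size. The dataframe, column names, and strata are hypothetical, and this is an illustration of the concept rather than the authors' own implementation.

```python
# A conceptual sketch of Conditional Demographic Disparity (CDD).
# Column names ("gender", "hired", "level") are hypothetical.
import pandas as pd

def demographic_disparity(df, group_col, outcome_col, group):
    """DD = the group's share among rejected minus its share among accepted.
    Positive values mean the group is over-represented among rejections."""
    rejected = df[df[outcome_col] == 0]
    accepted = df[df[outcome_col] == 1]
    if len(rejected) == 0 or len(accepted) == 0:
        return 0.0
    return ((rejected[group_col] == group).mean()
            - (accepted[group_col] == group).mean())

def conditional_demographic_disparity(df, group_col, outcome_col,
                                      strata_col, group):
    """Size-weighted average of DD across strata of a legitimate
    explanatory attribute (e.g. job level or region)."""
    total = len(df)
    return sum(len(stratum) / total
               * demographic_disparity(stratum, group_col, outcome_col, group)
               for _, stratum in df.groupby(strata_col))

# Hypothetical usage on a tiny hiring dataset:
df = pd.DataFrame({
    "gender": ["f", "m", "f", "m", "f", "m", "f", "m"],
    "hired":  [0, 1, 0, 1, 1, 1, 0, 0],
    "level":  ["junior"] * 4 + ["senior"] * 4,
})
print(conditional_demographic_disparity(df, "gender", "hired", "level", "f"))
```

On this reading, a CDD near zero suggests the overall disparity is explained by the conditioning attribute, whereas a large positive value flags disparity that persists within strata.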
On the other hand, despite its market potential, the literature highlights substantial challenges to fair automation, owing to the complexity and opacity of algorithmic discrimination and the current accountability loopholes of AI. As Wachter argues, an individual subject to discrimination cannot always identify an algorithm's true source of bias or intent. Will AI be the right instrument for promoting concepts such as ‘contextual equality’? Which metrics should society set for algorithmic training if human history is ‘flawed’ and requires bias testing? The bias-testing methods developed so far are grounded in European scholarship and therefore mainly fit that regional setting. Any attempt to widen the Western scope of AI fairness will require policymakers and data scientists to establish better digital infrastructure outside Europe, together with correspondingly adapted modelling tools.
Conclusion
As this article has sought to demonstrate, the current use of emerging technologies is more likely to widen the inequality gap. Only once the preliminary steps of digital access and education are fulfilled can society begin to discuss the transformative role of emerging technologies in tackling that gap. Notably, sophisticated data science tools can mitigate data biases temporarily and contextually. However, biased technological outputs reflect not a core technological flaw but a primordial societal hierarchy, and therefore require interdisciplinary solutions that promote the digital representation of disadvantaged groups.
The discussion above highlights a notable philosophical controversy: should society set aside its history in order to model a ‘fairer’ society that is not necessarily representative of the population? Crucially, should data ethicists dictate how society is constructed when they themselves represent a particular demographic segment? With these questions in mind, I posit that technology is more likely to aggravate the inequality gap so long as the preliminary steps of digital access and education are not properly addressed. The advantages of AI fairness exceed the scope of this article and deserve closer scrutiny, both for their ethical contribution and for the in-depth algorithmic refinement that repeated cycles of bias testing provide for future sophisticated AI-driven products, services, and applications.
To counter the exacerbation of inequality by digital technologies, policymakers should design proper mechanisms of digital governance in emerging markets and promote digital access and literacy. As humans still control the spread and moderation of these technologies, and to some extent teach the machines how to act, the ethical risks they entail, including historical bias, incorrect medical treatment, and gender inequality, should be governed first by humans, rather than by temporary bias fixes.
The current governance of these technologies is inadequate, as no existing ethical framework is sufficiently socially and culturally nuanced to address the risks above. Digital technologies transform human livelihoods and render geographic borders obsolete, bypassing the constraints of time and place through bits and cables. Humans, however, should be borderless too, as part of a representative digital ecosystem.
Maya Sherman is an MSc student at the Oxford Internet Institute.