17 March 2025

What we see when we apply equity lenses to AI

A recent report by the Technology Association of Grantmakers (TAG) highlights that 81% of foundation professionals in the US are experimenting with AI, yet only 4% have integrated it organisation-wide. Furthermore, only 30% of foundations have AI policies, and just 9% have both policies and advisory committees. Similarly, a survey by the Association of Charitable Foundations (ACF) found that only 18% of UK foundations actively use AI, with an additional 40% exploring its potential. Barriers to broader adoption include privacy and security concerns, a lack of skills and uncertainty about AI’s role in philanthropy. These findings closely mirror those of our own report and our recent chapter in The Routledge Handbook of Artificial Intelligence and Philanthropy. Addressing these challenges requires investment in responsible AI strategies, strengthening data governance and fostering a culture of continuous learning.

While much attention is given to AI’s ability to enhance productivity within philanthropic organisations, it is equally important to examine its impact on equity. To what extent does AI reinforce existing disparities, and how can these risks be mitigated? Conversely, where does AI hold the potential to be a transformative force for advancing equity? These questions remain largely absent from dominant AI narratives, which often frame AI in terms of efficiency, progress and economic growth while neglecting its implications for social justice.

In reality, one of the most pressing issues with AI is the inherent bias within these systems, which poses a significant threat to equity. In the current digital landscape, particularly with the rise of unmonitored social media, the rollback of content moderation and the growing influence of far-right ideologies, it’s almost inevitable that such biases will be reflected in AI models, as they are shaped by the data we generate online.

These biases can have serious consequences in real-world applications, further entrenching systemic inequalities. For example, facial recognition technology has been shown to have higher error rates for people of colour, leading to wrongful identifications and disproportionate surveillance. Similarly, AI used in recruitment to automatically filter job applicants can unintentionally introduce algorithmic bias, unfairly disadvantaging legally protected groups.
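
To make the recruitment example concrete, the sketch below shows one simple check auditors often apply to automated screening outcomes: the ‘four-fifths rule’, which flags any group whose selection rate falls below 80% of the most-favoured group’s rate. This is a minimal illustration in Python with made-up numbers and hypothetical group labels, not a description of any particular vendor’s system.

    # Illustrative sketch of a 'four-fifths rule' check on the outcomes of a
    # hypothetical automated CV-screening tool. All figures are made up.
    from collections import defaultdict

    # (applicant group, passed automated screen) -- hypothetical records
    outcomes = [
        ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    passed, total = defaultdict(int), defaultdict(int)
    for group, selected in outcomes:
        total[group] += 1
        passed[group] += int(selected)

    rates = {g: passed[g] / total[g] for g in total}  # selection rate per group
    reference = max(rates.values())                   # most-favoured group's rate

    for group, rate in rates.items():
        ratio = rate / reference
        status = "potential adverse impact" if ratio < 0.8 else "within threshold"
        print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {status}")

A check like this only surfaces disparities in outcomes; it says nothing about why they arise or whether the screening criteria themselves are justified, which is precisely where human judgement and inclusive governance come in.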

In education, AI presents both opportunities and challenges for equity and inclusion. AI can help equalise learning opportunities for people with disabilities by creating more accessible educational environments, and Intelligent Tutoring Systems (ITS) can personalise learning experiences to better cater to individual needs. Yet these advancements are not without risks. The danger of techno-ableism underscores the importance of involving people with disabilities in AI development to ensure such tools are genuinely inclusive. Additionally, the bias inherent in AI systems may inadvertently reinforce societal prejudices, and schools in socio-economically disadvantaged areas, already lacking adequate digital resources, risk seeing the ‘digital divide’ widen further.

These are the kinds of discussions we should be having. Instead, US-based developers of major Large Language Models (LLMs) have largely controlled the narrative around AI governance, pushing for more lenient data protection laws to accelerate AI innovation, often clashing with the EU’s stricter regulations. Figures like Musk, Zuckerberg and members of the US administration have advocated for less stringent data protection, claiming it will benefit AI systems. Meanwhile, within the EU, emerging AI technologies such as Mistral AI’s ‘Le Chat’ and the upcoming OpenEuroLLM have been developed with greater respect for EU data laws, though they remain vulnerable to perpetuating existing inequalities.

The Artificial Intelligence Narratives: A Global Voices Report explores how AI is framed in public discourse and how powerful actors shape these narratives to influence perceptions of AI’s role in society. While AI is often portrayed as a tool for innovation, concerns persist regarding its use for surveillance, political control and the exacerbation of existing inequalities, particularly in nations with histories of repression.

Moreover, AI discourse remains heavily centred on Western nations and China, often overlooking its complex impact across Global Majority countries. AI safety frameworks from high-income nations frequently fail to address the lived experiences of communities in the Global Majority. Despite active AI development in these regions, including national AI strategies in Africa and language models in Latin America, representation at major AI safety summits remains limited. Western-centric benchmarks, such as Massive Multitask Language Understanding (MMLU), further reinforce these disparities by inadequately assessing AI performance in diverse linguistic and cultural contexts.

Global Majority researchers are advocating for localised governance, transparency and investment in region-specific AI safety initiatives. Examples of this progress include AI research hubs in Africa, governance leadership in Singapore, bias mitigation efforts in Latin America and emerging AI safety discussions in Oceania and the Caribbean. These efforts signal a shift towards more inclusive AI governance that accounts for diverse global perspectives.

One pioneering initiative in this space is Abundant Intelligences, which reimagines AI through an equity lens by integrating Indigenous Knowledge (IK) systems into its design and development. Grounded in Indigenous epistemologies, this programme prioritises cultural sustainability, community sovereignty and technological abundance over scarcity. By incorporating Indigenous languages, storytelling traditions, environmental stewardship and socio-cultural intelligence into AI, Abundant Intelligences challenges dominant narratives and power structures that marginalise Indigenous perspectives. Through deep collaboration with Indigenous communities, researchers and institutions, it fosters the development of AI systems that are ethical, inclusive and responsive to diverse ways of knowing and being.

When philanthropic organisations work together with other stakeholders, their influence on AI governance becomes significantly more powerful. Research institutions, think tanks and even private organisations working for the public benefit are well equipped to push for the technological change that can enable more equitable functioning and governance of AI technologies. By engaging in conversations with these actors, philanthropy can contribute to a more holistic effort towards creating equitable AI tools and play a critical role in setting ethical guidelines. In this sense, AI can be compared to a garden: without diverse seeds (data) and thoughtful tending (collaboration), it can grow wild and unmanageable, producing results that are poorly aligned with non-profit missions.

By fostering partnerships that bridge gaps between technical experts, policymakers and community leaders, foundations help ensure that AI development serves diverse populations and grows in an equitable way. This approach not only amplifies underrepresented voices but also opens avenues for addressing issues that are typically out of reach for foundations acting alone, such as bias, access disparities and data privacy. Initiatives like the Partnership on AI and the European AI & Society Fund exemplify how foundations can collaborate with think tanks, businesses and civil society groups to ensure AI is developed in a socially responsible and inclusive manner. These collaborations aim to shape AI policies that promote fairness, human rights and democratic values.

We encourage foundations to take on a more significant role in the AI conversation: while they may not be directly involved in technology development, they can stay close enough to influence outcomes that strengthen human connections, promote equity and create opportunities to keep learning in this burgeoning field, especially through collaborations with diverse constituents.

Authors

Sevda Kilicalp
Head of Knowledge and Learning, Philea
Jack O’Neill
Data Officer, Philea