From Algorithms to Accountability: What Global AI Governance Should Look Like

The International Telecommunication Union (ITU) is a specialized agency of the United Nations. Credit: ITU/Rowan Farrell

 
Artificial intelligence holds vast potential but poses grave risks if left unregulated, UN Secretary-General António Guterres told the Security Council on September 24.

By Chimdi Chukwukere
ABUJA, Nigeria, Oct 14 2025 – Recent research from Stanford’s Institute for Human-Centered AI warns that bias in artificial intelligence remains deeply rooted even in models designed to avoid it, and can worsen as models grow. From hiring systems that favor men over women for leadership roles to models that misclassify darker-skinned individuals as criminals, the stakes are high.

Yet annual dialogues and multilateral processes, such as those recently provided for in Resolution A/RES/79/325, simply cannot keep pace with AI's technological development, and the cost of that lag is high.

Hence, for accountability purposes and to raise the cost of failure, why not give tech companies, whose operations are now state-like, participatory roles at the UN General Assembly?

When AI Gets It Wrong: 2024’s Most Telling Cases

In one of the most significant AI discrimination cases moving through the courts, the plaintiff alleges that Workday’s popular AI-based applicant recommendation system violated federal anti-discrimination laws because it had a disparate impact on job applicants based on race, age, and disability.

Judge Rita F. Lin of the US District Court for the Northern District of California ruled in July 2024 that Workday could be an agent of the employers using its tools, which subjects it to liability under federal anti-discrimination laws. This landmark decision means that AI vendors, not just employers, can be held directly responsible for discriminatory outcomes.

In another case, University of Washington researchers found significant racial, gender, and intersectional bias in how three state-of-the-art large language models ranked resumes. The 2024 study fed the models identical resumes, varying only the names to suggest different racial and gender identities, and found that they favored white-associated names over equally qualified candidates with names associated with other racial groups.
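The logic of such a counterfactual audit is simple enough to sketch. The snippet below is illustrative only, not the researchers' code; the `score_resume` function, the name lists, and the resume template are hypothetical stand-ins for whatever system is under test:

```python
# Illustrative name-swap audit: the resume is identical, only the name
# varies, so any gap in average score is attributable to what the name
# signals. score_resume() is a hypothetical stand-in for the model
# being audited.
from statistics import mean

NAME_GROUPS = {
    "white-associated": ["Todd Becker", "Anne Murphy"],
    "black-associated": ["Darnell Robinson", "Lakisha Washington"],
}

RESUME = (
    "Name: {name}\n"
    "Experience: 5 years as a data analyst\n"
    "Education: BSc, Statistics"
)

def score_resume(text: str) -> float:
    # Placeholder so the sketch runs; a real audit would query the
    # resume-ranking model here and return its score for this text.
    return 0.5

def audit() -> dict[str, float]:
    # Average score per name group for the same underlying resume.
    return {
        group: mean(score_resume(RESUME.format(name=n)) for n in names)
        for group, names in NAME_GROUPS.items()
    }

if __name__ == "__main__":
    print(audit())  # large gaps between groups indicate name-based bias
```

Because everything except the name is held constant, even small score gaps between groups are hard to explain away as anything but bias.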

The financial impact is staggering.

A 2024 DataRobot survey of over 350 companies found that 62% had lost revenue due to AI systems that made biased decisions, proof that discriminatory AI is not just a moral failure but a business disaster. For so young a technology, losses on that scale should alarm everyone.

Time is running out.

A 2024 Stanford analysis of vision-language models found that increasing training data from 400 million to 2 billion images made larger models up to 69% more likely to label Black and Latino men as criminals. In large language models, implicit bias testing showed consistent stereotypes: women were more often linked to humanities over STEM, men were favored for leadership roles, and negative terms were disproportionately associated with Black individuals.
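For readers wondering how "implicit bias testing" works in practice, one well-established family of methods measures associations in a model's embeddings (Caliskan et al., 2017). The sketch below is a generic illustration of that idea, not the Stanford team's method; the toy 2-D vectors are invented purely so the example runs, where a real audit would use the model's own embeddings:

```python
# WEAT-style association test: how much closer a word sits to one
# attribute set (e.g. STEM terms) than another (e.g. humanities terms).
import math

def cosine(u: list[float], v: list[float]) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def association(word_vec, a_vecs, b_vecs) -> float:
    """Mean similarity to attribute set A minus attribute set B.
    Positive means the word leans toward A."""
    mean_a = sum(cosine(word_vec, a) for a in a_vecs) / len(a_vecs)
    mean_b = sum(cosine(word_vec, b) for b in b_vecs) / len(b_vecs)
    return mean_a - mean_b

# Invented toy vectors, standing in for real model embeddings.
woman = [0.9, 0.1]
man = [0.1, 0.9]
stem = [[0.2, 0.8]]        # attribute set A
humanities = [[0.8, 0.2]]  # attribute set B

print(association(woman, stem, humanities))  # negative: leans humanities
print(association(man, stem, humanities))    # positive: leans STEM
```

A model with no gendered associations would score both words near zero; the stereotypes reported above show up as consistent, opposite-signed gaps.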

The UN needs to act now, before these patterns harden further. And frankly, on its current cadence, the UN cannot keep up with the pace of these developments.

What the UN Can—and Must—Do

To prevent AI discrimination, the UN must lead by example and work with governments, tech companies, and civil society to establish global guardrails for ethical AI.

Here’s what that could look like:

Working with Tech Companies: Technology companies have become the new states and should be treated as such. They should be invited to the UN table and granted participatory privileges that both ensure and enforce accountability.

This would help ensure that the pace of technological development, and its impacts, is self-reported before UN-appointed Scientific Panels reconvene. As many experts have noted, the intervals between these annual convenings are already long enough for major innovations to slip past oversight.

Developing Clear Guidelines: The UN should push for global standards on ethical AI, building on UNESCO’s Recommendation on the Ethics of Artificial Intelligence and OHCHR’s findings. These should include rules for inclusive data collection, transparency, and human oversight.

Promoting Inclusive Participation: The people building and regulating AI must reflect the diversity of the world. The UN should set up a Global South AI Equity Fund to provide resources for local experts to review and assess tools such as LinkedIn’s NFC passport verification.

Working with Africa’s Smart Africa Alliance, the goal would be to create standards together that make sure AI is designed to benefit communities that have been hit hardest by biased systems. This means including voices from the Global South, women, people of color, and other underrepresented groups in AI policy conversations.

Requiring Human Rights Impact Assessments: Just as we assess the environmental impact of new projects, we should assess the human rights impact of new AI systems before they are rolled out.

Holding Developers Accountable: When AI systems cause harm, there must be accountability. This includes legal remedies for those who are unfairly treated by AI. The UN should create an AI Accountability Tribunal within the Office of the High Commissioner for Human Rights to look into cases where AI systems cause discrimination.

This tribunal should have the authority to issue penalties, such as suspending UN partnerships with companies that violate these standards, in cases like the Workday litigation.

Supporting Digital Literacy and Rights Education: Policymakers and citizens need to understand how AI works and how it might affect their rights. The UN can help promote digital literacy globally so that people can push back against unfair systems.

Mandating Intersectional Audits: Lastly, AI systems should be required to undergo intersectional audits that check for combined biases, such as those linked to race, disability, and gender. The UN should also fund organizations to create open-source audit tools that can be used worldwide; a minimal sketch of what such a tool checks follows.
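One widely used yardstick such a tool could implement is the "four-fifths rule" from US employment-selection guidance, applied per combination of attributes rather than per single axis. The sketch below is a hypothetical illustration with invented data, not any existing tool:

```python
# Intersectional disparate-impact check: selection rates are computed
# per (race, gender) pair, so combined biases that single-axis audits
# can miss still show up. Data below is invented for illustration.
from collections import defaultdict

def selection_rates(records):
    """records: (race, gender, selected) tuples -> rate per pair."""
    totals, hits = defaultdict(int), defaultdict(int)
    for race, gender, selected in records:
        key = (race, gender)
        totals[key] += 1
        hits[key] += int(selected)
    return {k: hits[k] / totals[k] for k in totals}

def four_fifths_violations(rates, threshold=0.8):
    # Flag any subgroup whose rate falls below 80% of the best group's.
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

decisions = [
    ("white", "man", True), ("white", "man", True),
    ("white", "woman", True), ("white", "woman", False),
    ("black", "man", True), ("black", "man", False),
    ("black", "woman", False), ("black", "woman", False),
]
print(four_fifths_violations(selection_rates(decisions)))
# -> flags ('white','woman'), ('black','man'), and ('black','woman'),
#    whose rates fall below 80% of the best-off group's rate
```

Because the rates are computed per pair, a group such as Black women can be flagged even when neither race nor gender alone crosses the threshold, which is exactly the pattern single-axis audits miss.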

The Road Ahead

AI is not inherently good or bad. It is a tool, and like any tool, its impact depends on how we use it. If we are not careful, AI could slow real problem-solving, deepen existing inequalities, and create new forms of discrimination that are harder to detect and harder to fix.

But if we take action now—if we put human rights at the center of AI development—we can build systems that uplift, rather than exclude.

The UN General Assembly meetings may have concluded for this year, but the work of ethical AI has not. The United Nations remains the organization with the credibility, the platform, and the moral duty to lead this charge. The future of AI, and the future of human dignity, may depend on it.

Chimdi Chukwukere is an advocate for digital justice. His work explores the intersection of technology, governance, Big Tech, sovereignty, and social justice. He holds a Master’s in Diplomacy and International Relations from Seton Hall University and has been published in Inter Press Service, Politics Today, International Policy Digest, and The Diplomatic Envoy.

IPS UN Bureau

 


