Africa does not lack laws. It lacks power. That distinction sits at the heart of the Tech Justice in Africa report, launched by the Global Centre on AI Governance in partnership with the Centre for Human Rights at the University of Pretoria, and supported by Luminate.
The report arrives as artificial intelligence reshapes how millions of Africans access healthcare, find work, navigate government services, and participate in public life. Yet the systems driving these changes were built elsewhere, by people who hold the capital, own the infrastructure, and write the rules.
The report focuses on Nigeria, Kenya, and South Africa as anchoring case studies, chosen for their regional influence and the concentration of digital investment flowing through their borders. What researchers found across all three was a familiar and frustrating pattern: sophisticated legal frameworks producing weak outcomes.
Frameworks That Live on Paper, Not in Practice
South Africa operates the Protection of Personal Information Act. Nigeria has the Nigeria Data Protection Act. Kenya enforces its own Data Protection Act. These are not thin or poorly written laws. The African Union has added layers of continental architecture: the Declaration on Freedom of Expression and Access to Information, the Convention on Cybersecurity and Personal Data Protection, and a Digital Transformation Strategy running through to 2030. The report maps all of it carefully, then documents precisely where it breaks down.
The breakdown is institutional. Mukelani Dimba of the Information Regulator of South Africa put the problem in plain numbers. “We have to regulate a sector that has over three million data processors,” he said, “and my institution has just about 100 members of staff.”
That ratio is not a management failure. It reflects a deliberate structural decision by governments to mandate regulation without resourcing it. In South Africa, the Information Regulator issued a single infringement notice imposing a $279,000 fine against the Department of Justice and Constitutional Development for breaches of the Protection of Personal Information Act. A single fine, against a government department, years into the law’s operation. That is the scale of enforcement the continent currently produces.
The numbers tell only part of the story. When civil society organisations submitted information requests to four major digital platforms ahead of South Africa’s 2024 national elections, seeking to understand what measures those platforms had taken to protect electoral integrity, three of the platforms refused. Their argument: the Information Regulator lacked jurisdiction over entities domiciled outside the country. The investigation stalled before it reached the merits of any complaint. As Dimba observed, “these laws have to be updated. How people exercise their right of access to information is no longer a paper-based system.”
Old Extraction, New Wires
The report refuses to treat digital inequality as a fresh problem requiring fresh solutions. It situates the current moment inside a longer history of extraction and contested sovereignty. Dr. Rachel Adams, founder of the Global Centre on AI Governance, pressed this point from the outset.
“We need a new language with which to talk about these issues,” she said, “and with which to come together and advocate for change.”
That language, the report insists, must be intersectional. The seven thematic areas it examines (gender justice, racial justice, labour rights in the platform economy, climate and environmental harm, children's online safety, healthcare access, and privacy and surveillance) do not operate independently. They compound one another.
Nigeria offers a sharp illustration. Africa’s most populous country, with an estimated 240 million people, has seen rapid digital expansion through mobile broadband and submarine cable infrastructure. Yet the country’s biometric national identity system excluded women, rural populations, and persons with disabilities from full participation. The gig economy expanded just as fast. Uber, Bolt, and similar platforms recruited hundreds of thousands of drivers while classifying them as independent contractors, stripping them of the labour protections that employment law would otherwise guarantee. Workers absorbed fuel costs, accident liability, and equipment expenses while algorithms determined their pay rates and deactivated their accounts without explanation or appeal.
Kenya presents a parallel story in a different register. Celebrated as the Silicon Savannah, it hosts regional offices for Microsoft, Google, Meta, and Amazon Web Services, and has built genuine strength in fintech and mobile money through platforms like M-PESA. Kenya formalised its first licensing regime for crypto and stablecoins, tightened data rules, and launched an AI strategy in 2025, signalling a government that wants to shape the digital economy rather than simply absorb it. But the same country saw internet freedom decline in recent years as authorities integrated digital surveillance tools into electoral monitoring, and used them to track protest activity during the cost-of-living demonstrations that drew hundreds of thousands of Kenyans into the streets.
South Africa carries the continent's most advanced digital infrastructure. As of mid-2025, South Africa hosts 56 data centre facilities, compared to Kenya with 19 and Nigeria with 17. It plays a leading role in AI governance discussions regionally and globally. Yet it remains the most unequal country in the world by Gini coefficient. That inequality does not disappear when people go online. It follows them there, concentrated through platform design, algorithmic management, and the growing private surveillance of township communities through CCTV networks, a development that researchers described in the report as carrying the spectre of a digital apartheid.
The Concentration of Power Nobody Wants to Name
Toyin Akinniyi of Luminate named it directly.
“Surveillance capitalism is not inevitable,” she told the room. “Exploitation is not the price of innovation.”
Nigeria’s Federal Competition and Consumer Protection Commission proved her point. After a 38-month joint investigation with the Nigeria Data Protection Commission, the FCCPC issued a Final Order imposing a $220 million administrative penalty, concluding that Meta and WhatsApp engaged in discriminatory and exploitative practices against Nigerian consumers. The investigation found that Meta shared the data of Nigerians without authorisation, denied consumers the right to self-determine the use of their data, and applied discriminatory practices alongside abuse of market dominance. Meta appealed. The Tribunal rejected the appeal in April 2025 and upheld the fine. Meta then threatened to withdraw Facebook, Instagram, and WhatsApp from Nigeria entirely. The regulators condemned this as “a calculated move aimed at inducing negative public reaction” to their decision.
The threat reveals something essential about how platform power operates in Africa. In the EU, users are presented with clear opt-in and opt-out tools, often delivered through in-app prompts, and are informed before their data is used to train AI systems or sold to advertisers. In Nigeria? No notice. No choice. No opt-out. The same company. Different rules for different users, based on their geography. This is the two-tier system the Tech Justice in Africa report documents and challenges.
The opaqueness, as Dr. Leilani Kaya-Carter of the South African Human Rights Commission argued, is not accidental.
“The opaqueness is by design. Because those who do not understand the language cannot use the same language to seek redress.”
What Proactive Accountability Actually Looks Like
Kaya-Carter offered one of the most instructive examples from the day. Her commission identified that publicly accessible African government documents were not machine-readable, meaning African sources were being systematically excluded from the training data used to build large language models. The commission opened an own-initiative investigation, audited government documents, and communicated its findings to the relevant minister. No fine. No litigation. By March 2025, government publishers had committed to making all online documents accessible, both for citizens using screen readers and for AI systems drawing on African data.
The same logic of proactive coordination drives South Africa’s ICT and Media Regulators Forum, which brings together the Information Regulator, the Film and Publication Board, the Independent Communications Authority, and the South African Domain Name Authority. Rather than forcing victims to navigate four separate institutions for a single complaint involving children’s privacy, online safety, and broadcasting standards, the Forum creates a shared platform for collective response. UNESCO hosted a global regulators conference in Pretoria in February 2025 and showcased the model as a potential blueprint for the region.
The Generation That Cannot Wait
Kristen Grau, a master’s student at the Centre for Human Rights, challenged a comfortable assumption.
“Just because you’re a digital native doesn’t mean you have digital understanding,” she said. Being online does not equal being equipped to recognise algorithmic bias, data monetisation, or the design choices that turn platforms into instruments of surveillance.
With 66 percent of enterprises already moving away from hiring junior developers, the standard degree pipeline produces graduates whose skills AI will replicate before they reach their second job. The report calls for curricula to pivot from content production toward critical analysis: training students to audit AI systems, fact-check outputs, and interrogate the assumptions embedded in the tools shaping their world.
Professor Admire Mare of the University of Johannesburg grounded this in something harder to legislate. “Technology incorporates people into a network system where there is very little room to manoeuvre.” An Uber driver in Nairobi, a content moderator in Lagos, an influencer in Johannesburg, all participate in platform economies where the price is set by an algorithm, the terms change without consultation, and the exit costs fall entirely on the worker.
Platforms can afford to ignore individuals. They cannot so easily ignore regional solidarity. From Kenya to Nigeria, Senegal to Zambia, African civil society is uniting around a shared demand: that digital technology must serve the public good, not profit at the cost of people’s rights.
That solidarity is the foundation the Tech Justice in Africa report builds toward. Not better frameworks alone, but the political will to enforce them, the resources to sustain civil society beyond project cycles, and the confidence to insist that Africa’s data, languages, laws, and communities belong at the centre of any system that claims to serve them. The continent does not need to be rescued. It needs the room and the resources to act on what it already knows.