When the Algorithm is Blind: AI, Data Bias, and the South African Patient
Matome Ponego Letswalo
Have you ever clipped a small device onto your finger to check your oxygen during an illness? Many South Africans did just that during the COVID-19 pandemic. Those little pulse oximeters gave a readout of blood oxygen levels, a potential lifesaver telling you when to rush to the hospital. But here’s a startling fact: those gadgets often work less well on people with darker skin. In other words, if you’re Black, a dangerously low oxygen level could go undetected. This isn’t science fiction or a malicious plot; it’s a real example of how bias in technology can literally become a matter of life and death.
Data Bias in Healthcare AI: Why would a seemingly objective medical device have a racial bias? The answer lies in data. Artificial intelligence (AI) systems and advanced gadgets learn from data patterns. If the data used to design or train these tools isn’t diverse or representative, the tools can end up “blind” to certain groups of people. In the case of pulse oximeters, many were calibrated primarily on lighter-skinned patients. The result? Less accuracy for darker skin tones, because the device’s light sensors and algorithms weren’t tuned for the higher melanin levels common in African populations. This is just one illustration of a broader problem: when AI in healthcare is fed biased or incomplete data, it can reinforce existing inequalities instead of fixing them.
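To see how a skewed calibration sample produces exactly this failure, consider the toy simulation below (a minimal sketch in Python: the linear “sensor model,” the melanin offset, and every number are invented for illustration, not real oximeter physics). A device calibrated almost entirely on lighter-skinned readings ends up systematically over-reading oxygen for the group it rarely saw.

```python
# Toy sketch, not real oximeter physics: every number here is invented.
import numpy as np

rng = np.random.default_rng(0)

def raw_signal(spo2, melanin_shift):
    # Pretend the optical signal is linear in SpO2, plus a small
    # melanin-related offset and some sensor noise.
    return 0.01 * spo2 + melanin_shift + rng.normal(0, 0.002, spo2.shape)

true_spo2 = rng.uniform(85, 100, 1000)
light = raw_signal(true_spo2, 0.00)  # assumed offset for lighter skin: none
dark = raw_signal(true_spo2, 0.03)   # assumed small offset for darker skin

# Calibrate the device on a skewed sample: 95% lighter-skinned readings.
cal_signal = np.concatenate([light[:950], dark[:50]])
cal_truth = np.concatenate([true_spo2[:950], true_spo2[:50]])
slope, intercept = np.polyfit(cal_signal, cal_truth, 1)

def device_reading(signal):
    return slope * signal + intercept

print("mean error, lighter skin:", np.abs(device_reading(light) - true_spo2).mean())
print("mean error, darker skin:", np.abs(device_reading(dark) - true_spo2).mean())
# The second error is several points larger, and in the dangerous direction:
# the device reads oxygen as higher than it really is.
```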
South Africa’s diverse society, with its deep historical divides, provides a critical backdrop for this issue. AI technologies are increasingly used in our health sector, from apps that screen for disease to algorithms that help allocate medical resources. They promise to improve patient outcomes and streamline care. But if those systems are trained on data that reflects past inequities, they may unwittingly perpetuate them. Professor Tshilidzi Marwala, former Vice-Chancellor of the University of Johannesburg (UJ), has warned of “the rise of machines and digital apartheid”: discrimination by algorithms as a social weapon, or simply “algorithmic apartheid,” where digital systems discriminate by inheriting the biases of a segregated past. For instance, an AI diagnostic tool developed overseas might not recognize an illness manifestation common among African patients if most of its training images or records came from Europe or North America. In a country still healing from racial disparities, the last thing we want is new technology that underserves Black South Africans or rural communities because it doesn’t “see” them in its data.
Real-World Consequences for Patients: These concerns aren’t just theoretical. A famous international example revealed how bias can creep in: a 2019 study published in the journal Science found that a widely used hospital risk-prediction algorithm in the US systematically underestimated the health needs of Black patients. Why? The AI was trained using healthcare spending as a proxy for health, and historically less money was spent on Black patients’ care, so the program assumed they were “healthier” than they really were. If such a tool were applied in South Africa without adjustment, it might similarly favor patients from wealthier (often white) backgrounds for additional care programs while overlooking sicker patients in underprivileged communities. In practical terms, an algorithm might send a healthier suburban patient to a disease management program while ignoring a sicker township patient. That kind of skewed decision-making directly affects who gets extra attention from doctors, special medications, or follow-ups. It’s a sobering thought that an algorithm’s hidden bias could determine medical priorities, potentially leading to delayed treatment for those who need it most.
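A back-of-the-envelope sketch makes the proxy trap concrete (purely illustrative: the 40% spending gap, the top-10% cutoff, and the “high-need” bar below are assumptions, not figures from the study). Rank patients by spending instead of sickness, and equally sick patients from the lower-spending group fall below the enrolment line.

```python
# Illustrative only: the spending gap and all thresholds are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
sickness = rng.exponential(1.0, n)   # true health need (unseen by the model)
group_b = rng.random(n) < 0.5        # the historically under-served group

# Assumption: at the same sickness level, 40% less was spent on group B.
spending = sickness * np.where(group_b, 0.6, 1.0)

# The "algorithm": enrol the top 10% by predicted cost (spending is its label).
cutoff = np.quantile(spending, 0.90)
enrolled = spending >= cutoff

for name, mask in [("group A", ~group_b), ("group B", group_b)]:
    high_need = sickness[mask] > 2.0   # arbitrary bar for "really needs help"
    print(f"{name}: {enrolled[mask][high_need].mean():.0%} of high-need patients enrolled")
# Group B's high-need patients are enrolled far less often, purely because
# the model learned to predict spending, not sickness.
```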

Case Study: Bias in Medical Aid Algorithms: South Africa has already had a wake-up call on this front. In 2019, a group of Black healthcare providers raised alarms that they were being unfairly flagged for fraud by medical aid schemes. This led to a formal inquiry known as the Section 59 investigation. The findings, first released in 2021 and confirmed in a final report in 2025, were eye-opening: the algorithms used by major medical schemes to detect fraudulent claims were indeed biased against Black practitioners. Black doctors were far more likely to be flagged and audited than their white counterparts, in some cases three to six times more likely, according to the investigation’s data.
Why does this matter for patients? Consider a Black general practitioner or specialist serving a community: if they are wrongly penalized or put under unjustified scrutiny due to a biased algorithm, their practice suffers. They might be reimbursed late or not at all for treatments provided, or even barred from a network. In the end, their patients, often from the same disadvantaged communities, lose access to care. In the Section 59 inquiry, some doctors reported having to curtail services because of these investigations. The algorithm was supposed to catch fraudsters neutrally, but because it mirrored societal biases (or possibly historical patterns in the data), it ended up over-policing certain groups.
This is a prime example of how an AI system, presumed objective, can amplify inequality: the very tools designed to improve efficiency and fairness in healthcare can backfire if not carefully managed. South Africa’s Health Minister acknowledged these findings, and there have been calls for algorithmic transparency (requiring these schemes to reveal how their software flags people) and for fixes to ensure fair, data-driven practices that don’t target professionals simply for being Black. The phrase “the algorithm is blind” rings true here: it was blind to fairness, seeing only distorted patterns.
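This is also exactly the kind of bias a simple audit can surface. The sketch below computes per-group flag rates and their ratio; the counts are invented toy numbers, not the inquiry’s data, though the resulting ratio happens to fall inside the reported range.

```python
# Toy audit: 100 practitioners per group, with invented flag counts.
from collections import Counter

records = (
    [("black", True)] * 30 + [("black", False)] * 70
    + [("white", True)] * 8 + [("white", False)] * 92
)

flags, totals = Counter(), Counter()
for group, flagged in records:
    totals[group] += 1
    flags[group] += flagged

rates = {g: flags[g] / totals[g] for g in totals}
print("flag rates:", rates)
print("disparity ratio:", round(rates["black"] / rates["white"], 2))
# A ratio near 1 suggests even-handed flagging; here it is 3.75, inside the
# 3x-6x range the inquiry reported. The ratio alone doesn't prove intent,
# but it shows regulators exactly where to start asking questions.
```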
Fighting Bias: Policy and Practice: The good news is that awareness of AI bias is growing, and steps are being taken to address it. South Africa’s government has drafted a National Artificial Intelligence Policy Framework that explicitly lists “fairness and mitigating bias” as a key pillar. Policymakers recognize that trustworthy AI is critical in sectors like healthcare. This means encouraging developers to use diverse training data, test their algorithms for bias, and include human oversight for important decisions. For example, if a hospital in Gauteng deploys an AI system to help diagnose patients, the expectation is that this system should have been trained on data that reflects South Africa’s demographic and disease profile, not just, say, European clinical data. Likewise, there are pushes to require that any high-stakes AI (like those deciding on patient care or insurance coverage) undergo regular audits or assessments. Just as hospitals must follow protocols for patient safety, algorithms may soon have to meet safety and equity standards before they can be used widely. South Africa’s draft policy emphasizes the need for transparency and the ability for people to appeal algorithmic decisions, which could be a game-changer for patient rights if enacted.
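What might such an audit look like in code? One minimal version (a sketch under assumptions: the function names, the toy data, and the five-percentage-point threshold are all made up, not a regulatory standard) compares false-negative rates across groups, since missing sick patients in one group is precisely the harm to avoid.

```python
# Hypothetical pre-deployment equity check; the 0.05 gap is an assumption.
import numpy as np

def false_negative_rate(y_true, y_pred):
    # Share of genuinely sick patients the model missed.
    sick = y_true == 1
    return ((y_pred == 0) & sick).sum() / max(sick.sum(), 1)

def equity_audit(y_true, y_pred, groups, max_gap=0.05):
    # Compare miss rates across demographic groups; fail if the gap is too wide.
    fnr = {g: false_negative_rate(y_true[groups == g], y_pred[groups == g])
           for g in np.unique(groups)}
    gap = max(fnr.values()) - min(fnr.values())
    return fnr, gap, gap <= max_gap

# Made-up predictions for two groups of six patients each:
y_true = np.array([1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0])
groups = np.array(["a"] * 6 + ["b"] * 6)
print(equity_audit(y_true, y_pred, groups))  # group "b" is missed twice as often
```

A scheme or hospital could run a check like this on every model update, much as it already follows safety protocols for new equipment.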
Towards Equitable Healthcare AI: What’s the bottom line for the South African patient? Vigilance and advocacy. As patients, we should be able to trust that the tools aiding our doctors are designed with us in mind, all of us, in our full diversity. Healthcare AI should help close the gap in a country where access and quality of care have historically been uneven. That will only happen if we consciously work to remove bias from the equation. Developers need to be aware of the pitfalls; as one expert bluntly put it, “Algorithms can do terrible things or wonderful things. Which one they do is up to us.” And authorities must set guidelines so that “fairness” isn’t just a buzzword but a reality.
As South Africa navigates this digital transformation, there’s an opportunity to lead by example. We have hard-earned lessons about inequality, and we can apply them to technology by insisting on inclusive data, transparent AI, and accountability for how machines make decisions. Imagine an AI-powered health system that truly serves everyone: where a smart alert reaches all the right patients in time, where no one is left behind because of a skin tone or a postal code, and where both patients and providers trust that the “smart” tools are on their side. Achieving that will take work, from tweaking algorithms and collecting better data to enforcing policies, but it’s work that could save lives and create a fairer healthcare system. In the end, an algorithm doesn’t have human eyes; it will see only what we teach it to see. By shining a light on bias and demanding better, we ensure that our AI systems aren’t blind, but instead truly perceptive to the needs of all South Africans.
Ensuring AI improves healthcare for everyone, and not just a select few, is a challenge we must meet head-on. When an algorithm is “blind” to fairness, we need to open its eyes by providing better data and strong ethical guardrails. The lives and well-being of South African patients depend on getting this right. By confronting data bias now, we can build a future where smart healthcare truly means fair healthcare for all.
Cite this article in APA as: Letswalo, M. P. (2025, October 2). When the algorithm is blind: AI, data bias, and the South African patient. Information Matters. https://informationmatters.org/2025/09/when-the-algorithm-is-blind-ai-data-bias-and-the-south-african-patient/
Author
Matome Ponego Letswalo is an IT Operations and Governance Analyst, a Certified Cybersecurity Professional, and an AI Governance Research Fellow working at the intersection of technology, governance, and security, aligning operational systems with regulatory frameworks.