Project Saathi
Rethinking Health AI Through a Gender-Responsive Systems Lens
Artificial intelligence is increasingly positioned as a solution to healthcare access gaps. From symptom checkers to clinical decision support, AI promises efficiency, scale, and consistency in contexts where health systems are stretched thin. Yet a growing body of evidence suggests that many of these systems do not merely fall short of equity goals: they actively reproduce, and sometimes amplify, existing structural inequities. Gender remains one of the most persistent and under-examined fault lines in this transition.
For women in low- and middle-income countries (LMICs), the consequences of gender-misaligned health AI are not abstract. They appear as delayed care, misinterpreted symptoms, erosion of trust, and ultimately poorer health outcomes. Addressing these failures requires more than wider deployment or incremental model improvements. It demands rethinking health AI as a socio-technical system: one shaped by how data is produced, how language is processed, how access is structured, and how responsibility is governed.
Where Bias Really Begins
Bias in AI systems rarely originates at the model layer. It is embedded much earlier, in data collection practices, research priorities, and assumptions about who the “default user” is. Across sectors, this pattern is well established. Hiring algorithms trained on male-dominated work histories penalize women. Credit scoring models overlook informal income patterns that disproportionately affect them. In each case, the algorithm is not inherently biased; it is faithfully reproducing a skewed historical record.
Healthcare inherits this same imbalance, but with far higher stakes. Medical research has historically relied on male subjects as the norm, with women underrepresented in clinical trials and observational studies. As a result, symptom profiles, disease progression patterns, and risk indicators for women are less comprehensively characterized. When AI models are trained on such datasets, the bias is not neutralized; it is computationally reinforced.
Empirical evidence already shows that AI systems can be significantly less accurate for women in detecting conditions such as liver disease or cardiovascular events. Women’s pain is more likely to be classified as psychosomatic, and atypical symptom presentations are often treated as noise rather than signal. In data-rich environments, this manifests as underdiagnosis. In data-poor environments, it manifests as exclusion.
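One concrete way to surface such gaps is to report a model's error rates separately for each group rather than as a single aggregate score. The sketch below is a generic disaggregation pattern in Python, not an audit of any particular system; the column names and the 0.5 decision threshold are illustrative assumptions.

```python
# Sketch: disaggregating a classifier's error rates by sex.
# Column names ("sex", "label", "score") and the 0.5 decision threshold
# are illustrative assumptions, not drawn from any specific dataset.
import pandas as pd

def disaggregated_report(df: pd.DataFrame, threshold: float = 0.5) -> pd.DataFrame:
    """Return sensitivity and specificity separately for each sex."""
    rows = []
    for sex, group in df.groupby("sex"):
        pred = group["score"] >= threshold
        label = group["label"].astype(bool)
        tp = (pred & label).sum()
        fn = (~pred & label).sum()
        tn = (~pred & ~label).sum()
        fp = (pred & ~label).sum()
        rows.append({
            "sex": sex,
            "n": len(group),
            "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
            "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
        })
    return pd.DataFrame(rows)
```

A single headline accuracy figure can hide a large sensitivity gap between groups; a per-group table makes the gap, and the sample size behind it, visible.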
Access Is Not Neutral
Even a technically robust model fails if it cannot be accessed meaningfully by its intended users. Many health AI tools implicitly assume individual smartphone ownership, stable internet connectivity, English literacy, and familiarity with app-based interfaces. These assumptions break down quickly in LMIC contexts, and they break down first for women.
Women are disproportionately affected by the digital divide. They are less likely to own smartphones, more likely to share devices, and often operate with intermittent or low-bandwidth internet access. Time poverty due to unpaid care work further limits sustained engagement with complex digital tools. In such settings, text-heavy interfaces, app downloads, and constant connectivity are not neutral design choices; they are exclusionary ones.
These access constraints also shape the data that AI systems learn from. If only a subset of relatively privileged women can interact with digital health tools, the resulting datasets will continue to skew toward those already better served. Over time, this reinforces the very inequities AI is meant to address, creating a feedback loop between access, data, and model performance.
Language as Infrastructure
Healthcare interaction is fundamentally linguistic. Symptoms are narrated, not measured. For many women, especially in rural or informal settings, this narration happens in vernacular languages, dialects, and culturally specific metaphors. Standard NLP pipelines struggle in such environments, particularly when faced with high dialectal variation, code-switching, and non-standardized health vocabulary.
Accurate symptom interpretation therefore requires more than translation. It requires sensitivity to how discomfort, pain, and risk are expressed across regions and social contexts. Errors in speech recognition or intent detection are not evenly distributed; they disproportionately affect rural speakers, women, and low-literacy users. In healthcare settings, such failures can either suppress early warning signals or escalate benign symptoms into alarming outputs.
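To make this concrete, consider the step before any model reasoning: mapping vernacular or code-switched symptom phrases onto a canonical vocabulary. The sketch below uses a tiny invented lexicon purely for illustration; a real system would need region-specific lexicons built and validated with local speakers.

```python
# Sketch: normalizing vernacular / code-switched symptom phrases before
# intent detection. The phrases and mappings are invented examples only;
# they are not a validated health vocabulary.
import re

SYMPTOM_LEXICON = {
    "chest tightness": ["seene mein jakdan", "chhati bhari lagna", "chest heavy feel"],
    "dizziness":       ["chakkar aana", "sar ghoomna", "feeling chakkar"],
    "abdominal pain":  ["pet dard", "pet mein maror", "stomach mein pain"],
}

def normalize_symptom(utterance: str) -> list[str]:
    """Return canonical symptom labels mentioned in a transcribed utterance."""
    text = re.sub(r"\s+", " ", utterance.lower()).strip()
    return [
        canonical
        for canonical, variants in SYMPTOM_LEXICON.items()
        if any(variant in text for variant in variants)
    ]

print(normalize_symptom("Kal raat se seene mein jakdan aur feeling chakkar"))
# ['chest tightness', 'dizziness']
```

Even this trivial lookup exposes the design question: whose phrasing is in the lexicon, and whose is silently dropped.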
This is where user training becomes a technical necessity rather than a peripheral activity. Teaching users how to interact with AI (what kinds of inputs are effective, how to interpret outputs, and when to escalate to human care) directly affects system performance and safety. In healthcare contexts, poor UX or miscommunication can increase anxiety, discourage care-seeking, or generate false reassurance.
The Interface
Usability in health AI is often framed as a design concern. In reality, it is a determinant of health outcomes. Systems that overwhelm users with worst-case scenarios, opaque recommendations, or excessive medical jargon can induce fear and disengagement. Conversely, oversimplified systems risk minimizing serious symptoms. For women navigating social constraints around healthcare access, the stakes are even higher.
An AI system that fails to explain why a symptom may matter, or how urgent a response is, limits a woman’s ability to advocate for care within her family or community. Explainability, pacing, and tone are therefore not cosmetic features; they are mechanisms of agency.
Ethical Fault Lines in Deploying Health AI
These design challenges intersect with deeper ethical concerns. Shared device usage complicates privacy. Informed consent must account for varying levels of digital literacy and power dynamics within households. Even anonymized data collection can generate anxiety if users do not understand how their information is used.
There is also the risk of automation authority, where AI outputs are treated as definitive despite being probabilistic and non-diagnostic. Without clear red-line protocols and human-in-the-loop governance, health AI systems can inadvertently replace rather than support clinical judgment. Gender-responsive health AI therefore requires intentional safeguards: explicit consent mechanisms, conservative triage thresholds, continuous monitoring for bias, and oversight structures that include women themselves in decision-making.
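One way to make conservative triage thresholds and red-line protocols concrete is to encode them as explicit rules that always override the model's score, with ambiguity defaulting to human escalation. The sketch below is illustrative only: the symptom list and the 0.3 threshold are placeholders, not clinical guidance and not Saathi's actual policy.

```python
# Sketch: a conservative triage policy in which hard-coded "red-line" rules
# always override the model, and borderline scores escalate to a human.
# Symptom names and the 0.3 threshold are placeholders, not clinical guidance.
from dataclasses import dataclass

RED_LINE_SYMPTOMS = {"heavy bleeding", "chest pain", "loss of consciousness"}

@dataclass
class TriageDecision:
    action: str   # "seek_care_now", "talk_to_health_worker", or "self_monitor"
    reason: str

def triage(symptoms: set[str], model_risk_score: float) -> TriageDecision:
    # Red-line rule: certain symptoms bypass the model entirely.
    flagged = symptoms & RED_LINE_SYMPTOMS
    if flagged:
        return TriageDecision("seek_care_now",
                              "red-line symptom: " + ", ".join(sorted(flagged)))
    # Conservative threshold: escalate well below a 0.5 cut-off, accepting
    # more false alarms in exchange for fewer missed cases.
    if model_risk_score >= 0.3:
        return TriageDecision("talk_to_health_worker",
                              "risk score above conservative threshold")
    return TriageDecision("self_monitor",
                          "no red-line symptoms and low risk score")

print(triage({"mild headache", "chest pain"}, model_risk_score=0.1).action)
# seek_care_now -- the red-line rule fires even though the model score is low
```

The point of writing the policy as code is auditability: thresholds and override rules become explicit objects of governance rather than emergent model behavior.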
Introducing Project Saathi: Designing from Constraints, Not Capabilities
It is from within this landscape of technical, social, and ethical constraints that Project Saathi emerges.
Rather than starting with model capabilities, Saathi begins with user realities. It is a vernacular, voice-based AI system designed to support women’s health decision-making in low-resource settings, without replacing medical care. Its architecture prioritizes women-centric data creation through gender-disaggregated preprocessing and intentional inclusion of female-specific health markers to mitigate algorithmic blind spots.
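One plausible reading of gender-disaggregated preprocessing, sketched below, is to stratify train/test splits by sex and to audit how often female-specific markers are actually recorded before any training happens. The field names are assumptions for illustration, not Saathi's schema.

```python
# Sketch of gender-disaggregated preprocessing: stratify splits by sex and
# audit coverage of female-specific markers before training. Field names
# ("sex", "menstrual_history", "pregnancy_status") are illustrative only.
import pandas as pd
from sklearn.model_selection import train_test_split

FEMALE_SPECIFIC_MARKERS = ["menstrual_history", "pregnancy_status"]

def prepare_splits(df: pd.DataFrame, seed: int = 42):
    # Stratifying on sex keeps the train/test ratio stable for each group,
    # so evaluation on the smaller group is not left to chance.
    train, test = train_test_split(
        df, test_size=0.2, stratify=df["sex"], random_state=seed
    )

    # Audit: what fraction of female records actually populate these markers?
    coverage = df.loc[df["sex"] == "female", FEMALE_SPECIFIC_MARKERS].notna().mean()
    print("Female-specific marker coverage:")
    print(coverage)

    return train, test
```

Low coverage at this stage is a signal to fix data collection, not something to paper over during modeling.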
Saathi’s voice-first interaction is built on Indic language models to accommodate low literacy, dialect variation, and culturally grounded symptom narration. Its outputs are deliberately non-diagnostic, focusing instead on explainable guidance that helps users interpret symptoms and decide appropriate next steps. User training and community intermediaries are embedded into deployment, recognizing that trust, usability, and accuracy are co-produced through human engagement.
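Read architecturally, that description suggests a pipeline: speech recognition, symptom normalization, conservative triage, and a plain-language explanation, with escalation to a human at every stage. The sketch below wires together the earlier illustrative pieces (normalize_symptom and triage); the transcribe stub stands in for whatever Indic-language ASR model is used, and none of this reflects Saathi's actual implementation.

```python
# Sketch: a voice-first, non-diagnostic guidance loop reusing the earlier
# normalize_symptom and triage sketches. `transcribe` is a stand-in for an
# Indic-language ASR model; this is not Saathi's actual implementation.

def transcribe(audio_path: str) -> str:
    # Placeholder: a real deployment would call an ASR model here. A canned
    # transcript keeps the sketch runnable end to end.
    return "kal raat se seene mein jakdan aur feeling chakkar"

def guidance_session(audio_path: str, model_risk_score: float) -> str:
    utterance = transcribe(audio_path)
    symptoms = set(normalize_symptom(utterance))   # from the language sketch
    decision = triage(symptoms, model_risk_score)  # from the triage sketch

    # Non-diagnostic, explainable output: say what was heard, what the
    # suggested next step is, and why -- never a diagnosis.
    return (
        "I heard: " + (", ".join(sorted(symptoms)) or "no specific symptom") + ". "
        "Suggested next step: " + decision.action.replace("_", " ") + " "
        "(reason: " + decision.reason + ")."
    )

print(guidance_session("demo.wav", model_risk_score=0.6))
```

In practice the explanation would itself be delivered in the user's language and register; the structure (what was heard, what to do next, and why) is the part that carries over.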
While health is Saathi’s initial domain, its framework is intentionally extensible to areas such as financial literacy, where similar patterns of gender bias, access constraints, and language barriers persist.
As AI continues to enter public service domains, the question is no longer whether these systems scale, but who they scale for. Gender-responsive health AI requires moving beyond technical optimization toward contextual intelligence: systems that listen carefully, speak clearly, and know their limits.

