I've been working with a Year 12 student this week to think more rigorously about the relationship between lying and politics, using Hannah Arendt's essay 'Truth and Politics' as our guide. In it, Arendt distinguishes between rational truth (mathematical or philosophical propositions that remain constant) and factual truth (real-world events that are contingent, verifiable, and historical). The latter, Arendt warns, is most vulnerable to political manipulation.
As we discussed her work and its wider implications, one observation became central to our conversation: "the chances of factual truth surviving the onslaught of power are very slim indeed" (p. 231). This fragility extends beyond politics to contemporary debates over education, technology, and progress.
The World Bank recently published a report claiming that an AI-powered tutoring programme in Nigeria produced learning gains equivalent to 1.5 to 2 years of 'business-as-usual' schooling. The headlines were bold: GPT-4, embedded in Microsoft Copilot, had transformed learning outcomes for students. Yet a closer examination reveals precisely what Arendt meant about factual truth struggling against the onslaught of institutional power.
The research cited in the report centred on an after-school programme in Benin City, Nigeria, where volunteer students used a large language model to improve their English skills. The study describes the structure clearly:
The sessions began with a teacher-provided prompt, followed by free interaction between the student pairs and the AI tool. Teachers circulated the classroom, ensuring students' interactions remained relevant and on task. (p. 7)
Over six weeks, experimental group students attended two 90-minute sessions weekly, totalling 18 hours of additional learning. The control group, meanwhile, "did not receive any intervention but continued their regular learning in the classroom" (p. 9). When the two groups' performance was compared, the results seemed impressive:
students selected to participate in the programme score 0.31 standard deviation higher in the final assessment that was delivered at the end of the intervention… [w]e also show that the intervention yielded strong positive results on the regular English curricular exam of the third term. (p. 2)
John Hattie's influential meta-analysis "Visible Learning: The Sequel" suggests that an effect size of 0.4 represents "a [more] worthwhile hinge [point] than the usual zero (which allows nearly all to claim that their favourite strategy or influence can enhance achievement)" (p. 31). The 0.31 effect size, whilst not insignificant, falls below this threshold and is roughly equivalent to the impact of parental involvement in education. And if the headline outcomes are more modest than reported, the design of the experiment itself raises further concerns.
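To make the figures above concrete, a standardised effect size of the kind both studies report is simply the difference in group means divided by a pooled standard deviation (Cohen's d). The sketch below shows the arithmetic; the scores in it are invented for illustration and are not the Nigerian study's data.

```python
# Sketch of how a standardised effect size (Cohen's d) is computed:
# the difference in group means divided by the pooled standard deviation.
import statistics

def cohens_d(treatment, control):
    """Mean difference over the pooled sample standard deviation."""
    n1, n2 = len(treatment), len(control)
    m1, m2 = statistics.mean(treatment), statistics.mean(control)
    v1, v2 = statistics.variance(treatment), statistics.variance(control)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Hypothetical English exam scores (0-100), NOT the study's data
treatment = [62, 68, 71, 65, 70, 74, 66, 69]
control = [60, 64, 67, 61, 65, 69, 62, 63]

print(f"effect size d = {cohens_d(treatment, control):.2f}")
```

A d of 0.31 therefore means the average treated student scored about a third of a standard deviation above the average control student, which is why the comparison to Hattie's 0.4 hinge point matters.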
The researchers compared students who received 18 hours of additional, teacher-guided English instruction with students who received no additional support whatsoever. This isn't a comparison between AI-enhanced learning and traditional pedagogy; it's a comparison between something and nothing. Any intervention that provides students with extra time, attention, and structured practice would likely yield positive results. Would students working with textbooks, discussing literature, or engaging in peer conversation for the same 18 hours have shown similar gains? We cannot know, because the researchers chose not to test this. The embodied reality of teaching (the relationships, responsiveness, and contextual judgements that experienced educators know matter) disappears from the analysis entirely. This is not surprising: for all the impressive credentials and experience held by the team, none appear to have substantial classroom teaching experience. It is precisely what Arendt warned against: truth displaced by a convenient narrative, shaped by institutional power and an eager audience for low-cost solutions. In diminishing the role of the teacher, the study helps to reshape the perception of what actually happens in classrooms.
The Nigerian study isn't necessarily wrong; extra teaching and support did help students improve their English scores. But the broader narrative of AI as a scalable solution is misleading. It reduces the complexity of education to performance data, and in doing so loses something important about what it means to learn and teach, while underscoring how precarious factual truth has become.