Part 1: The reliability of children’s self-reported data
In this blog post, I compare observational data with self-reported data on handwashing rates among schoolchildren in the Philippines and discuss the implications of measurement bias for causal inference. In a companion blog post next week, I will discuss survey techniques that we piloted to try to reduce measurement bias in self-reported data.
Handwashing facility in Camarines Norte province, Philippines. ©Maan Aure
Many interventions try to change human behaviour, such as getting people to save more money, use latrines, or vote. But it can be hard to measure the impact of these interventions when the key behaviour is not visible to the evaluator. Researchers sometimes rely on self-reported data, but the same interventions that seek to influence behaviour may make those behaviours more salient in the respondent’s memory, or create social desirability pressures that prompt the respondent to give the surveyor a particular answer. Where the effects of salience or social desirability differ across treatment and control groups (for example, a community sanitation campaign in treatment villages that triggers feelings of shame about open defecation), results based on self-reported recall data may be biased. In this post, I describe how we collected both observational and self-reported data from schoolchildren as part of an evaluation of a handwashing promotion program, and how different our understanding of handwashing incidence and of the program’s effect would be if we had relied on survey data alone.
The “HiFive for Hygiene and Sanitation” program was a six-week behaviour-change campaign in primary schools in the Philippines to encourage students to wash their hands after using the toilet and before eating. We designed and conducted a randomized controlled trial to measure whether this program affected rates of student handwashing (here is the working paper, joint with Qayam Jetha, Clément Bisserbe, Daniel Waldroop, Ella Cecelia Naliponguit, Jon Michael Villasenor, Louise Maule, and Lilian Lehmann). Out of 196 schools in two provinces, we randomly assigned half to receive the program in the 2017–18 school year (“treatment”) and half to receive the program in the following year (“control”). We collected data in both groups three months after the program ended in treatment schools (and before it started in control schools).
To determine whether the program increased handwashing rates, our surveyors spent 1,700 hours observing whether children washed their hands after using the toilet. Classrooms in these schools have their own bathrooms, but the handwashing station is typically outside of the bathroom, in the classroom itself. Our surveyors chose a discreet spot in the back of the classroom from which to observe students as they exited the toilets. Teachers and students were told that surveyors were there to observe normal classroom activities; neither handwashing nor sanitation was mentioned to them. In total, surveyors observed 5,296 handwashing opportunities across treatment and control schools.
We recognized the opportunity to compare ‘true’ handwashing rates (what surveyors observed) with recall rates (what students told surveyors when asked about their handwashing behaviour). Later in the school day, after finishing their classroom observations, surveyors interviewed a random sample of students in treatment and control schools. In total we asked 4,295 students the question: “The last time you used a toilet at school, immediately after using the toilet, did you (a) wash your hands in water; (b) wash your hands with water and soap; or (c) not wash your hands?” Here is how student behaviour compared to student responses:
We were surprised by how infrequently students washed their hands after using the toilet: In control schools, students were observed washing their hands with at least water 14.8% of the time, and with soap only 2.5% of the time. The HiFive program modestly increased handwashing rates, +5.6 percentage points with at least water (p = 0.03) and +3.7pp with water and soap (p < 0.01), though handwashing rates remained very low in treatment schools.
Although few students washed their hands, almost all students reported washing their hands with at least water in both treatment and control schools, and more than three-quarters reported washing their hands with soap. So why do kids lie about it? At the end of the student survey, we asked students to list the reasons why they washed their hands, and over two-thirds of respondents in both treatment and control schools gave reasons related to preventing the spread of germs and avoiding sickness. Children recognize the importance of handwashing; we suspect that it is this knowledge of what they should be doing that drives students to over-report their handwashing.
So if kids know that handwashing is important, why don’t they do it? Access to water does not appear to be the problem: surveyors found that over 90% of classrooms had a regular water supply at their handwashing station, yet fewer than 15% of students washed their hands with at least water. Soap is somewhat harder to come by: only 41% of handwashing stations in control schools and 51% in treatment schools had soap. But even at handwashing stations with a regular supply of water and soap, handwashing rates with soap were below 13%. Instead, it may largely be an issue of forgetfulness: among the 300 students who admitted to not washing their hands after using the bathroom, a majority said it was because they ‘forgot’, rather than for reasons related to lack of access, lack of knowledge, or being in a hurry.
How does this measurement bias influence our understanding of whether the program affects handwashing rates? If students in treatment and control schools were equally likely to over-report handwashing, then our estimates of the program’s impact would be unbiased (though over-reporting would still shape our interpretation of program effects: a modest increase from a very low base rate may be considered more impressive than the same percentage-point increase from a relatively high base rate). The problem is that the program might affect not only behaviour but also whether that behaviour is top-of-mind during a survey, as well as respondents’ willingness to report their behaviour honestly to the surveyors. It can be hard or impossible to predict whether treatment is having these secondary effects or how big they might be.
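This logic can be made concrete with a bit of arithmetic. In the sketch below, the observed rates are taken from the results above, but the over-reporting shares are entirely hypothetical numbers chosen for illustration, not estimates from our data:

```python
# Illustrative only: observed rates come from the post; the over-reporting
# shares (the fraction of all students who falsely claim to have washed)
# are made up for the sake of the example.
true_control = 0.148   # observed handwashing rate (at least water), control
true_treat = 0.204     # observed rate, treatment (+5.6pp)
effect_true = true_treat - true_control

# Case 1: the same share of students over-reports in both arms, so the
# self-reported treatment effect equals the true one.
over_report = 0.70
reported_control = true_control + over_report
reported_treat = true_treat + over_report
effect_reported = reported_treat - reported_control  # == effect_true

# Case 2: the campaign makes handwashing more salient, so treatment
# students over-report more, and the self-reported effect is inflated.
over_report_treat = 0.75
effect_biased = (true_treat + over_report_treat) - reported_control

print(f"true effect:            {effect_true:+.3f}")
print(f"equal over-reporting:   {effect_reported:+.3f}")
print(f"differential reporting: {effect_biased:+.3f}")
```

In the second case, the self-reported estimate overstates the true effect by exactly the gap in over-reporting between the arms, and nothing in the survey data alone would reveal that this had happened.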
In the case of HiFive, we do find that the average treatment effect for handwashing with soap is pretty similar when measured by observation (+3.7pp, p < 0.01) versus student self-reports (+4.5pp, p = 0.05). But this correspondence between observation and self-reported data is not guaranteed, and probably somewhat coincidental in our evaluation. The treatment effect on handwashing with at least water, for instance, is quite different when measured by observation (+5.6pp, p = 0.03) versus student self-reports (+1.4pp, p = 0.17). Clearly, self-reported data is not an adequate substitute for observation in either descriptive surveys or impact evaluations.
So what happened to the program? Given the disappointingly small impact of the HiFive program, we are working with UNICEF and the Philippines Department of Education to design, implement, and evaluate a new type of behaviour-change program that uses ‘nudges’ to encourage handwashing in schools. Nudge-based interventions, such as arrows on the ground of the bathroom that point from the toilet to the sink, have successfully improved hand hygiene in universities and hospitals. We are hopeful that this will be a more effective approach to increasing handwashing rates since forgetfulness, rather than lack of access to water and soap or lack of knowledge about the importance of handwashing, was highlighted as the main barrier to handwashing in the student survey. Stay tuned for those results in 2020.
10 May 2019