
Measurement challenges in Financial Inclusion

Women in Bahraich district, Uttar Pradesh, India

This week, practitioners around the world are reflecting on successes and challenges in facilitating financial inclusion (FI). For Financial Inclusion Week, we want to take the opportunity to share and assess the range of data sources and tools our FI team in India has drawn on over the last year to measure key dimensions of financial health.

For other organizations looking to comprehensively measure financial health, we hope this overview provides a good starting point. As we are always looking to learn, please let us know in the comments about your experiences with financial inclusion measurement.

According to a 2016 conceptual framework published by Innovations for Poverty Action, an individual is financially healthy if they:

  1. Balance income and expenses
  2. Build and maintain reserves
  3. Manage existing debts and have access to potential resources
  4. Plan and prioritize
  5. Manage and recover from financial shocks
  6. Use an effective range of financial tools

We briefly discuss the potential and limitations of four key data sources for assessing these indicators: administrative data, sample survey data, Discrete Choice Experiments, and qualitative information. The first three are types of quantitative data; the last section briefly summarizes a few types of qualitative data. In some cases, a researcher might need only one of these sources to measure a given financial health indicator, but the sources can also complement one another. Together, they helped us describe the full customer experience, from enrollment to use, and cover both financially included and yet-to-be-included individuals.

Administrative data — insights from the supply side

Administrative data — which could consist of registered bank accounts, clients’ transactions, or other information — is usually collected by the government or financial service providers. This often seems like the most obvious data source.

Potential use: Starting from a person’s enrollment, the financial service provider collects data at every step. Each time a customer uses their account, the bank captures information such as how often the account is accessed on a phone (or computer), what it was accessed for (withdrawals, deposits, transfers, etc.), and the mode of usage. This data can be used to understand a customer’s spending habits and elements of their financial health, such as building and maintaining reserves or balancing income and expenses.
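
For illustration, here is a minimal sketch of how transaction-level administrative data might be summarized into two of the indicators above (balancing income and expenses, and building reserves). The column names and records are hypothetical, not drawn from any partner’s data.

```python
import pandas as pd

# Hypothetical transaction log of the kind a provider might export:
# one row per transaction, with signed amounts (deposits positive,
# withdrawals and transfers negative). Column names are illustrative.
txns = pd.DataFrame({
    "account_id": [101, 101, 101, 102, 102],
    "date": pd.to_datetime(["2024-01-05", "2024-01-20", "2024-02-03",
                            "2024-01-10", "2024-02-15"]),
    "amount": [2000.0, -1500.0, 1800.0, 500.0, -700.0],
})
txns["month"] = txns["date"].dt.to_period("M")

# Balancing income and expenses: monthly inflows vs. outflows per account.
monthly = (txns.groupby(["account_id", "month"])["amount"]
               .agg(inflow=lambda s: s[s > 0].sum(),
                    outflow=lambda s: -s[s < 0].sum())
               .reset_index())
monthly["net_flow"] = monthly["inflow"] - monthly["outflow"]

# Building and maintaining reserves: running balance as a rough proxy.
txns = txns.sort_values(["account_id", "date"])
txns["running_balance"] = txns.groupby("account_id")["amount"].cumsum()

print(monthly)
print(txns[["account_id", "date", "running_balance"]])
```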

Examples: Government sources in India where administrative data can be accessed publicly include the Champions of Change website, which has tracked key financial indicators monthly for 117 Aspirational Districts since March 2018. At a higher level, chapters of the Reserve Bank of India’s annual report (like this) provide key statistics on the achievements of financial inclusion schemes.

Private sources are also valuable. A number of research programs are built on some variation of a partnership with a financial service provider, supplemented with primary data collection. At IDinsight we have partnered with key financial providers and set up experiments to test ‘nudges’ aimed at a variety of outcomes. We have examined administrative data from these partners to assess the effectiveness of these levers. Such partnerships have enabled us to work within the financial ecosystem and to leverage existing administrative data.

Limitations: Administrative data does not include individuals who do not have access to financial services. Additionally, it does not cover most demand-side issues, such as individuals’ preferences or perceptions of financial inclusion.

When trying to improve services, working with private financial service providers to assess and analyze existing data is imperative. For privacy reasons, such partnerships are bound by strict data protection agreements. This can impose constraints, such as having to travel to a specific location to access the data and analyze it within a short timeframe. It may also require additional time to de-identify the data and ensure people’s identities are protected.

Self-reported data from sample surveys — demand-side insights from users

Sample survey data can either be collected by an evaluating organization or accessed through existing datasets. Well-designed surveys can be representative of specific populations of interest. Surveys can target individuals or potential clients who need services, or institutions and facilities, thereby covering both demand- and supply-side issues.

Potential use: Survey data can be useful for obtaining self-reports of the barriers or enablers that individuals perceive with regard to financial services. It can help explain both demand- and supply-side issues in financial inclusion and reveal where in an individual’s journey they need the most support. Survey data also allows you to tailor questions and disaggregate responses based on respondents’ demographic profiles in a way that administrative data (usually presented in aggregate form) may not. Finally, household survey data has the advantage of covering those not (currently) included in administrative systems.

Examples: At IDinsight we have used household survey data to understand issues related to awareness (“do people know about pensions?”), access (“is a bank branch available close by?”), intent (“do people want to use bank accounts?”), and action (“why have people not been able to save money in banks?”). Surveys have also been useful for understanding the customer experience: for example, how long it took to receive insurance claim payouts and the quality of respondents’ most recent interaction with their banking agent.

We have also been able to collect data on specific populations of interest: for example, we learned from a recent survey that while awareness of financial products offered by the government (such as life insurance or pension) is low across the population, awareness is systematically lower among women and individuals from poorer socioeconomic backgrounds.
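
As a sketch of what this kind of disaggregation can look like in practice, the snippet below computes awareness rates by gender and by a wealth proxy. The variable names and records are illustrative, not our actual survey data.

```python
import pandas as pd

# Illustrative household-survey extract: one row per respondent.
# Variable names and values are made up for this sketch; a real
# analysis would also apply the survey's sampling weights.
survey = pd.DataFrame({
    "respondent_id": range(1, 9),
    "gender": ["F", "M", "F", "M", "F", "M", "F", "M"],
    "wealth_quintile": [1, 1, 2, 3, 4, 5, 2, 4],
    "aware_of_pension": [0, 1, 0, 1, 1, 1, 0, 1],  # 1 = has heard of the scheme
})

# Share of respondents aware of the pension scheme, by gender ...
by_gender = survey.groupby("gender")["aware_of_pension"].mean()

# ... and by wealth quintile, to check whether awareness is
# systematically lower among poorer respondents.
by_wealth = survey.groupby("wealth_quintile")["aware_of_pension"].mean()

print(by_gender)
print(by_wealth)
```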

There have been some independent efforts to collect data on financial inclusion indicators. Dvara Open Online Repository (DOOR) is an effort to catalog and curate publicly available household finance data in India. The Global Findex Database publishes a comprehensive survey-based dataset on savings, borrowing, payments, and risks every three years. The Financial Access Survey (FAS) reports supply-side data on the use of financial products and is collected from national central banks or financial regulators.

Limitations: Self-reported data is only as good as what the respondent remembers. For example, people may say they don’t have a bank account if they signed up long ago but no longer use it actively. Additionally, in many households, the head of the household takes care of financial decisions, and other household members may not have enough information to accurately respond to questions.

Respondents must also openly report information in order for data to be accurate. Finances are a sensitive subject; people may not always want to give true responses when we ask them how much they save. Social desirability bias also plays a role here, especially when social norms make it difficult to guarantee privacy in interviews. People may misrepresent their true behaviour to conform to what other people in the room believe and value, such as saying they are enrolling in a pension scheme even when they aren’t. These issues are even more applicable when surveying women. In many Indian households, the norm is for women to not be involved in finances. In such cases, if a woman is saving money, she may be doing so in secret and may not want to divulge this information to a surveyor.

We do our best to mitigate these issues, such as asking for privacy in interviews and emphasizing that anything respondents say is confidential. We have also tried phone surveys, which can be a faster and cheaper alternative to traditional in-person surveys. However, phone surveys may exclude individuals who don’t have a phone, are prone to disconnection due to poor network coverage, and cannot hold the respondent’s attention for very long. This is still an active area of research for us; we would love to hear about your experiences with phone surveys.

Discrete Choice Experiments (DCEs) — measuring trade-offs

One way to measure people’s preferences for a specific product is through Discrete Choice Experiments, or DCEs. A DCE presents a respondent with a set of options and asks them to choose one. This allows a researcher to quantify the trade-offs people are willing to make between different attributes of the products they are choosing among.

Potential Use: DCEs allow you to assess statements such as “workers are four times more likely to say they will save in [x type of bank account] over [y type of bank account].” DCEs are grounded in random utility theory, formalized by Daniel McFadden in 1974, and are widely used in fields like marketing and health. In development, they have been used, for example, to model the choice between formal and informal jobs in Bangladesh and to understand which features drive the uptake of an employer-offered savings technology in Malawi.
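
As a rough sketch of the machinery (not the exact specification of any particular study): random utility theory assumes each respondent assigns a latent utility to every alternative in a choice set and picks the alternative with the highest utility, which under standard assumptions yields McFadden’s conditional logit choice probabilities.

```latex
% Utility respondent i derives from alternative j, with observed attributes x_{ij}
U_{ij} = x_{ij}'\beta + \varepsilon_{ij}

% With i.i.d. type-I extreme value errors, the probability of choosing j from choice set C_i is
P(i \text{ chooses } j) = \frac{\exp(x_{ij}'\beta)}{\sum_{k \in C_i} \exp(x_{ik}'\beta)}
```

Exponentiated coefficients are then read as odds ratios, which is what underlies statements like “four times more likely.”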

Examples: Our financial inclusion team in India piloted DCEs to quantify the trade-offs people living in low-income districts make over different attributes of posters related to savings. In one instance, we conducted a DCE with 162 participants in Uttar Pradesh, asking them to rank their responses to four different themes illustrated through four posters’ pictures and messages. For example, we showed a respondent one poster with a positive story about a person’s family benefitting from life insurance and another with a negative story about the family of a person who did not benefit from life insurance, and saw which one they preferred. A preliminary analysis of this DCE suggested that participants were twice as likely to choose a poster based on a specific picture.
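
To make the estimation step concrete, below is a minimal, self-contained sketch of how pairwise poster choices like these could be analysed with a conditional logit. The data are simulated and the attributes are hypothetical; this is not our pilot analysis.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulated pairwise DCE data: each of n tasks shows two posters that
# differ in two binary attributes, e.g. "picture shows a family" and
# "positive framing". Attributes and coefficients are made up.
n = 500
x_a = rng.integers(0, 2, size=(n, 2)).astype(float)   # attributes of poster A
x_b = rng.integers(0, 2, size=(n, 2)).astype(float)   # attributes of poster B

# Simulate choices from a known coefficient vector so we can check
# that the estimator recovers it.
true_beta = np.array([0.7, 0.3])
p_a = 1.0 / (1.0 + np.exp(-(x_a - x_b) @ true_beta))   # P(choose A)
chose_a = (rng.random(n) < p_a).astype(float)

def neg_log_lik(beta):
    """Conditional-logit negative log-likelihood for two-alternative choice sets."""
    v = (x_a - x_b) @ beta                  # utility difference A - B
    p = 1.0 / (1.0 + np.exp(-v))            # P(choose A) implied by beta
    eps = 1e-12                             # guard against log(0)
    return -np.sum(chose_a * np.log(p + eps) + (1 - chose_a) * np.log(1 - p + eps))

fit = minimize(neg_log_lik, x0=np.zeros(2), method="BFGS")
print("estimated beta:", fit.x)
print("odds ratios:   ", np.exp(fit.x))     # basis for "x times more likely" statements
```

With only two alternatives per task, the conditional logit reduces to a logit on attribute differences, which is what the likelihood above implements.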

Limitations: DCEs capture stated preferences; we don’t observe any real behaviours, only choices over hypothetical scenarios. They don’t tell us why respondents prefer certain attributes over others, or why they prefer one version of an attribute over another. This was the case in our pilot: while we were able to quantify preferences for certain pictures, we could not determine whether this was because of the savings “theme” of the picture or because people liked attributes such as the colours or the people’s clothing. In instances where we really care about the “why,” rigorous qualitative work might be more appropriate.

Qualitative research tools — getting to the “why”

A variety of qualitative research tools can be used to better understand the barriers to financial health, what’s working, what’s not, and why; and generate possible areas of focus for quantitative research, among other things. They may enable the researcher to uncover new dimensions not covered in past literature on the subject, which may be relevant for a study’s particular context.

Potential Use: While the uses of qualitative data go beyond this, there are four specific ways in which qualitative work has helped make our financial inclusion measurement more meaningful: understanding the mechanisms behind, or the “why” of, the decisions respondents make; generating new questions on which to collect large-n quantitative data; generating response options for survey questions; and identifying relevant subgroups.

Examples: In our case, we conducted our qualitative research mostly through in-depth interviews and focus group discussions, though there are other qualitative methods we have not yet used. This is what we found.

  • Mechanisms: A quantitative stated choice model told us which picture or message respondents preferred on a poster (see the DCE section). But to understand why they preferred that picture, we relied on qualitative work. Similarly, an RCT provided insights on which message framing caused the largest increase in the desired outcome, but in-depth interviews with the recipients helped us understand why that framing elicited the observed response.
  • New questions: Semi-structured conversations with respondents have helped us understand their major challenges in using the formal financial system. Once these challenges emerged in our small-n qualitative work, we decided to collect data through large-n quantitative surveys to measure the scale of the problem. For example, in conversations with some of our respondents, we learned that despite interacting extensively with banking agents, they perceived the agents negatively: they felt ill-treated by the agent and believed they had been defrauded. Since banking agents provide the crucial last-mile link of the banking system, negative attitudes towards them may reduce engagement with and trust in the formal financial sector. To check whether this is a common problem, we decided to ask respondents about their experience with banking agents in our recent large-n quantitative survey.
  • Comprehensive response options: Similarly, qualitative research suggests response options that may be particularly relevant for the study context but which we (and other researchers) may not have encountered elsewhere. For example, how do people send money home? There is an obvious list of common remittance options, but if our respondents are adopting innovative methods, understanding those methods through qualitative work allows us to frame a more comprehensive set of response options.
  • Understanding subgroups: Qualitative research often sheds light on groups of individuals who may be particularly excluded from the formal financial sector, or who may respond differently to an intervention. For example, existing research tells us that women are excluded from the formal financial sector to a much greater extent than men. But deep conversations with financial intermediaries in our particular study context are invaluable for understanding whether, say, married women engage with the banking system differently than unmarried women, so that we can focus our sampling strategy and survey questions accordingly.

Limitations: Because of its in-depth nature, qualitative work is generally restricted to small sample sizes and may not be representative of a population of interest. In many studies, it therefore acts as a supplement or precursor to one or more of the quantitative methods detailed above.

Conclusion

Mixed methods are essential to fully understand and improve customers’ financial health and inclusion. Each data source provides insights into particular aspects of the customer journey. We have touched on the limitations of each of these sources; whichever is used, the data must be of high quality to support accurate and useful conclusions.

Measuring financial inclusion is complex, and there is much within and beyond the scope of our research that we could not summarize in this post. Please stay tuned for more of what we’ve learned on this topic.