
Monitor and Evaluate

This section provides practical guidance on monitoring and evaluation (M&E) best practices in water+, along with suggested indicators, frameworks, and tools to support both initial framework design and ongoing M&E.


Creating a monitoring and evaluation plan is a vital stage in the program planning process. The immediate aim of program monitoring and evaluation should be to improve program quality, and it should be an iterative, regular process. Monitoring can help with efficient use of resources, identifying problems and enabling solutions, accountability to different stakeholders, documentation for evaluations and impact-assessment studies, knowledge and learning, and empowering people to take action. Note that monitoring and evaluation will be immensely more useful if baseline data is collected at the beginning of the program (Smith, 2001).

This guide gives you background on other M&E plans in the context of water+ and provides indicators for your work, so that you can accurately track your program's successes, make necessary changes along the way, and enable CARE USA to assess its global impact in the field of water+.

Best Practices in M&E

When programs are evaluated by different individuals and organizations all around the world, it is important that evaluations remain consistent so that it is possible to track trends and compile information about common indicators. A 2012 water+ impact report identified several gaps in M&E, in particular a lack of common indicators, missing baselines, and inconsistent units of measurement. Included here are some recommendations to improve the quality of data received and enable global aggregation and analysis.
  1. Define and standardize indicators. Some aspects of programs can be nebulous and hard to measure (e.g., livelihood, disease burden, proper hand washing). It is helpful to have standardized definitions so that when data is compiled across multiple evaluations, it is understood that everyone has measured the same thing (e.g., proper hand washing is defined by frequency, timing, and use of soap; if all three are not present, proper hand washing has not occurred). Where possible, standard definitions are supplied by CARE USA so that country offices measure indicators in a uniform way (see the Indicators section).
  2. Have baseline tests, comparison groups, and/or target goals. Many of these evaluations reported the number of people (households, families, villages, etc.) reached, but several failed to report how that compared to a baseline evaluation, a control group, or a previously set target; without such a comparison, the number tells us very little.
  3. Report frequencies as both numbers and percentages. When conveying frequencies in tables and text, the data is much more meaningful if a percentage is also reported: a water program that reports reaching 482 people conveys far less than one that reports reaching 482 (96%) of the people in the village.
  4. Report significance. Evaluations often showed changes between baseline and endline measurements or between an intervention and control group; however, without a reported p-value (a measure of significance obtained through statistical analysis), it is impossible to tell whether a statistically significant change has actually taken place.
  5. Total the results. When reporting data for the indicators measured, it is helpful to report the sum of the data in a total column as well as data for the individual communities. In a large report, this allows the total of the data to be easily compiled with other similar programs.
  6. Recognize proxy indicators for what they are. It is common in evaluations to measure something as a proxy for behavior (e.g., the presence of soap or of an educational program) and then report it as behavior change. It is important to remember that a proxy is only a proxy and our assumptions are limited. If the presence of soap is measured next to latrines, it cannot definitively be reported that X number of people are washing their hands.
  7. Agree upon universal units of measurement. For example, out of the 23 evaluations that reported increased water access, 3 (13%) reported in terms of number of people, 7 (30%) reported number of households, and 11 (nearly 50%) had no unit of measurement reported at all. This makes it impossible to compile the information into one statistical average. Another example from these reports is when measuring decreased burden in collection of water: some reports measured in terms of change in time it took to collect water and others reported change in distance traveled. This prevents us from reporting a total decrease in burden due to these programs from around the world.
  8. Create a representative sample. Use probability sampling to ensure that every individual in your survey population has an equal chance of being chosen. See below for additional guidance on sampling.
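Several of the practices above are quantitative and can be sketched in a few lines of code. The sketch below is illustrative only, with invented figures: it draws a simple random sample (practice 8), reports a count alongside its percentage (practice 3), and uses a two-proportion z-test as one common way to obtain a p-value (practice 4). The helper `two_proportion_p` and all numbers are hypothetical, not part of any CARE USA toolkit.

```python
import math
import random

# Practice 8: probability sampling -- every household in the frame has an
# equal chance of selection, giving a representative sample.
households = list(range(1, 501))        # hypothetical sampling frame of 500 households
random.seed(42)                         # fixed seed so the draw is reproducible
sample = random.sample(households, 50)  # simple random sample of 50

# Practice 3: report frequencies as both a count and a percentage.
reached, population = 482, 500
print(f"Reached {reached} ({reached / population:.0%}) of {population} people")

# Practice 4: report significance. A two-proportion z-test comparing baseline
# and endline handwashing rates, using the normal approximation.
def two_proportion_p(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

p = two_proportion_p(120, 400, 190, 400)  # baseline 30%, endline 47.5%
print(f"p = {p:.4f}")                     # p < 0.05 -> statistically significant change
```

In practice a statistics package would be used for the test; the point is simply that a count, a denominator, and a p-value together tell far more than a count alone.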

There are often multiple organizations partnering within a program, each demanding that its own set of indicators be measured in its own specified way. Using consistent and efficient indicator measures, however, enables cross-country benchmarking and global aggregation.
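As a minimal illustration of why agreed units matter for aggregation (see practice 7 above), the sketch below sums only records that share the standard unit and flags the rest rather than guessing a conversion. All record values and country-office names are invented.

```python
from collections import defaultdict

# Invented example records: (country office, indicator, unit, value).
records = [
    ("CO-A", "water_access", "people", 482),
    ("CO-B", "water_access", "people", 1150),
    ("CO-C", "water_access", "households", 310),  # non-standard unit
]

totals = defaultdict(int)
skipped = []
for office, indicator, unit, value in records:
    if unit == "people":                  # aggregate only the agreed unit
        totals[indicator] += value
    else:
        skipped.append((office, unit))    # flag for follow-up; never guess a conversion

print(totals["water_access"])  # total across the comparable reports only
print(skipped)                 # records that cannot be merged without a conversion
```

A record reported in households cannot be added to records reported in people, which is exactly the gap the 2012 impact report identified.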

Monitoring and Evaluation and Gender

Monitoring and evaluation experts around the world have remarked on the importance of focusing on gender in the evaluation of programs, whether or not the program itself is gender-focused. Siska Ivens (2008) states that, especially within water programming, the literature needs evaluations of the impact on women's empowerment and decision-making, and that impact evaluations of empowering participatory approaches should be conducted. Policymakers as well as water experts have commented that there is not enough evidence-based literature on the impacts of gendered water programming and women's empowerment to address gender at a policy level (Zwarteveen, 2006).

It is important that we recognize that, beyond focusing on gender in our monitoring and evaluation frameworks, we need to be aware of gender as an issue within the very process of evaluation itself. Research teams need to consider the hierarchical power relations between men and women that put women at a disadvantage throughout the research process (recognize the gendered nature of the research process itself), integrate different kinds of diversity that also affect gender relations (age, class, ethnicity, etc.) into all steps of the research process, and analyze the relationships among the research parties themselves (Beetham & Demetriades, 2007).

The matrix in the Indicators section has several gender-related indicators that can be incorporated into an M&E framework.



Indicators

The following indicators are organized according to the domains and program areas of the global water+ theory of change. Country programs are strongly advised to select applicable indicators for inclusion in water program monitoring and evaluation frameworks.

The following documents are questionnaires tailored to each of the global water+ theory of change domains. Each questionnaire contains questions related to every indicator in its domain and is organized by the domain's subgroups, making it easier to track successes and challenges during monitoring and evaluation.

Water+ Theory of Change Domain 1 Indicator Questionnaire:

Water+ Theory of Change Domain 3 Indicator Questionnaire:

Tools and Composite Indicators

  1. Gender Analysis Survey (GAS): This tool explores the links between gender equity and WASH programs.
    1. GAS User Guide Draft:
    2. GAS Microsoft Excel Data Entry Example:
  2. Governance into Functionality Tool (GiFT): This survey provides a snapshot of the governance and functionality status of water and sanitation schemes. It explores current scheme preparedness for future sustainability.
    1. GiFT User Guide Draft:
    2. GiFT Microsoft Excel Data Entry Example:
    3. GiFT User Guide using the mWater platform on an Android-based tablet or mobile phone: coming soon

  3. Women’s Experiences Tool (GWI) / Impact of WASH on Women Tool (IWWT) (Water Team): Asks a set of questions that reveal issues surrounding the differentiated experiences of women. The tool provides a quantitative, scoring-based means of documenting women's experiences.
    1. IWWT User Guide Draft:
    2. IWWT Microsoft Excel Data Entry Example Draft:

Additional Monitoring and Evaluation Tools

These are quantitative and qualitative tools for measuring progress in WASH, IWRM, governance, learning, and partnership. Click and Open or Save the .zip file to access these documents.

Monitoring and Evaluation Tools
These tools are summaries that give an overview of progress at project, community, local, national, and/or regional level to document "how well" the program has done towards achieving goals and objectives over set time periods. Click and Open or Save the .zip file to access these documents.

-Information and Summaries on the Major Measuring Tools
-Overall Assessment of Progress
-Description of Community Feedback and Monitoring Process
-Description of the Personal and Community Stories Process
-Finance, Functionality and Governance Snapshot
-Government IWRM Support Assessment
-Basin IWRM Roadmap
-Learning Survey Tool
-Partnership Assessment Tool
-Numbers Table
-Baseline and Evaluations

This is an accompanying file to the Monitoring and Evaluation tools file. It contains a Table of Contents, an Acronym page, and a formatted Title Page.

Partnership Assessment Tool

Vibrant Water Assessment Tool
The vibrant water sector self-assessment is a tool for identifying strengths and weaknesses within a given country's water+ work. View the VWA Tool here

Information Communication Technology (ICT)

ICT for data collection is a growing and viable technique in the development industry, particularly in the WASH sector, where products like Akvo FLOW and WellDone have emerged. Costs vary widely, from free, open-source platforms to several thousand dollars a month for complex systems, and this does not include the potentially significant initial cost of hardware such as cell phones or computers.

ICT is currently expanding from an M&E tool into a complementary component of service delivery. It can improve communication, monitoring systems, and feedback from remote areas that face significant cost and access barriers, although electricity and cell phone connectivity can also be issues in remote locations. There is evidence that ICT can play a practical role in improving the accuracy and transparency of M&E practices, but inconsistent evidence on how it aids program goals. For example, an SMS program in India relaying weather patterns and market prices to smallholder farmers found no statistical difference in farming practices, crop selection, or selling price among participants, while a similar SMS program relaying information through community workers in Uganda noted a shift toward higher-risk, higher-value crops, with farmers selling less but profiting more.

The platforms presented encompass a wide range of activities, from data-collection surveys to water-pump monitoring tools to crowdsourcing that fosters collective consumer purchasing power. The ICT tools and platforms listed range in both utility and focus, as many water institutions have adapted various survey tools and programs for water+ activities. The potential to obtain timely feedback and improve information flow through better use of ICT is significant and worthy of additional consideration.

While a few ICT platforms specifically target WASH, the majority focus on wide applicability and are adapted for WASH programs by various institutions. Many ICT companies are emerging from developing-country markets with local knowledge of best practices. The potential of mobile banking, remote sensing, automated data collection, and other mobile applications to engage vulnerable and marginalized populations will undoubtedly change the manner of, and opportunities for, engagement in the development field.