We have evidence on the best ways to measure learning, in alignment with SDG 4, which can help ministries of education know where they are making progress and where they need to adjust policies and practices. A new tool, the Learning Data Toolkit, gives countries access to all these resources and allows them to provide feedback on measuring what matters.
This blog was co-authored by Silvia Montoya (UIS), Hetal Thukral and Melissa Chiappetta (USAID), Diego Luna-Bazaldua and Joao Pedro De Azevedo (World Bank), Manuel Cardoso (UNICEF), Rona Bronwin (FCDO), Ramya Vivekanandan (GPE) and Clio Dintilhac (BMGF).
We are in the midst of a global learning crisis: Reports on learning poverty suggest that 7 in 10 children in low- and middle-income countries (LMICs) cannot read with comprehension by age 10. However, in most of the developing world, we can only estimate how many children can read, because we lack reliable data to measure learning outcomes and progress over time.
In 86% of LMICs, we do not know how much learning has been lost due to COVID-19 school closures. If we are to measure progress across and within countries over the years, we need data that measures what matters and that is reliable and comparable over time.
Thankfully, we are closer than ever before to this objective. We have political momentum: Education partners came together recently at the United Nations' Transforming Education Summit to make a commitment to action on foundational learning, including committing to better data on learning. Most importantly, there are now methods available to countries to strengthen their assessments and anchor their measurement in expert advice.
Why has the collection of comparable, reliable learning data been so difficult?
Despite the growth of national and international assessments, collecting comparable learning data over time and between countries is no simple feat. This is because most assessments:
- Don't measure what matters: Most assessments do not measure the specific sub-skills that lead to reading with meaning, and they often prioritize content knowledge instead. Measuring sub-skills is important because it allows education actors to identify and target the specific gaps among learners who are unable to read with comprehension.
- Are not comparable over time: Many assessments are not designed to be psychometrically comparable over time. Comparability is also lost when the subject or grade assessed changes between rounds.
- Are not comparable between countries: Different countries' assessments test different skills at different grades, and difficulty levels differ, which makes it difficult to learn from or benchmark against other countries.

Furthermore:
- International assessments may enable comparability, but they have low coverage in low-income and lower-middle-income countries, particularly for the early grades of primary school. Moreover, primary-grade international assessments take place in cycles of five to six years, too infrequently to provide meaningful information and inform decisions. This contrasts with lower-secondary assessments, which happen every three years.
- Learning assessments within donor projects are often limited to the beneficiaries and timeline of the projects, limiting the sustainability of these efforts.

This explains why data gaps remain so large. For example, 24 countries in sub-Saharan Africa did not report data in the 2022 learning poverty report. The UIS map below highlights these data gaps.
The solutions: A common framework and methods that build on countries’ existing assessments
Under the leadership of the UNESCO Institute for Statistics (UIS) and with support from partner organizations, solutions have been developed in the last few years to enable countries to improve their learning data by building on their existing assessments:
- There is now agreement on the minimum proficiency levels (MPLs) that allow reporting on SDG indicator 4.1.1.
- There is now an agreed common framework for measuring key education outcomes such as reading and mathematics. Called the Global Proficiency Framework, it offers expert advice on measuring what matters, i.e., the skills that students should acquire on the pathway to mastery of reading and mathematics.
- Rigorous methods have been developed to strengthen existing learning assessments (national, international and household-based) and link them to this common framework.

These new methods include:
- A methodology known as policy linking, which is based on expert judgment and, under certain conditions, allows countries to use their national assessment results for global reporting.
- The Assessments for Minimum Proficiency Levels (AMPL), test booklets targeted at measuring the attainment of a single proficiency level, the MPL in reading and mathematics. They are made available for integration into national assessments to strengthen their reliability and allow comparisons over time and across countries, and are currently being piloted in five countries.
- International assessments that allow for better international comparison of learning outcomes, such as Africa's PASEC and Latin America's LLECE, which have now been rigorously linked to an international learning standard, allowing participating countries to report results on a comparable metric.
- The UNICEF MICS household survey, which is now aligning its learning outcomes module with the new Global Proficiency Framework and adding a module to report on learning poverty.

The Learning Data Compact, launched last year, brought together our institutions to take stock of learning data gaps and provided a framework for how countries, development partners and donors can work together to fill them. UNICEF, the World Bank, UNESCO and partners committed to supporting all countries to have at least one quality measure of learning, covering two grades and two subjects, by 2025, and two such measures by 2030.

Achieving this goal in time will require accelerating the use of these solutions to strengthen existing assessments and using them to measure progress reliably.