Measuring learning outcomes in India

AMIT KAUSHIK


RECENT years have seen much debate about the crisis of learning. Study after study has shown that after spending billions of dollars and crores of rupees, we are still grappling with ever lower levels of learning in schools. Access has certainly gone up, and more children than ever before are in school, but although there are many theories, it is not quite clear why the multiple interventions do not seem to positively affect learning. In a recent report, the World Bank points out that even after several years in school, millions of children cannot read, write or do basic math, referring to this learning crisis as one that is widening social and economic gaps instead of helping to narrow them.1

A fundamental discussion, however, centres on the methods by which these learning levels are measured, and on how reliable and valid data can be obtained. Clearly, accurate data is important to keep policy makers, school heads, teachers, and parents better informed, helping them make better choices about the way in which their children are educated.

Interestingly enough, policy documents in India prior to the 2000s almost never used the phrase ‘learning outcomes’. The hoary Kothari Commission report, the National Policy on Education, 1968, and its successor in 1986 (including the Programme of Action, 1992), all dealt with various critical aspects of education, without referring to the outcomes of school education, except in passing or in the very broad sense of helping society to create democratic and aware citizens. Yet, almost every other relevant aspect of education – its objectives, structure and standards, teachers and teaching methodologies, enrolment, equity, curriculum, administration and supervision, even the physical location of schools – has been discussed in great detail.2

 

Even examination reform finds a place in these discussions – for instance, the National Policy on Education (1968) states in section 10 that ‘a major goal of examination reforms should be to improve the reliability and validity of examinations and to make evaluation a continuous process aimed at helping the student to improve his level of achievement rather than at "certifying" the quality of his performance at a given moment of time.’3 There is thus an indirect reference to learning outcomes here in the form of a ‘level of achievement’ but without further elucidation.

Similarly, the National Policy on Education (1986) refers to the need to lay down ‘minimum levels of learning’ – which in itself became the subject of controversy later when schools started to teach only to those levels – but does not go further to discuss their significance.4 An entire section of this policy also deals with evaluation and examination reform, but restricts itself to viewing such devices as ‘a measure of student development’ and an ‘instrument for improving teaching and learning’.5

Given the circumstances at the time, it is understandable that the focus was more on increasing access and ensuring equity. A poor country, recently freed from colonial rule, was seeking to build a more egalitarian society, and the emphasis, therefore, was on ensuring that children and young people received adequate and equitable opportunities for education within limited resources. Even in the mid-1980s and early 1990s, resources remained a significant constraint in a closed economy whose population had increased substantially, making it unsurprising that the objectives of policy remained consistently focused on access and equity.

Underlying this focus was an implicit assumption that the provision of adequate inputs would automatically lead to desired outcomes, even though these were not identified or explicitly stated. From the point of view of the bureaucracy this was certainly the preferred approach, since inputs are easily measured and accounted for, making it relatively simple to maintain the necessary official records. Clearly, the number of classrooms or toilets built, or teachers appointed, lends itself to easy enumeration, as opposed to an intangible like learning, which can only be measured through proxy indicators of latent abilities.

 

This approach was carried forward to the Right of Children to Free and Compulsory Education Act, 2009 (hereafter the RTE Act) which, at least initially, made no reference to learning outcomes.6 It was only as recently as 2017 that the subsidiary rules were amended to provide for the preparation of class-wise, subject-wise learning outcomes,7 in belated acknowledgement of the need to consider the end result of school processes.

In fairness though, and as someone who was peripherally a part of that process, I am aware that one of the main reasons why there was no particular emphasis on learning outcomes in the RTE Act was the introduction of the concept of Continuous and Comprehensive Evaluation (CCE) to accompany the no-detention provision. Even though it did not discuss learning outcomes specifically, the CABE subcommittee which drafted the ‘essential provisions’ of the draft legislation in 2005 was of the view that if CCE were done right, there would, at the end of each school year, be a detailed portfolio for each child, providing an exhaustive assessment of her achievements and abilities. It is another matter that this form of evaluation never materialized in practice, and that even if successful, it would have provided information about individual students but not about the system.

 

The shift in public discourse towards emphasizing learning outcomes can reasonably be traced to the inception of the Annual Status of Education Reports (ASER) brought out by the NGO Pratham from 2005-06 onwards. Although the initial reports dealt primarily with enrolment, subsequent ones began to look at the learning levels of enrolled children. As noted elsewhere by this author, despite the many methodological questions raised by some, this led to a shift in attitude, where inputs were assumed but outcomes became the focus of discussion.8 Policy documents began to reflect this change, with the 12th five year plan noting that ‘there is a need for a clear shift in strategy from a focus on inputs and increasing access and enrolment to teaching-learning process and its improvement in order to ensure adequate appropriate learning outcomes.’9

 

The spotlight on outcomes brings with it the need for accurate and reliable measurement, and the development of related capacity. Sporadic attempts at determining student achievement were made by various researchers from 1970 onwards, but these rarely went beyond limited parameters. The introduction of the District Primary Education Programme (DPEP) witnessed the beginning of baseline, midline and endline surveys, but it was only as recently as 2001-2002 that NCERT institutionalized the current National Achievement Surveys (NAS) for grades 3, 5 and 8, and later, 10. Initial surveys were based on Classical Test Theory (CTT), whose scores depend on the particular sample of students and items tested, so comparisons and analysis from year to year were not possible; lately, the surveys have moved to an Item Response Theory (IRT) base, which allows trends to be monitored over time and comparisons to be made across the country and internationally.
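
To make the distinction concrete, here is a minimal sketch in Python of the two-parameter logistic (2PL) model commonly used in IRT-based surveys; the parameter values are purely illustrative and not drawn from NAS. Because difficulty and discrimination are treated as properties of an item rather than of the sample of students who happen to take it, items with known parameters can serve as ‘anchors’ that place successive cohorts on a common ability scale, which is what makes trend monitoring possible.

import numpy as np

def p_correct(theta, a, b):
    # 2PL IRT model: probability that a student with latent ability
    # `theta` answers an item correctly, given the item's
    # discrimination `a` and difficulty `b`.
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Item parameters persist across survey cycles, so responses to
# common 'anchor' items allow two cohorts to be placed on the same
# scale even if the cohorts differ in average ability.
for theta in (-1.0, 0.0, 1.0):  # illustrative ability levels
    print(theta, round(p_correct(theta, a=1.2, b=0.0), 3))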

For most countries, quality in school education goes hand-in-hand with the need to ensure equity. Yet, unlike ASER, which is household-based, NAS does not include out-of-school children and cannot, therefore, help address equity concerns. Further, except for the last cycle for grade 10, NAS does not cover children in private schools, so the picture we receive remains incomplete, given that private schools account for nearly 40 per cent of overall school enrolments. Dealing with the learning crisis will need a more complete picture of learning than we currently have available to us.

It is important to emphasize here that the collection of learning data through large-scale external assessments should not be seen as replacing, or in opposition to, the ongoing internal assessments that teachers undertake as part of the teaching-learning process. Instead, as Masters argues, the former serves as an important indicator of the health of the system, providing valuable information to policy makers to ‘establish where learners are in their learning at the time of assessment’, so that corrective action may be taken in time at a policy level.10

Viewed in this light, even if done right, CCE as noted above would still remain an assessment of individual classroom performance, unable to provide information on the well-being of the school system. The two types of assessment are clearly very different in their objectives and scope and should thus be seen as complementary, each yielding valuable data that supports improvements in student learning.

 

The Sustainable Development Goals (SDGs) adopted by the world community in 2015 have a specific goal devoted to quality education. Departing from earlier practice, the SDGs speak of ‘levels of proficiency’, which clearly implies a need to measure such proficiency in a manner that is consistent and comparable within and across countries. The emphasis of the SDGs on the quality of outcomes, rather than just inputs, immediately highlights the value of educational assessment and the critical role it can play in determining the extent to which a number of SDGs have been met. As a responsible member of the international community, India too will need to demonstrate her commitment to these agreed goals and put in place mechanisms to gather the necessary evidence of progress.

 

However, one of the most significant difficulties in obtaining and using student learning data to plan for policy development and implementation, not just in India but also in the larger South Asian region, remains the weak institutional capacity available for this purpose. Technical capacity is nascent and fragile, and often insufficient for the magnitude of the task at hand. In our work with countries in the region, we have observed a number of challenges, some of which can become serious hurdles to accurate measurement of learning achievement.

Assessment of learning is a technical and highly specialized field. Yet, policy related to assessment continues to be driven by generalists with no particular training or expertise in the area. Nepal, for example, recruits officials with education-related qualifications to a dedicated education service and then continues to provide them with appropriate capacity building opportunities through their careers; India, by contrast, discontinued its erstwhile Indian Education Service some time ago. A proposal to revive it was made by the now-defunct Subramanian Committee on the new education policy, but those recommendations seem to have been placed in cold storage.11 As a result, the Ministry of HRD at the Centre and education departments in the states continue to be manned by well-intentioned officers from multiple civil services (but primarily the IAS), who usually lack the technical knowledge needed.

 

The lack of technically trained decision makers leads to several challenges. First, there is often little appreciation of the time and effort required to undertake large-scale learning assessments, which in turn results in poor planning and implementation. A tick-box approach to assessment means that the emphasis is on ‘doing’ rather than on ‘doing well’; significant resources are spent collecting data that is inaccurate or invalid because of incomplete planning or compromised quality processes. Assessments may also be carried out more frequently than needed, with the attendant cost implications, instead of taking enough time to get each one qualitatively right. As an example, consider the proposal made by some officials of the Ministry of HRD a couple of years ago – a national learning survey collecting data every year, about every subject, for every student, in every class from grade 1 through 8! Clearly, such proposals stem from a lack of technical understanding or appreciation of the purpose of large-scale diagnostic assessments.

The paucity of trained human resources is also a constraint in other ways. Within education departments, the permanent staff allocated to assessment are usually inadequate. What is often not realized is that conducting large-scale student assessments requires subject experts, assessment experts, statistics and psychometric experts, sampling experts, survey experts, language experts, graphic artists and designers, field administrators, and communication specialists.12 All of these are rarely available at the same time to most education departments. Additionally, as is the case with other staff working in government departments, each one is tasked with a multitude of roles that frequently go beyond their primary responsibility for assessment.

Periodic transfers between government departments also contribute to the lack of trained resources. Developing skills in assessment takes time, but stability of tenure remains a challenge in the region at all levels. The head of the Education Review Office, Nepal, is on record in a published paper as stating that frequent redeployment of staff members was one of the most difficult challenges faced in their national assessment.13 Several states in India have created assessment cells to try to institutionalize processes and knowledge, but are hampered by the fact that staff tenures at such cells remain irregular.

 

A related issue is that universities in the region rarely include specific modules or courses on assessment methods in their pre-service teacher training programmes, other than perhaps topics related to in-class assessments. Specialized training in areas important to assessment, such as psychometrics, is almost non-existent, as a result of which the pool of expertise available to the public and private sectors is highly restricted. Some capacity has been built over the years within NCERT, but that is clearly not enough to support the vast scope of such assessments across the country.

The allocation of adequate financial resources to assessment also remains a challenge, and can lead to compromises in processes or quality. In a recent assessment in one Indian state, budget constraints led the state to save on the cost of printing Optical Mark Recognition (OMR) sheets for students, preferring to capture their responses through manual data entry instead. As a result, even after several rounds of data re-verification and cleaning (which came with unnecessary additional expense), the state was only able to report the performance of about half of its assessed districts. In Afghanistan, learning data collected under the Monitoring Trends in Educational Growth (MTEG) project lay idle for nearly two years before being analysed, due to resource constraints. Again, there is sometimes a perception that learning assessments can be carried out inexpensively, causing governments to underestimate the financial resources actually required to deliver them.

 

While the importance of measuring learning is widely understood, there are certain potential pitfalls that can reduce the utility of related assessments. First, there is a real risk of moving from no measurement at all to measuring everything. As Muller reminds us, measurement is not an alternative to judgement: ‘Measurement demands judgement: judgement about whether to measure, what to measure, how to evaluate the significance of what’s been measured, whether rewards and penalties will be attached to the results, and to whom to make the measurements available.’14 The massive increase in the scope and coverage of NAS in India in the 2017 cycle allows detailed reporting of learning outcomes – yet reporting is not the end objective. The goal must be for states to use the data that NAS provides to help improve learning in the classroom.

Second, such external assessments are useful only if they are perceived as low-stakes assessments by the children, the teachers, and the schools. In recent years, international agencies and governments have increasingly moved towards funding regimes linked to certain deliverables, including, in some cases, improvement in learning outcomes. Similarly, there has recently been some discussion that under the newly restructured Samagra Shiksha Abhiyan, which has subsumed the erstwhile SSA and RMSA, some part of central funding to states may be linked to the School Education Quality Index (SEQI), which ranks states on a variety of parameters, including performance on learning outcomes.

Linking learning outcomes to incentives carries with it the risk that an external assessment (say, the NAS) would no longer be perceived as low stakes. Instead, the stakes would be raised for the national or state governments involved, since part of their education budgets would be tied to the results, which in turn could lead to pressure on schools and teachers to show ‘better’ performance. Education history is replete with examples of what happens in such circumstances – the Chicago Public Schools cheating scandal, in which teachers helped students on standardized tests, was one such outcome of school incentives being linked to student performance.15

 

Any learning assessment is valuable only in terms of how it supports improved learning in the classroom. It is here that the link between the data emerging from a learning assessment and its implications for policy and practice needs to be strong. In 2009, two Indian states, Tamil Nadu and Himachal Pradesh, considered to be among the most educationally progressive, participated in the Programme for International Student Assessment (PISA). An international assessment that includes more than 70 countries, PISA is conducted every three years by the OECD to evaluate education systems worldwide by assessing the skills and knowledge of 15-year olds from participating countries in science, reading, and math. The two states ranked 72 and 73 respectively, in a field of 74.

One of the major reasons for this was the fact that PISA tests for the application of knowledge, as opposed to traditional examination systems in South Asia that primarily test rote memorization. It is well known that while memorization is useful for dealing with relatively simple questions, more complex concepts and questions require an ability to apply the memorized knowledge. The PISA results could have been used as an opportunity for India to review her assessment systems and move towards a less rote-based regime. Instead, we chose to withdraw from subsequent PISA cycles.

 

The results of such large-scale assessments need to filter back into teacher professional development, changes in curriculum and teaching methodologies, and the entire approach to assessing learning. Strengthening this link requires investment in human resources and technical expertise, and the institutionalization of capacity. India and other countries in the region have made a beginning in this direction, but a great deal more remains to be done; ensuring that our children learn well and are prepared for the future demands that we make that effort.

 

Footnotes:

1. World Bank, World Development Report 2018: Learning to Realize Education’s Promise. World Bank, Washington, DC, 2018.

2. Ministry of Education, Report of the Education Commission 1964-66. GoI, New Delhi, 1966.

3. National Policy on Education. Ministry of Education, GoI, New Delhi, 1968.

4. National Policy on Education. Ministry of HRD, GoI, New Delhi, 1986.

5. Ibid.

6. Ministry of HRD, Right to Education Act, 2009. [Online] Available at: http://mhrd.gov.in/sites/upload_files/mhrd/files/document-reports/RTEAct.pdf [Accessed 19 April 2018].

7. Ministry of HRD, Right of Children to Free and Compulsory Education Rules 2010, 2017. [Online] Available at: http://mhrd.gov.in/sites/upload_files/mhrd/files/upload_document/RTE_Amendment_2017.pdf [Accessed 19 April 2018].

8. A. Kaushik, ‘ASER 2014 – Looking Back’, Annual Status of Education Report (Rural), January 2015.

9. Planning Commission, Twelfth Five Year Plan (2012-2017) – Faster, More Inclusive and Sustainable Growth. Government of India, New Delhi, 2012.

10. G.N. Masters, ‘Reforming Educational Assessment: Imperatives, Principles and Challenges’, Australian Education Review, 2013.

11. T.S.R. Subramanian, Report of the Committee for Evolution of the New Education Policy, 2016. [Online] Available at: http://nep.ccs.in/TSR+Report [Accessed 13 April 2018].

12. Based on its work across the world, ACER follows an assessment process that includes 14 distinct steps, each of which requires a high degree of technical expertise and highly trained teams. Even if this cycle is not followed in its entirety, the need for trained and experienced human resources remains. For more information on the cycle, see https://www.acer.org/gem/about/approach/robust-assessment-program.

13. L.N. Poudel, ‘Reviewing the Practice of National Assessment of Student Achievement in Nepal’, Nepalese Journal of Educational Assessment 1(1), 2016.

14. J.Z. Muller, The Tyranny of Metrics. Princeton University Press, Princeton (NJ), 2018.

15. S. Levitt and S.J. Dubner, Freakonomics: A Rogue Economist Explores the Hidden Side of Everything. William Morrow, NY, 2005.
