Building better surveys
IRFAN NOORUDDIN
THE contribution of the Lokniti-CSDS election surveys to our understanding of Indian electoral politics is hard to overstate. For long, Lokniti and CSDS have set the standard for election surveys, generating a corpus of empirically grounded knowledge that is unparalleled anywhere in the developing world and likely rivals even the election study traditions of the United States and Great Britain. Soon after Phil Converse and his colleagues at the University of Michigan wrote The American Voter,1 Rajni Kothari authored Politics in India.2 Since that seminal volume appeared, the Lokniti-CSDS community has developed, refined, and innovated a sampling methodology that generates credible, nationally representative, multi-strata random samples in arguably one of the hardest and most complex environments in which to do so. If pollsters in the United States complain of the technological disruption to their craft caused by the spread of mobile telephony,3 imagine what they would think of administering polls in India, where survey instruments must be translated into a multitude of languages, where surveys must be conducted in person, and where interviewers must therefore enter the mazes of urban slums and travel to distant villages – and, in one unforgettable anecdote, sleep in a cowshed because the subject had not yet returned from the foothills with his sheep and Lokniti's rigorous sampling protocols do not allow for a replacement.
Admiration for what Lokniti-CSDS have achieved notwithstanding, my task here is to identify the limitations that hold back this remarkable institution from growing its impact and influence in an increasingly crowded, competitive, and chaotic polling milieu.
In this essay, I offer three critical observations. First, while Lokniti-CSDS has done a creditable job predicting elections – a result of the aforementioned rigorous sampling methodology – its record in generating theoretical answers to the core questions in Indian politics has been more limited. Second, the burden of providing the ‘election survey of record’, as it were, has limited Lokniti’s flexibility and willingness to adopt new methodological innovations in measurement and survey research. Third, these problems are compounded by the relatively closed nature of the Lokniti fraternity, which constricts the flow of ideas that could lead to improvements.
What is the primary function of an election survey? If the answer is ‘to predict the winner of the election’, then one ipso facto endorses the pathologies of the polling industry, which has made ‘calling an election correctly’ the ultimate metric for assessing the quality of the pollster and the utility of the survey. The problem with this approach is that the survey boils down to a single question – for whom do you intend to vote? – and the tallying up of those responses. Analysis of variation in intended vote choice is then limited to exploring how vote intention breaks down across ascriptive categories such as gender, caste or religion, and relatively fixed attributes such as whether one lives in an urban or rural area, geographical region, and the ever-popular media exposure index. In other words, we answer the questions ‘who votes’ and ‘for whom’, but the key question of ‘why’ is left unanswered. That is not to say that pundits do not offer all sorts of explanations for why they think the Indian voter behaves as she does, but rather that this part of the ‘analysis’ is unencumbered by data.
Focusing on the why question is not simply a call for incorporating experimental methodologies in our work, but rather for the development of explanatory theories that privilege concept generation and measurement. Consider the parallel developments in the fields of public opinion, political behaviour, and political psychology in American political science. As in India, the central question was and remains vote choice, but to answer that question a rich array of new theoretical concepts was conceived, measured, and analyzed. The Michigan School gave us party identification;4 Jennings and Markus5 discussed the impact of political socialization; Stimson’s pioneering work introduced the notion of policy mood;6 Dawson7 talked about linked fate, while Kinder and Sanders8 showed us how Americans were divided by colour and helped inspire better measures of racial resentment. More recently, Hillygus and Shields9 gave us the idea of ‘cross-pressured’ voters. And I have not even broached the hugely influential literature on priming and framing effects in campaigns, or on the role of emotions such as fear, anger, anxiety or outrage in how citizens engage with politics.
By contrast, no such theoretical innovation has occurred to enhance our understanding of micro-political behaviour in India. At the macro-level, political scientists have done important work to think about the ‘Congress System’,10 the second democratic upsurge,11 the effects of party system fragmentation12 and the rise of regional parties,13 of sub-nationalism as an identity,14 and of anti-incumbency and electoral volatility.15 But where are the corresponding ideas at the individual level? Are we perhaps making a more provocative point: that Indian voters are not motivated by ideas and that psychological processes of attitude formation are unhelpful for understanding their behaviour? So why do Indian voters behave as they do? Is it because they are paid, bribed, or coerced by political parties and their brokers, who leverage patron-client networks? In other words, is it that their votes are sold and their turnout is bought? In the absence of arguments that allow voters to be thinking, rational actors making thoughtful choices between alternatives in a contested political space, we reduce them to pawns to be swapped and shuffled by political and social elites. Of course, the study of Indian politics is not unique in this regard; the disjuncture between how we talk about voters in developing countries the world over and their counterparts in the West is jarring.
The simplistic, mechanistic reduction of voters to chess pieces being rearranged by political parties sits uncomfortably with the popular understanding of why the 2014 Lok Sabha elections turned out the way they did. That election had two heroes: Narendra Modi, who led the Bharatiya Janata Party (BJP)-led National Democratic Alliance (NDA) to form the first majority government since 1984, and the aspirational voter. Epic stories about both were spawned, but the aspirational voter in particular became a truism overnight. Even as Modi has his doubters, the notion that young India is sick and tired of politics as usual and yearns for something better has no naysayers. But where is the empirical evidence for this? There isn’t any. Which surveys provide a measure of aspiration? None. Do we even know what aspiration means and how we would capture it in a survey? We do not.
Fixing this situation requires changing how we think about the function of surveys. We need to design surveys to answer questions, by which I really mean to test hypotheses against each other. We need to develop, test, and refine measures of ideological congruence, aspirational versus identity politics, linkages to patronage networks, national identity, and foreign policy attitudes. Armed with these, we can utilize the amazing power of random sample surveys to know better the political hopes and dreams of over a billion people. Without such measures, all we know is who won the election. We do not need Lokniti-CSDS surveys for that. We just need Google.
Designing good surveys is hard work. With years of experience, Lokniti-CSDS scholars are among the very best at writing questions that can be understood and answered, and at offering appropriate response categories from which respondents can choose. But the mode of the survey has remained largely unchanged over time. A hallmark of Lokniti’s surveys is that they are conducted in person – this ought never to change, at least not for the foreseeable future. But is it necessary that surveys be conducted on paper, with the surveyor checking answers on the questionnaire as the respondent answers?
Exploring alternatives to this mode is important and increasingly feasible thanks to advances in technology. For example, moving surveys from paper to tablets would allow Lokniti to study the effects of question order, which in turn would give us a better handle on issues such as priming effects. A case in point: among the first questions asked on the standard Lokniti National Election Study is whether one voted and for whom. The survey then continues with questions about preferences for leaders, which party is closer to the respondent’s preferences in various policy domains, and so on.
But what effect does thinking about one’s vote choice have on subsequent responses? If I said I voted for the BJP, do I then feel pressure to justify that choice via my answers to other questions? Of course, the opposite could also be true: having expressed a policy preference for more economic reform, one feels pressured to appear consistent in one’s stated vote choice. The point is that we do not know if or how such question order effects operate, or whether they affect some respondents more than others. By harnessing cheaper computing technology and the power of randomization, we could begin to study this issue.
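The mechanics of such a design are simple once the instrument lives on a tablet. The sketch below randomizes whether a respondent hears the vote questions before or after the policy questions; the question wordings and block names are hypothetical, not drawn from any actual Lokniti questionnaire.

```python
import random

# Hypothetical question blocks (illustrative wordings only).
VOTE_BLOCK = ["Did you vote in the election?",
              "For which party did you vote?"]
POLICY_BLOCK = ["Which party is closest to your view on economic reform?",
                "Which leader do you prefer as Prime Minister?"]

def assign_order(respondent_id, seed=2014):
    """Randomly assign a respondent to hear the vote questions first
    or last; comparing the two groups' policy answers then estimates
    the question-order (priming) effect."""
    rng = random.Random(seed * 100003 + respondent_id)  # reproducible per respondent
    vote_first = rng.random() < 0.5
    order = VOTE_BLOCK + POLICY_BLOCK if vote_first else POLICY_BLOCK + VOTE_BLOCK
    return vote_first, order

# Roughly half the sample lands in each condition, so any systematic
# difference in policy responses across conditions is an order effect.
assignments = [assign_order(i) for i in range(1000)]
share_vote_first = sum(v for v, _ in assignments) / len(assignments)
```

Because assignment is keyed to the respondent identifier, the ordering can be reproduced later for analysis without storing anything extra in the field.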
Similar advances in how questions – especially about sensitive issues – are posed have not been incorporated adequately in Lokniti’s surveys.16 Three distinct examples will illustrate this point. First, voter turnout is notoriously over-reported in surveys. Voting is normatively desirable, and so respondents feel a social pressure to say they voted even when they did not. This pressure is exacerbated when the interview is face-to-face, because now respondents have to admit to a stranger that they did not vote. But simple changes to question wording have been shown to reduce over-reporting in the turnout question.17 Lokniti should conduct similar analysis and revise its turnout question. Second, social desirability bias does not operate solely with regard to voter turnout. Sensitive questions about respondents’ views on religion, gender, or caste, or about electoral fraud, corruption, and tax evasion, are unlikely to elicit truthful answers if asked directly. Corstange18 uses the list experiment design coupled with unobtrusive measures to study vote buying in the Middle East. Such techniques have also been widely used to study racial prejudice. Lastly, a dirty little secret of surveys is that far too many respondents choose the ‘no opinion’ or ‘don’t know’ option. They do so because they genuinely might not have an opinion, but also to mask unpopular views.19 Kailash20 has used the Lokniti surveys to show that the propensity to answer ‘don’t know’ is disproportionately higher among women, lower castes, and less privileged sections of Indian society. But these responses – and, therefore, the respondents who chose them – are typically dropped from our statistical analysis of the survey, thereby systematically silencing the voices of those least likely to be represented in mainstream politics. We need to do better at reducing the high proportion of ‘don’t know’ responses, and getting there will require trying forced choice questions, feeling thermometer indicators, and sensitive, less obtrusive measures.
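The list experiment mentioned above has a refreshingly simple estimator. In this toy illustration – the counts are invented, not real data – control respondents report how many of three innocuous items apply to them, while treatment respondents get the same list plus a sensitive item (say, accepting money for one’s vote). Since only a count is reported, no individual ever directly admits the sensitive behaviour, yet its prevalence can be recovered in aggregate.

```python
# Invented counts for illustration: control respondents saw 3
# innocuous items; treatment respondents saw those 3 plus the
# sensitive item.
control_counts   = [1, 2, 0, 2, 1, 3, 2, 1, 2, 1]   # out of 3 items
treatment_counts = [2, 2, 0, 2, 1, 3, 2, 2, 2, 1]   # out of 4 items

def list_experiment_estimate(treated, control):
    """Difference in mean counts estimates the share of respondents
    for whom the sensitive item is true."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treated) - mean(control)

# Here: 1.7 - 1.5, i.e. an estimated 20% for the sensitive item.
estimate = list_experiment_estimate(treatment_counts, control_counts)
```

The price of this anonymity is statistical efficiency – the estimator is noisy at small samples – which is precisely where Lokniti’s very large samples would be an advantage.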
Survey design involves more than just question wording and order. The current model of Lokniti’s surveys is the workhorse model of pollsters everywhere – a one-time slice or cross-section of the Indian voting age public. But politics is dynamic, and we have no way of tracking attitudinal change in the current Lokniti survey. Consider again the 2014 election: at what point between December 2013 and May 2014 did the NDA victory and UPA defeat become a foregone conclusion? Or consider the much ballyhooed wave election that led to the AAP landslide in the Delhi state elections. One reason we all missed it is because, as any surfer can attest, in riding a wave timing is everything! The broader point is that, to understand campaign effects, the standard Lokniti survey is inadequate. Hillygus and Henderson21 use an eleven-wave survey to study campaign dynamics in the United States. Doing this face-to-face, as required in the Indian context, might prove infeasible, but only if we assume that a sample of 25,000-plus is a requirement for Lokniti. It need not be.
A last point: as I have argued elsewhere, it is high time that Indian survey researchers incorporate experimental designs in their work.22 Doing so combines the external validity of a random sample with the internal validity of an experiment. The end result is more power to assess causal relationships between variables and greater confidence in our results. The problem statement for this Seminar issue suggests ominously that survey experiments limit ‘the possibility of conducting a holistic analysis of citizens’ preferences’.23 Frankly, I have no idea what this means or why it would be so. It is a canard that should be rejected if Lokniti’s election surveys are to continue making contributions in the future.
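The logic of a survey experiment can be sketched in a few lines: within an already-random sample, each respondent is randomly assigned one of two framings of a question, and the difference in mean responses estimates the causal effect of the frame. The simulation below is purely illustrative – the effect size (0.3) and the response scale are arbitrary assumptions, not estimates from any survey.

```python
import random

def run_framing_experiment(n=2000, frame_effect=0.3, seed=7):
    """Simulate a two-arm framing experiment embedded in a survey:
    random assignment within the sample identifies the frame's
    causal effect as a simple difference in means."""
    rng = random.Random(seed)
    treated, control = [], []
    for _ in range(n):
        latent = rng.gauss(0.0, 1.0)       # respondent's underlying attitude
        if rng.random() < 0.5:             # coin-flip assignment to the frame
            treated.append(latent + frame_effect)
        else:
            control.append(latent)
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treated) - mean(control)   # estimated framing effect

effect = run_framing_experiment()
```

Because assignment is random, the two arms are balanced in expectation on everything else – which is exactly the internal validity the essay argues for, layered on top of the external validity of the random sample.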
Generating ingenious, workable solutions to the problems afflicting the polling enterprise requires a collective investment. Lokniti’s amazing leadership and network provide a remarkable public good to political analysts everywhere. No wonder, then, that they tend to be protective of their data. But less protection and more openness is the need of the hour. For instance, while the questionnaires are made available online,24 obtaining the data requires permission from Delhi. Often the data are not shared at all; rather, the analysis is conducted by the CSDS Data Unit and the results communicated to the investigator. The design of the survey is also closely guarded, with little to no opportunity for those outside the Lokniti network to propose new questions.
Over the past ten years, Lokniti-CSDS has very generously organized and funded a summer school in quantitative analysis of political data. Lokniti provides its data, its resource persons, and the facilities. I believe this initiative has the potential to be transformative for the study of politics and the training of political scientists in India. Further, Lokniti, in collaboration with Sage, now publishes a peer reviewed journal, Studies in Indian Politics, providing a much-needed outlet for rigorous empirical research. This is another game-changing public good. A similar effort is now needed for thinking about Lokniti-CSDS’s crown jewel: the election survey itself. One model might be the American National Election Study’s Online Commons, where researchers can propose ideas for new questions or topics, share pilot studies and other research, and generally contribute to improving survey methodology.
This last point might strike some as straying beyond my remit. But as survey methodology and design become more sophisticated, keeping up with best practices becomes more onerous and can quickly outstrip the capacity of even the most dedicated team. Building an institutional framework that allows – nay, encourages – cross-fertilization and outside contributions benefits all concerned.
Footnotes:
1. Angus Campbell, Philip E. Converse, Warren E. Miller and Donald E. Stokes, The American Voter. John Wiley, New York, 1960.
2. Rajni Kothari, Politics in India. Orient Longman, Hyderabad, 1970.
3. D. Sunshine Hillygus and Brian Guay, ‘Polling in the United States’, Seminar 684, August, 2016.
4. Philip E. Converse, et al., 1960, op. cit.
5. M. Kent Jennings and Gregory B. Markus, ‘Partisan Orientations Over the Long Haul: Results from the Three-Wave Political Socialization Panel Study’, American Political Science Review 78(4), 1984, pp. 1000-1018.
6. James A. Stimson, Michael B. MacKuen, and Robert S. Erikson, ‘Dynamic Representation’, American Political Science Review 89(3), 1995, pp. 543-565.
7. Michael C. Dawson, Behind the Mule: Race and Class in African-American Politics. Princeton University Press, Princeton, N.J., 1994.
8. Donald R. Kinder and Lynn M. Sanders, Divided by Color: Racial Politics and Democratic Ideals. University of Chicago Press, Chicago, IL, 1996.
9. D. Sunshine Hillygus and Todd Shields, The Persuadable Voter: Wedge Issues in Presidential Campaigns. Princeton University Press, Princeton, N.J., 2014.
10. Rajni Kothari, 1970, op. cit., fn. 2.
11. Yogendra Yadav, ‘Reconfiguration of Indian Politics: State Assembly Elections, 1993-95’, Economic and Political Weekly, 13 January 1996.
12. Pradeep Chhibber and Irfan Nooruddin, ‘Do Party Systems Matter? The Number of Parties and Government Performance in the Indian States’, Comparative Political Studies 37(2), 2004, pp. 152-187.
13. Adam Ziegfeld, ‘Coalition Government and Party System Change: Explaining the Rise of Regional Political Parties in India’, Comparative Politics 45(1), 2012, pp. 69-87.
14. Prerna Singh, How Solidarity Works for Welfare: Subnationalism and Social Development in India. Cambridge University Press, New York, 2015.
15. Irfan Nooruddin and Pradeep Chhibber, ‘Unstable Politics: Fiscal Space and Electoral Volatility in the Indian States’, Comparative Political Studies 41(8), 2008, pp. 1069-1081.
16. An important exception is the vote choice question which involves an actual ballot box so that respondents can record their vote choice confidentially.
17. Michael J. Hanmer, Antoine J. Banks and Ismail K. White, ‘Experiments to Reduce the Over-Reporting of Voting: A Pipeline to the Truth’, Political Analysis 22(1), 2014, pp. 130-141.
18. Daniel Corstange, The Price of a Vote in the Middle East: Clientelism and Communal Politics in Lebanon and Yemen. Cambridge University Press, New York, 2016.
19. Adam J. Berinsky, Silent Voices: Public Opinion and Political Participation in America. Princeton University Press, Princeton, N.J., 2004.
20. K. Kailash, ‘The More Things Change, the More They Stay the Same in India: The Bahujan and the Paradox of the "Democratic Upsurge",’ Asian Survey 52(2), 2012, pp. 321-347.
21. D. Sunshine Hillygus and Michael Henderson, ‘Political Issues and the Dynamics of Vote Choice in 2008’, Journal of Elections, Public Opinion and Parties 20(2), 2010, pp. 241-269.
22. Irfan Nooruddin, ‘Making Surveys Work Better: Experiments in Public Opinion Research’, Studies in Indian Politics 2(1), 2014, pp. 105-108.
23. Rahul Verma, ‘The Problem’, Seminar 684, August 2016.
24. http://www.lokniti.org/national-election-studies.php.