Opinion surveys at and between elections
JAMES MANOR
THIS paper focuses on two rather different things. It first assesses some of the strengths and weaknesses of opinion surveys at election time in India in order to suggest ways of making polling more reliable. But then it shifts attention to a different question: how surveys conducted at and between elections might enrich our understanding of election outcomes and of the interplay of society and politics.
Election polls in India have a decidedly mixed record. There is plenty to criticize. But lest we get carried away with complaints, it is worth recalling a time not so long ago when next to no reliable polling took place, so that election analysts were flying semi-blind.
Consider an example. During the 1971 national election campaign which gave a thumping victory to Indira Gandhi’s version of the Congress party, the British political scientist W.H. Morris-Jones visited that party’s headquarters in Bangalore. He interviewed two men there. The first was a senior leader who quietly went into meticulous detail about the party’s problems in various constituencies in Mysore (renamed Karnataka the next year). After a carefully nuanced explanation, he estimated that his party would win about 14 of the state’s 28 seats. Morris-Jones then spoke to a younger man. He was too pumped up with adrenalin to offer many details, but pounding the table with excitement, he said that the party would win all 28.
With no opinion polls to guide him, Morris-Jones concluded that he had met one judicious realist who accurately estimated the party’s prospects, and an unstable young man who based his predictions on hopes rather than hard facts. But when the votes were counted, he was stunned to discover that Indira Gandhi’s Congress won 27 of the state’s 28 seats. Morris-Jones told this story against himself, to illustrate how limited his understanding was in the absence of reliable (or any) polls.
Things have improved since then. At every national and state election, we are bombarded with competing polls, usually making worryingly diverse predictions before the votes are counted. It is tempting to say that ‘feasts’ of data have now replaced that earlier ‘famine’, but ‘feasts’ is too kind a word. The vast outpourings of findings from polls in recent years include an abundance of misleading information. Some of this is the unintended result of methodological errors (see below). But some pollsters consciously engage in shenanigans – the use of loaded or leading questions, and the massaging of figures, to favour one or another party.

The problems extend beyond dubious predictions. Misleading surveys contribute to somewhat (or wildly) inaccurate post-election explanations of results by some commentators who rely on them or who fail to look carefully at details from polls that are more dependable. After the Congress-led United Progressive Alliance (UPA) won the 2004 election, the Indian and international media trumpeted claims that a revolt of the rural poor against the previous National Democratic Alliance (NDA) government had produced the result. This was nonsense. A close examination of Lokniti’s reliable polling plainly showed that the UPA received more support in urban areas than in rural, and more from prosperous groups than from the poor.1 Congress leaders knew this to be true,2 but they did not challenge the myth of the revolt of the rural poor because it made them appear more humane.

New myths emerged from the next parliamentary election in 2009. Several commentators announced that Rahul Gandhi had rejuvenated the Congress and Parliament, attracted the youth vote, and rebuilt his party’s organization.3 All three claims were false. Lokniti’s dependable data indicated that younger people actually voted less often for Congress and its allies than did their elders.4 The new Parliament was the fifth oldest of the 15 since Independence. And Congress strategists at state and national levels – including Rahul Gandhi himself – agreed that the reconstruction of the Congress organization had not yet occurred.5
Erroneous claims, based in part upon poorly conducted polls, are also made at state elections. This occurred, for example, in Bihar in late 2015. Some commentators claimed that women strongly backed the Maha Gathbandhan or Grand Alliance – a combination of Nitish Kumar’s Janata Dal (United), Lalu Prasad’s Rashtriya Janata Dal and Congress – which defeated the NDA led by the Bharatiya Janata Party.6 But Lokniti’s carefully conducted post-poll showed that women and men voted for the Grand Alliance with equal strength (42% each), and that women provided only slightly less support to the NDA than did men (33% against 35%).7 Their data also showed that the strong efforts made by the NDA and Prime Minister Narendra Modi to attract young voters failed. The Grand Alliance led the NDA among all age groups.8
How do we separate the gold from the dross in Indian election polls? This is important for readers in general, and especially for those who may wish to engage in polling. Several things are worth noting. Great care is required in deciding what methods to use in conducting surveys. Polls in Britain before the referendum of 23 June 2016 on remaining in the European Union were almost entirely conducted either by telephone or online. In the period before late April, most telephone polls suggested majority support for the campaign to ‘remain’. But all online polls found less than a majority for that option!9 Such consistent differences clearly demonstrate that the methods used affect poll results, and they suggest that both of these methods are dubious. In India, where computer and even telephone connections are far less widespread than in Britain, both methods are even less reliable. That does not stop some Indian polling organizations from using them, but it indicates that face-to-face interviews with respondents are much more dependable.
Face-to-face interviews are more expensive than telephone or online polling. More care and time must be taken to train interviewers, and results emerge more slowly. That matters during Indian election campaigns when media organizations press pollsters for quick predictions. This explains the frequent use of telephone or online polls. But the face-to-face method enables pollsters to consult a representative sample of the electorate. If you want adequate responses from disadvantaged groups – dalits, adivasis, landless labourers, slum dwellers, among others, all of whom vote in great numbers – you will not succeed online or by telephone.

The most dependable and informative face-to-face polls are slower for one further reason. Lokniti stands out here. It conducts long interviews with voters in which they answer an extensive list of questions. They are asked about their social, economic and educational backgrounds, about a range of issues that influence their voting preferences, about their perceptions of incumbent and rival parties and candidates, about when they make their decisions, and much more. Their answers reveal important things that escape quick polls which mainly seek to know how they will vote, with only cursory enquiries about who they are and why they have reached their decisions. When Lokniti delves into underlying realities and perceptions, and then compares the findings with similar surveys that they have conducted at previous elections, they reveal far more about broad trends of immense importance. One of these – constructive fickleness in voters’ perceptions of their identities – is discussed below.
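The representativeness problem raised above is commonly handled in survey research by post-stratification weighting: respondents from over-sampled groups are down-weighted, and those from under-sampled groups up-weighted, to match known population shares. The sketch below is purely illustrative – the group shares and support figures are hypothetical, not drawn from any actual Indian poll:

```python
# Illustrative sketch of post-stratification weighting (hypothetical numbers).
# Suppose urban voters make up 11% of the electorate but 30% of a poll's
# sample -- the kind of skew a telephone or online poll can produce.
population_share = {"urban": 0.11, "rural": 0.89}
sample_share = {"urban": 0.30, "rural": 0.70}

# Weight each group by (population share / sample share).
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Hypothetical raw support for a party within each group (%):
support = {"urban": 60.0, "rural": 40.0}

# Unweighted estimate simply mirrors the skewed sample; the weighted
# estimate corrects it back toward the true population mix.
unweighted = sum(sample_share[g] * support[g] for g in support)
weighted = sum(sample_share[g] * weights[g] * support[g] for g in support)
print(round(unweighted, 1))  # 46.0 -- inflated by the urban over-sample
print(round(weighted, 1))    # 42.2 -- the population-weighted figure
```

Weighting can repair modest skews, but it cannot conjure up the views of groups a method barely reaches at all – which is the core of the argument for face-to-face interviewing.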
Three other things are needed if polling organizations are to produce high quality surveys. They must be self-critical, so that they frankly acknowledge problems that afflict their efforts. They must be willing to adjust their methods to address those problems. And they must base their strategies on information from social scientists who understand the distinctive conditions in various regions and sub-regions. Otherwise, surveys in this most complex of human societies will produce misleading findings.
One example from Lokniti’s work will illustrate all three points. Many years ago, they conducted a poll in a large Indian state which predicted gains for one political party, but when the votes were counted, it turned out that the party had lost ground. Lokniti acknowledged its error and investigated what had gone wrong. A specialist in the society and politics of that state was asked to examine their polling strategy. He saw that their survey had paid undue attention to the preferences of voters in one sub-region which was untypical of the state as a whole, because the composition of society in that sub-region differed from the rest of the state. Lokniti then made adjustments in later surveys and, as a result, subsequent polls there were more accurate.

Sub-regional diversity within states has one other important implication. Because sub-regions vary, it is unwise for pollsters to look only at the total percentages of voter preferences across an entire state when predicting election outcomes. The danger is that support for different parties may be heavily concentrated or spread thin – within certain sub-regions, or within urban or rural areas – so that the totals for the whole state are misleading.
Let us first consider the urban/rural divide, which brings us back to the 2015 Bihar state election. A Lokniti survey before the campaign began gave the NDA a lead across the whole state of around 4% over its rival, Nitish and Lalu’s Grand Alliance. But people who focused only on that aggregate figure overlooked a serious problem for the NDA. Its votes were heavily concentrated in urban areas where only 11.2% of voters lived. Its early lead in urban centres was a massive 20%, but it led in rural areas by only 2%. Elections in Bihar and nearly all other states are won and lost in rural areas where most people reside.

In Bihar, even a small swing against the NDA in rural areas could have left it with more votes but fewer seats. In the end, however, the NDA slumped badly during the election campaign. Its initial 4% lead in the total vote share across the state turned into a deficit of 6.8%. Its lead in urban areas shrank to a mere 2.4%, but in rural parts, a major swing gave the Grand Alliance an edge of 9.3%. (The NDA won only 17 of 30 urban seats while the Grand Alliance swamped it in rural areas, 165 seats to 41.)10

The message is clear. Pollsters need to pay attention to varied concentrations of votes along the urban/rural divide. Sadly, in the scramble to provide media organizations with hot news during election campaigns, not all surveys do that.
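The arithmetic behind that early 4% figure is easy to verify. A statewide lead is just the population-weighted average of the urban and rural leads, so a huge urban margin counts for little when urban voters are a small minority:

```python
# Back-of-the-envelope check using the pre-campaign Bihar figures cited above.
urban_share = 0.112          # share of voters living in urban areas
rural_share = 1 - urban_share

urban_lead = 20.0            # NDA lead over the Grand Alliance, urban (% points)
rural_lead = 2.0             # NDA lead, rural (% points)

# Statewide lead = population-weighted average of the two leads.
statewide_lead = urban_share * urban_lead + rural_share * rural_lead
print(round(statewide_lead, 1))  # 4.0 -- matching the roughly 4% aggregate lead
```

The same weighting shows why the aggregate was so fragile: a two-point swing in rural areas alone would have moved the statewide figure by nearly as much as the NDA’s entire lead.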
What about variations in the concentration of votes in different sub-regions of a state? Let us look at the example of the Karnataka state election of 2004. Here is the result.
|                | BJP    | Congress | Janata Dal (Secular) |
|----------------|--------|----------|----------------------|
| No. of seats   | 79     | 65       | 58                   |
| Share of seats | 35.27% | 29.02%   | 25.90%               |
| Share of votes | 28.33% | 35.27%   | 20.77%               |
The Congress got more votes than either of its rivals but won fewer seats than the BJP. This happened because support for Congress was evenly spread across all regions while the BJP’s was concentrated in northern Karnataka and the Janata Dal (Secular) vote was concentrated in the southern region. This partly had to do with caste. Lingayats, who strongly backed the BJP, mostly reside in the North. Vokkaligas, who gave solid support to the Janata Dal (Secular), are almost entirely found in the South. Neither group decides elections, but they greatly influence outcomes. Ignoring sub-regional variations is fraught with risks.
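The mechanics can be shown with a toy first-past-the-post calculation. The constituencies and vote shares below are entirely hypothetical – chosen only to illustrate how evenly spread support can yield fewer seats than slightly smaller but concentrated support:

```python
# Toy first-past-the-post illustration (hypothetical numbers, not the
# actual Karnataka constituency data). Party A polls a steady 35%
# everywhere; Party B's support is piled up in four of six seats.
party_a = [35, 35, 35, 35, 35, 35]   # evenly spread
party_b = [50, 50, 50, 50, 4, 4]     # heavily concentrated

# A seat goes to whichever party polls more votes in that constituency.
seats_a = sum(a > b for a, b in zip(party_a, party_b))
seats_b = sum(b > a for a, b in zip(party_a, party_b))

print(sum(party_a) / 6, round(sum(party_b) / 6, 2))  # 35.0 34.67 -- A leads on votes
print(seats_a, seats_b)                              # 2 4 -- but B wins more seats
```

Party A here is in the position of the Congress in 2004: ahead on votes statewide, behind on seats because its rivals’ votes were concentrated where they counted.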
Finally, we must recognize two valuable contributions to our understanding that scrupulous opinion surveys make – especially the most reliable surveys, conducted by Lokniti. The first is connected to the limitations of their surveys: they candidly alert us to important questions which their data cannot answer. The second is connected to the promise of their surveys: they also alert us to important trends which their data can help to illuminate.

Consider two examples of limitations. Analysts at Lokniti report findings on the percentages of voters who support various parties during election campaigns, but they scrupulously refrain from translating those figures into projected seat totals – because they know that such a step entails so much guesswork that it can go wrong. For evidence of how often it goes wrong, we need only recall the variations between other pollsters’ projections of seat totals, and the inaccuracy of many. Lokniti’s restraint should be seen as a warning to readers about media reports of seat totals – reports which ignore an important limitation of even the best opinion surveys.
Lokniti also provides us with a second example of how one of the limitations of their surveys points to an important issue. Their polls have repeatedly indicated that anti-incumbency sentiments have been strong at constituency level even when incumbent parties were returned to power in state and national elections. Their face-to-face interviews seek information on a wide range of matters from respondents which, if studied carefully, reveal a great deal about why voters make their decisions, how those decisions shape election outcomes – and about the character of India’s democracy.11 But they acknowledge that their data cannot explain the strength of constituency-level anti-incumbency. They know that it occurs but not why it occurs. Their admirable candour about the limitations of their data is useful because it poses a challenge to scholars to address this question by other means.
Readers are all too familiar with media reports during election campaigns in which pollsters predict election outcomes. But dependable opinion surveys have two other uses that are more important than predictions which often prove to be misleading. We can determine which surveys are ‘dependable’ by checking their predictions against election results. Those which turn out to be accurate are usually dependable.
The first way in which reliable surveys are valuable is in developing post-mortems on election results. Studies that seek to explain election outcomes – such as those cited in notes 1 and 14 of this paper, which drew upon Lokniti’s polling data – can use details from surveys conducted during election campaigns to show which groups supported different parties, how voters viewed ruling parties and their rivals, which issues had strong (or weak) appeal to different groups, how different leaders are perceived, etc.

The second way in which polling can enlighten us is by providing insights that enrich our understanding of India’s society, its politics – and their interactions. We can learn from surveys that are conducted both during and between election campaigns. Here are four examples. The first three are opportunities that have not been seized. The fourth illustrates what can be accomplished.
First, the architects of opinion surveys between elections might examine analyses based on other, non-election surveys as they search for promising lines of enquiry. One remarkable source of insights will illustrate this. India’s National Council of Applied Economic Research and the University of Maryland conducted surveys of the same 44,554 households across the country in 2004 and 2012. They asked respondents questions about a great diversity of issues, several of which might be followed up by other pollsters. Consider two surprising findings. They discovered that in those years household incomes increased in percentage terms faster in rural areas than in urban. They also found that household incomes of several disadvantaged groups (OBCs, Dalits, adivasis and Muslims) had risen (again in percentage terms) faster than the incomes of more prosperous high caste Hindus.12 These insights – and many others that emerged from that study – provide pollsters with promising openings for further enquiries.
A second formidable study, which has had too little attention, is similarly promising. A three-member team combed through rich data-sets on panchayati raj institutions in numerous Indian states, using highly sophisticated quantitative techniques. The result is the most important book on democratic decentralization to appear anywhere in the world since 2000.13 They provide a ‘warts and all’ assessment that deals with corruption, service delivery and numerous other topics that provide valuable openings for pollsters. That study reached one startling conclusion. When panchayats are allowed to function with at least semi-adequate funds and powers for several years, poor people acquire sufficient awareness, confidence and skills in the grassroots political game to prevent ‘elite capture’ and to ensure that local democracy helps to reduce poverty.

This source can help pollsters to examine a number of important issues – for example, variations in the functioning of panchayats across states where governments have given them generous or only limited support; variations between states in which local dwellers have only weak identification with political parties (e.g., Madhya Pradesh) and states where partisan fervour is more intense (e.g., Kerala or Andhra Pradesh); and the varied implications across states for gram panchayats of the weakness (everywhere in India) of panchayats at sub-district level, and thus of bureaucrats’ dominance at the latter level.
A third example refers to an opportunity to separate appearances from reality. On current evidence, it appears that efforts to promote communal polarization are more likely to succeed in some Indian states than in others. At the Bihar state election in 2015, an attempt late in the campaign to foment distrust between communities largely fell flat. One BJP MP from Bihar stated flatly: ‘Bihar is not Uttar Pradesh’.14 He was referring to the apparently successful polarization in Uttar Pradesh during the 2014 Lok Sabha election, and to current efforts to sustain it as the 2017 state election approaches. He might also have cited apparently similar trends in Gujarat state elections since 2002.

Opinion surveys conducted between elections in various states might help to establish whether these apparent differences across states have substance – and why. Commentaries on BJP state election victories in (for example) Madhya Pradesh, Rajasthan and Karnataka have noted that that party succeeded because it ‘played down’ Hindutva. Carefully framed questions in surveys could yield insights into important aspects of popular perceptions which make it more (and less) likely that polarization will attract votes.
Finally, let us turn to an extremely important question which Lokniti’s opinion surveys have helped to answer. It connects to the polarization issue noted just above. How do voters’ political identities influence their decisions at election time? Findings from Lokniti surveys at successive elections (state and national) since the early 1990s clearly demonstrate that while ‘identity politics’ is usually important, the specific identities that influence voters often change from election to election. At one election, voters may be preoccupied with their linguistic or religious identity; with their national, regional or sub-regional identity; with one of three types of caste identity (jati, jati-cluster or varna); with their class identity; or with their urban or rural identity. But at later elections, they tend to shift their preoccupations from one to another of these identities, and then to another – often, and with great fluidity. This ‘constructive fickleness’ is extremely important because it prevents tension and conflict from building up along a single fault line in society – as occurred in Sri Lanka, with ghastly results. Lokniti’s findings demonstrate this more authoritatively than any other body of evidence.

Their surveys also inform us about many important aspects of change and continuity over time because they ask many of the same questions at election after election – and between elections. This does not mean that their questionnaires never change. Before each election, they ask scholars with specialized knowledge to suggest new questions that might be incorporated, so that their surveys can extract insights into an even broader set of key issues. Their open-mindedness – which leads them to reach out for such suggestions – offers an instructive example for others to follow.
Footnotes:
1. For more detail, see J. Manor, ‘The Congress Party and the "Great Transformation",’ in S. Ruparelia, S. Reddy, J. Harriss and S. Corbridge (eds.), Understanding India’s New Political Economy: A Great Transformation? Routledge, London and New York, 2011, pp. 217-18.
2. Interviews with two senior Congress strategists, New Delhi, 15 and 17 July 2004.
3. See for example, Fareed Zakaria in Newsweek, 23 May 2009. This article used more moderate language than euphoric references in the Indian press to the ‘Gandhi Whirlwind’ (Economic Times, 4 June 2009) and to ‘Rahul The Man. The Magic. The Politics’ (Outlook cover, 1 June 2009). But Newsweek, which Zakaria edited, followed up his report with another by Sudip Mazumdar entitled ‘A Revolution is Underway in India’ on 8 June 2009. Two other commentators whose judgements are usually sound echoed this view in more measured terms: Chris Morris of the BBC and Vir Sanghvi, both speaking on television on 16 May 2009.
4. The differences between voting patterns by different age groups were small, but Congress and its allies fared better as the ages of voters increased.
5. Interviews with Congress strategists from Uttar Pradesh, Madhya Pradesh, Andhra Pradesh and Karnataka in New Delhi, Hyderabad and Bangalore between 19 May and 3 June 2009. Rahul Gandhi was not interviewed, but he made it clear he had decided not to accept a ministerial post and to focus on regenerating the party because he knew that the task remained to be completed.
6. Frontline, 13 and 27 November 2015.
7. The Indian Express, 10 November 2015.
8. The Indian Express, 10 November 2015. When asked ‘Who can develop Bihar better?’, young voters preferred the Grand Alliance to the NDA, 53% to 37%.
9. See https://ig.ft.com/sites/brexit-polling/, accessed 25 April 2016. One of Britain’s leading psephologists, John Curtice, explains this in more detail in ‘The Divergence Between Phone and Internet Polls: Which Should We Believe?’, at the website whatukthinks.org, 25 May 2016.
10. For details, see J. Manor, ‘Undone by Its Own Mistakes: How the BJP Lost Bihar’, Economic and Political Weekly 51(10), 5 March 2016, pp. 60-69.
11. Even before Lokniti was created, scholars at the Centre for the Study of Developing Societies (where it is based) were using opinion surveys and other methods to explore these issues. This comment is based on discussions with Bashiruddin Ahmed, Ramashray Roy, Ashis Nandy and their colleagues during the late 1970s and early 1980s. For a taste of this, see, R. Roy, The Uncertain Verdict: A Study of the 1969 Elections in Four Indian States. Orient Longman, New Delhi, 1973.
12. This is the India Human Development Survey. See http://ihds.info, accessed 10 May 2016.
13. Hari K. Nagarajan, Hans P. Binswanger-Mkhize and S.S. Meenakshisundaram, Decentralization and Empowerment for Rural Development. Cambridge University Press, New Delhi, 2014.
14. He was quoted by Vijaya Pushkarna in ‘Loss of Face’, The Week, 22 November 2015, available at http://theweek.in/theweek/cover/after-the-Bihar-election-narendra-modis-invincibility-is-a-thing-of-the-past.html, accessed 5 February 2016. For more detail, see James Manor, ‘Undone by Its Own Mistakes: How the BJP Lost Bihar’, Economic and Political Weekly 51(10), 5 March 2016, pp. 60-69.