Putting the 'science' into political science

 

notes for a biography of a discipline in transition

 

 

 

by

 

Mark N. Franklin

 

 

 

 

 

 

 

Inaugural lecture as the first Stein Rokkan Professor of Comparative Politics delivered

at the European University Institute, Fiesole near Florence, November 23rd 2006

 

Putting the 'science' into political science: notes for a biography of a discipline in transition [Show title slide]

 

Thank you, Professor Mény, for that kind introduction. Good evening. An inaugural lecture is a strange animal. In it, I am supposed to say something as an expert in my field which will be accessible to an audience that goes far beyond my field. I thought I would take the opportunity to reflect on the nature of my discipline.

 

I have been a political scientist for forty years, and in that time I have seen my discipline transformed from a member of the humanities to a member of the social sciences. This is a story with many strands, and I can only talk about the developments I myself experienced. Other members of my department would have somewhat different stories to tell, but I venture to suppose that those stories would amount to much the same thing. More importantly, the story I am going to tell is one that touches upon the work of Stein Rokkan – the great Norwegian social scientist whose name adorns the chair I hold. I did not know Stein well, but I knew him well enough to know that he did much more than write important books and articles. He also collected data, and was one of the instigators of a movement that plays a central part in the transformation I am about to describe.

 

The transformation in fact took much longer than the span of my career. The behavioral revolution in political science is supposed to have started in Chicago in the 1930s [slide 2]. I did my doctorate at Cornell University during the 1960s, but Cornell was a place to which the behavioral revolution did not come until after my time there. Indeed, my supervisor, Clinton Rossiter, held a joint appointment in the History department at Cornell and is much better known as a historian of the American Presidency and of American Political Thought than as a political scientist. So I was trained in the pre-scientific ethos of the department of Government at Cornell, and only discovered the scientific approach because a new member of the faculty, who arrived when I was about to embark upon my dissertation, urged me to attend the Summer School in Research Methods at the University of Michigan – the first point at which my career intersected with that of Stein Rokkan, who had attended the University of Michigan a few years previously.

 

That summer school was an eye-opener for me. The year was 1968, and among those attending the summer school that year were some of the most distinguished political scientists of the day. There were very few graduate students in the classes. Most of those attending the summer school were faculty. The chairs of half a dozen political science departments had brought virtually their entire staffs to receive a basic training in the new approach; and I met any number of people that summer who went on to become distinguished practitioners. This was not the first year that the summer school had been taught, but it was the year that it really took off and became accepted as a requirement for anyone who was serious about doing behavioral research. For at least twenty years thereafter there was virtually no-one studying political behavior who had not either received their training at Michigan or received their training from someone who had themselves been trained at Michigan.

 

I did not return to Cornell. I begged an invitation to follow in Stein's footsteps by becoming a visiting scholar at the University of Michigan, where I sat in on classes while writing up my dissertation. And the following summer I again attended the summer school, which again was a hotbed of intellectual ferment. In the period that I spent at Michigan I met many of the people who went on to create the new discipline of political science, and it was in watching their careers as much as in developing my own that I gained the insights that I am going to talk about today.

 

So what does put the 'science' into political science? Before I try to answer that question, I should tell you that not everyone who works in a political science department considers themselves to be a political scientist. Beyond the subfield of political behavior, not everyone sees themselves as employing the scientific method. Indeed, to this day there are some distinguished scholars who still scoff at the very idea of a 'science of politics.' But the first thing I have to tell you is that political science is not a science of politics. Nor does it purport to provide political practitioners with blueprints for political success, though many political scientists have had distinguished careers as political advisors, following in the five hundred year old footsteps of Niccolo Machiavelli, the most famous of those advisors whose villa still stands only a mile or so from here.

 

[Slide 3] Those who employ the scientific method in political science are trying to map out the ways in which people govern themselves, and to establish how different political institutions work and why they work the way they do. Does it matter whether a country is governed by a parliamentary or a presidential system? Does it make any difference whether the judiciary is independent of the executive? Do electoral systems that try to produce outcomes that are proportional to the votes cast (as in the Netherlands or Israel) work better than electoral systems used in the United States or France or Britain that focus on identifying a single 'winner'? Above all, why do people vote and why do they vote the way they do? And do the conduct and outcomes of elections provide any sort of guidance as to what policies should be pursued and which politicians get to pursue those policies? Does that guidance system work better in some situations (countries, states, cities, epochs) than in other situations, and can we design institutions that improve the responsiveness of the guidance system in those places and times where it works badly or not at all?

 

These are important questions, because government policies have the potential to determine how people lead their lives and whether certain activities are pursued. One example that should hit home in an institution of higher learning comes from the fact that government policies largely determine the funding available for academic research. Why those policies are the way they are and whether and how they might be changed is thus an important subject of study.

 

And this brings us to the scientific approach [slide 4]. What puts the 'science' into the sorts of questions I just listed is the attempt to answer them in terms of general principles rather than of anecdotal evidence. Long before the behavioral revolution many people could have told you in what ways British political life was different from political life in France, for example, based on observation and experience. What the scientific approach tries to add is knowledge of what it is about Britain and France that makes political life in those countries different, such that if particular features of one or other country were exported to some third country, political life in that third country would mimic in predictable respects political life in the country from which the features were exported.

 

In other words, the scientific approach tries to replace the proper names of countries with measurements that characterize those countries and which tell us something about their properties in just the same way that astronomers try to replace the proper names of stars with measurements that characterize those stars and which tell us something about their properties. In political science, we adopt this procedure not only when studying countries but also when studying people. Again, we try to replace the proper names of individuals by measures of their characteristics.

 

The main difficulty faced by political science is the same difficulty that faces astronomers. We cannot conduct experiments. We cannot take a random sample of people and give them a new political system to see what happens. We have to make use of variations that occur naturally from one country to the next, from one person to the next, or to the same country or person over time. Unfortunately, because we cannot conduct experiments, we cannot easily rule out the possibility of contamination. When we seem to see a connection between cause and effect, it is always possible that the real reason for the connection is some unmeasured factor. The supposed tendency of individuals to become more conservative as they age turns out to have been due to the fact that, when age effects were first studied, back in the 1950s, most older people had been born before the rise of socialist parties in many countries. Nowadays people seem to become more liberal as they age, because people over forty were mostly born in an era that was more socialist than the world of today. At neither period was there in reality any link between the aging process and politics. What really happens is that people get stuck in their ways, and this inertia makes them carry forward in time the characteristics of the period when they grew up.

 

If those who conducted the original research in this field could have put people into a laboratory and watched them age, they would have discovered the truth quite easily. But because they had to make use of naturally occurring variation and did not think of all the ways in which their data might be contaminated, early researchers were misled.

 

In some of the natural sciences contamination can be ruled out by careful cleaning and calibration of measuring instruments. In certain other sciences, contamination can be ruled out by randomly assigning subjects to different treatments. In political science we can do neither of these things. Though much good political science is done by means of carefully chosen case studies, I will not consider that work here. In my branches of political science, the way we handle contaminated data is the same way astronomers do: by measuring every possible source of contamination and then teasing out the effects that interest us by statistical manipulation. But this means we have to know what kinds of contamination occur and how to measure this contamination before we can discover the effects of real interest. And when we try to study the effects of contaminants, those effects too are contaminated -- by other contaminants and by the things we really want to study. So there is an important respect in which we need to know everything before we can know anything, making it very hard to get off the ground. No wonder a friend of mine has been heard to remark, only partly tongue in cheek, that he no longer talks of natural sciences and social sciences but of hard sciences and … easy sciences. I suspect there are many in the audience today who will wryly recognize what he had in mind with that remark.
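For readers who like to see such things spelled out, here is a minimal sketch, in Python and with invented data, of what 'measuring the contamination and teasing out the effects by statistical manipulation' amounts to in practice: the suspected contaminant is measured and included in the same model as the cause we actually care about. The variable names and numbers are hypothetical.

```python
# A minimal, hypothetical illustration of statistical control:
# the effect of interest is estimated while a suspected contaminant
# is measured and included in the same model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 3000                                         # e.g., survey respondents

contaminant = rng.normal(size=n)                 # something not originally of interest
cause = 0.8 * contaminant + rng.normal(size=n)   # correlated with the contaminant
outcome = 0.5 * cause + 1.0 * contaminant + rng.normal(size=n)

# Naive model: the contaminant is left out, so part of its influence
# is wrongly attributed to the cause we care about.
naive = sm.OLS(outcome, sm.add_constant(cause)).fit()

# Controlled model: the contaminant is measured and included, so the
# coefficient on 'cause' comes out close to the true value of 0.5.
controlled = sm.OLS(outcome, sm.add_constant(np.column_stack([cause, contaminant]))).fit()

print(naive.params, controlled.params)
```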

 

Because political scientists have to measure everything that could possibly contaminate their findings, they need to measure a very great number of things. A typical election study asks over a thousand questions of the respondents who are interviewed. And because we need to know so much about each individual, we also need a very large number of individuals so as to be able to disentangle all the different effects. When studying individuals (for example, to try to understand their voting behavior) this means we need big expensive surveys. Asking a thousand questions of three thousand individuals would be typical. When studying countries (for example, to try to understand the effects of different institutional arrangements) we are in trouble. There simply are not enough countries for us to be able to disentangle all the effects that we need simultaneously to disentangle in order to evaluate the phenomenon of interest. Often things are even worse. Often the phenomenon we want to study exists only in one country, or in a very small number of countries, so that there is no prospect of teasing out the causes of that phenomenon by statistical analysis of the kind I was talking about a moment ago. In such situations we need to take a different approach.

 

Several approaches are available, some of them adopted by scholars in this room. However, I want to focus on one approach in particular – and this is where we come back to the work of Stein Rokkan [slide 5]. We can measure the phenomenon of interest and all the things that might contaminate our understanding of that phenomenon, and then take the measurements again, and again, and again. Over the passage of time we can wait for change to occur in the phenomenon we are studying and in the supposed causes and concomitants of the phenomenon. Eventually, if we continue this process for long enough, we will have enough data to be able to use statistical methods to tease out the relationships of interest by analysing variability over time rather than variability over space. This approach derives an added advantage from the fact that many of the things that contaminate our findings remain constant over time, and we can employ research designs that take advantage of this fact to reduce the number of things that need to be measured. Unfortunately, to perform reliable time-series analyses we need a lot of data points. Since each data point is separated in time from the previous data point, it follows that a lot of time must elapse in order for the data to become available.
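As a concrete (and entirely invented) illustration of that last point, the sketch below sets up two imaginary countries whose unmeasured, time-constant traits contaminate a comparison of levels, yet drop out completely once we analyse change over time within each country.

```python
# Hypothetical sketch: a contaminant that differs between countries but is
# constant over time drops out when we analyse within-country change
# rather than cross-country levels.
import numpy as np

rng = np.random.default_rng(1)
T = 40                                             # repeated measurements per country

series = {}
for name, trait in [("A", 0.0), ("B", 5.0)]:       # unmeasured, time-constant traits
    cause = trait + rng.normal(size=T)             # cause levels differ with the trait
    outcome = 2.0 * cause + 3.0 * trait + rng.normal(scale=0.1, size=T)
    series[name] = (cause, outcome)

# Pooled cross-country levels: the unmeasured traits bias the slope upward.
cause_all = np.concatenate([series["A"][0], series["B"][0]])
outcome_all = np.concatenate([series["A"][1], series["B"][1]])
pooled_slope = np.polyfit(cause_all, outcome_all, 1)[0]

# Within-country change over time: the constant traits cancel out.
d_cause = np.concatenate([np.diff(series[c][0]) for c in ("A", "B")])
d_outcome = np.concatenate([np.diff(series[c][1]) for c in ("A", "B")])
within_slope = np.polyfit(d_cause, d_outcome, 1)[0]

print(round(pooled_slope, 2), round(within_slope, 2))   # biased well above 2.0, versus close to 2.0
```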

 

The most important thing that has happened during my career as a political scientist is that we have continued to measure the things that interest us, and with the passage of time we have arrived at a point where we finally have enough data to be able to start to make sense of what is going on. Because we need to understand everything in order to understand anything, once things start to fall in place, lots of things fall in place all at once. That is what is happening in political science today. It is a very exciting time.

 

The role played in this story by data archives and other data services is a fundamental one, and this is where Stein Rokkan's name again comes into my story. Stein was an inveterate collector of data, and a great stimulus to others to do the same. A moment ago I referred to the fact that he helped to found the Norwegian Data Services – a model for such facilities elsewhere. He also edited a newsletter chronicling the attempts being made to produce data that would be stored and made available by those facilities. I can remember being chivvied by Stein into writing an article on data for studying party systems in Europe that was published in the fifth issue of his newsletter – another point of contact between Stein and myself. Sitting in the audience today is Bjorn Henrikson, who worked with Stein to set up that great Norwegian institution and is here today to represent not only that institution but also Stein's many students and friends, together with the Rokkan family, who were not able to be with us today.

 

In the rest of this talk I will tell a story that I think illustrates the way in which our understanding of political phenomena has been illuminated by changing our focus from comparing individuals and countries at one point in time to looking at what happens over the passage of time.

 

One enduring question asked repeatedly by political scientists is 'do democratic institutions work?' [slide 6] More specifically, 'What (or who) do representatives represent, and does public policy reflect the desires of the electorate?' The importance of this question is indisputable, but how does one go about answering it? In countries such as Britain and the United States, where elected representatives are tied to particular geographic areas that the British call constituencies and the Americans call districts, one possibility is to look at the policy preferences of individuals, dividing them into people who have different representatives, and see whether the lawmaking activities of those representatives reflect differences in the priorities of their constituents. If residents of some districts are characterized by greater concern for lower taxes (say) than residents of other districts, then do the representatives of districts that favor lower taxes distinguish themselves by fighting for that policy?

 

In the early days of the behavioral revolution this was, indeed, the only way to approach the question; and early researchers attempted to construct a test that would answer the question posed in that way. Those early researchers indeed went further than simply looking at the behavior of representatives. They also questioned the representatives about the reasons for their behavior so as to be able to tell whether representatives who were following the desires of their constituents were aware of those desires and following them deliberately, or whether they simply shared the same values as their constituents but were following their own preferences. The different routes by which representation could be achieved are illustrated in what is by now a very famous diagram, generally referred to as the Miller-Stokes Representation Paradigm [slide 7]. In this illustration, a correspondence between the roll call votes of representatives and the policy preferences of their constituents could be achieved either because representatives held the same preferences as did their constituents [slide 8] or because they were aware of their constituents' preferences [slide 9], or both [slide 10]. The research design also allowed for the possibility that representatives followed what they thought were the preferences of their constituents, incorrectly imputing to them their own preferences [slide 11]. That is, the representative might erroneously suppose that their constituents agreed with them on the policy in question.

In the area of Civil Rights (a highly salient issue in the United States in 1958 when this research was carried out) conformity between policy-making activities and the desires of constituents was apparently achieved by a mixture of three different routes [slide 12], with the predominant effect (70% of the total effect [circle]) being due to representatives bowing to the wishes of their constituents. The remaining portion of the linkage was split evenly between representatives following their own preferences which happened to be in line with constituency opinion [underline], and representatives following perceived constituency preferences that were only accidentally correct [underline] -- accidentally because, although the representative was imputing his own preferences to his constituents, those preferences happened to be the same as those of his constituents.

 

This finding was quite satisfying. Unfortunately, policies other than civil rights did not yield similarly satisfying findings. In those other areas, incorrect imputation of preferences to constituents was widespread, and the apparent willingness of representatives to take account of constituents' preferences was minimal -- especially in foreign and social policy.

 

Things were even worse when attempts were made to employ this approach outside the United States. By coincidence, my own PhD dissertation focused on ways of discovering the policy preferences of legislators in the British House of Commons -- a body where roll call votes can be predicted with virtual certainty on the basis of party allegiance. Because during my year in Michigan I was sponsored by one of the authors of the Miller-Stokes Paradigm, it was perhaps inevitable that I would acquire the task of trying it out on British data. What I found was most disappointing [slide 13]. The only issue where Members of Parliament had even the most rudimentary awareness of the policy preferences of their constituents was the issue of state ownership of the means of production (nationalization) [underline] -- an issue that quintessentially demarcated the two major British parties at the time of my research, and the one on which it would thus have been easiest for a representative to guess the position of his or her constituents. Moreover, in no case did the representative's legislative behavior accord with the measured desires of his or her constituents [circle 4 times]. The lower right-hand arrow is missing in every one of the mini-diagrams, indicating that the question of whether Members of the British Parliament correctly perceive the wishes of their constituents is irrelevant, since this research approach never shows them following those wishes in any case.

 

Even worse, the approach proved impossible to apply to the majority of countries. Most countries, as I am sure you are all aware, use a form of proportional representation in which it does not even make sense to ask which voters are the constituents of which representative.

 

Does this mean that democracy works even less well in countries other than Britain and the US than it does in those countries? Not necessarily. What it means is that the important question of how well democracy works cannot be addressed comparatively using the Miller-Stokes approach. And the fact that the approach does not work in countries outside the United States also raises the question whether it is an appropriate approach to use even within the United States itself. The book that was supposed to definitively explore the nature of the representation process in the United States using this approach -- the book universally known among scholars of my generation as 'Miller and Stokes forthcoming' -- never forthcame, and though the authors were themselves less than forthcoming about why the book was not written, it can be safely assumed that, clever men that they were, they realized that their approach was flawed.

 

Why was their approach flawed? I would like to focus on this question for a few minutes because it illustrates the fundamental importance of testing a proposition by use of methods that are appropriate. The question of whether individual representatives actually represent the wishes of their specific constituents seems to be the right way to approach the question of representation in Britain and the United States because the way in which the British House of Commons and American Congress are elected assumes that this is how representation will occur. In fact, however, it is not necessary for representation to occur in this fashion. Indeed, in countries where the supporters of different political parties are represented in the legislature in proportion to their strength in the country as a whole, rather than according to where they live, it is, as I already pointed out, impossible to approach the question in this way. Proportional Representation -- or PR -- systems assume that representation will occur through the mechanism of party: and that the party with most votes will be given the greatest power to enact its policies. Such a mechanism focuses on outcomes rather than processes: on the policies that are enacted rather than on the orientation and motives of representatives [slide 14]. And such an approach has the inestimable advantage of enabling us to ask whether change in policy results from change in public preferences. Since our major interest in election outcomes is whether they will result in policy change, it is a big disadvantage of the Miller-Stokes approach that it cannot tell us whether the outcome of an election is in accord with the will of the people, only whether individual representatives reflect the policy preferences of their constituents.

Of course, Miller and Stokes had no alternative. When they developed their approach, research strategies involving a single time-point (what are called 'cross-sectional' research designs) were all that were possible. The only data available to those researchers were the data they had collected themselves. But one of the things that these giants bequeathed to the political science profession was a data archive in which they lodged the data they had collected, and in which subsequent investigators lodged theirs. Today this archive, along with the Norwegian Data Services and several others, contains half a century of election studies and enormous quantities of other data, much of which has been collected annually or more frequently over most of that period. So today we have alternatives to the cross-sectional design, and these alternatives have made possible a more powerful approach to the study of representation.

 

In 1991 James Stimson published a little book called Public Opinion in America: Moods, Cycles, and Swings in which he showed that the weight of public opinion on a variety of issues tended to move roughly in step from liberal to conservative and back again in a sort of cycle [slide 15]. Along with two co-authors (one of whom, Bob Erikson, is in the audience tonight) Stimson had already established that what they called 'policy mood' was decisive in determining certain election outcomes -- especially the defeat of Jimmy Carter in 1980. But there is something unsettling about the mood cycle. It does not exactly parallel the election cycle. Look at what happens when we superimpose presidential eras on a picture of mood which is the average of all the issues in the previous slide [slide 16] (I have updated the graph from data presented on Stimson's web site). Mood starts to shift AFTER a new president takes office, as though in reaction to his tenure. With the single exception of the Nixon years, mood moves in a liberal direction when Republicans are in office and in a conservative direction when Democrats are in office. So while mood may well be a cause of election outcomes such as that in 1980, it also seems to be a result of presidential incumbency: the longer a president stays in office the more the public mood swings against him.

 

The Clinton years are an interesting anomaly. After an initial move in the expected direction, from about 1994 onwards mood oscillates up and down as though in response to the centrist policies that were all that Clinton could get through the Republican Congress newly elected in that year. But then, during the first term of the second George Bush, we see mood shifting strongly in a liberal direction as though in reaction to that president's conservative policies.

 

How is this possible? The central insight came from a young political scientist by the name of Christopher Wlezien, who was working under Stimson at the time when the data were being assembled for Moods, Cycles, and Swings. Wlezien looked at this same chart and said to Stimson "It's a thermostat! Look: people want new policies, but when they get what they want they don't necessarily continue to want more of the same." Stimson, in the time-honored reaction of established scholars the world over, said "That's nice, Chris, but I have things I need to be doing" -- or words to that effect. I am reminded of the story in The Hitch Hiker's Guide to the Galaxy of a young lab assistant who discovered the secret of faster-than-light travel, only to be lynched by an angry mob of distinguished scientists for whom the only thing worse than ignorance was a smartass.

 

It is a small world. Chris was a graduate student at the University of Iowa. We met each other when I was a Visiting Fulbright Scholar there in 1985. Soon afterwards Stimson vacated his position at the University of Houston to move to Iowa where he became Chris' mentor. When Chris completed his PhD, his first job was at the University of Houston, and he arrived in Houston at about the same time as I arrived there to fill the line vacated when Stimson moved to Iowa. The world of behavioral political science is a small one!

 

It was 1989 when both Chris and I arrived at the University of Houston. Chris' burning ambition was to prove by his research that public opinion responded to policy outputs and functioned like a thermostat, demanding more right-wing policies as leftist policies accumulated, then moderating the demand for right-wing policies as those demands in turn were met, and eventually swinging back to the left as right-wing demands were satisfied. I read his draft papers and was struck by the power of his argument, as was Stimson's collaborator Bob Erikson, then still at the University of Houston, who eventually succeeded where Chris himself had failed in persuading the author of Moods, Cycles, and Swings that the public is responsive to policy outputs. People notice what policies they get. If they do not get what they want, they 'throw the rascals out'; but if they do get what they want they notice that too, and eventually stop wanting more of the same. The insight is best formulated, for those who are not frightened by equations, as a very simple expression that describes the public's Relative Preference (R) for more or less policy in a particular domain as being the difference between their absolutely preferred level of policy (P*) and the level of policy currently in place (P) [slide 17].
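Written out -- this is my own reconstruction of the expression on the slide, which is not reproduced in the text -- the relationship is simply:

R = P* - P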

 

On the basis of this equation it is easy to see that the desire for more or less policy (the relative preference for policy, R) is determined both by how much is wanted (P*) and how much is provided (P). A negative R (a preference for less policy) will result from P becoming greater than P*. This can be due to a decrease in desire, or (and this is Chris' insight) to an increase in provision. In regard to provision, public opinion operates like a thermostat -- a public thermostat -- helping politicians to regulate the level of policy provision. When more or less policy is wanted the public sends a signal to this effect (often the outcome of an electoral contest), and when policy is sufficiently adjusted the signal stops.

 

Consider how similar this is to the thermostats that some of us have in our living rooms. We set the desired temperature (lower when we are away, higher when we are at home) and when the room temperature drops the thermostat turns on the heat. Once the room reaches the desired temperature the thermostat turns the heat off again. Apparently, it is the same with public preferences for policy.
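For those who would rather see the mechanism run than hear it described, here is a toy simulation of the thermostatic logic just sketched. Every number in it -- the level the public wants, the starting level of policy, the pace at which policymakers respond -- is invented purely for illustration.

```python
# Toy simulation of thermostatic public opinion. Nothing is estimated from
# real data; it only shows the logic: the public signals R = P* - P, policy
# moves toward the signal, and the signal fades as policy catches up.
P_star = 10.0          # the level of policy the public wants (held fixed here)
P = 0.0                # the level of policy currently in place
adjustment = 0.3       # how strongly policymakers respond to the signal each period

for year in range(1, 11):
    R = P_star - P                 # the public's relative preference ("more, please")
    P = P + adjustment * R         # policymakers supply part of what is demanded
    print(f"year {year:2d}: policy = {P:5.2f}, signal R = {R:5.2f}")

# The printed signal shrinks toward zero as policy approaches P*: once people
# have what they want, they stop asking for more of it.
```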

 

This wonderful insight explains the 'swing of the pendulum' noticed by commentators everywhere as applying to politics the world over; but explains it in a manner consistent with a rational, thinking, responsive public, rather than (as theretofore was common) in terms of the ungrateful reactions of a fickle electorate. We could express the insight in terms of an arrow diagram like the ones used to express the Miller-Stokes paradigm but these diagrams are now out of fashion in political science for the good reason that they do not express the nature of a relationship as clearly as does a simple equation.

 

Chris confirmed his theory by collecting data on the amount of money spent by the U.S. government on various policies, and relating this expenditure to the desire of the American public for more or less expenditure on the policies in question. Preferences for spending were shown to be determined both by changes in the objective situation (for instance, a reduction in threat from the Soviet Union would reduce the desire for defense expenditure) and by changes in expenditure (more spending on defense would reduce the desire for such spending). The equations described shifts in preferences with spectacular accuracy in certain policy areas (like defense and welfare) but were much less accurate in other areas (like foreign aid and space exploration).
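To give a flavor of the kind of test involved -- this is a schematic of my own, not Chris' actual specification or data -- one can regress the public's net preference for more spending on a stand-in for the objective situation and on spending itself, and look for the negative coefficient on spending that is the thermostatic signature.

```python
# Schematic of the kind of test described above (invented data, invented
# variable names): does net preference for more spending fall as spending
# rises, once the objective situation is also measured?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
T = 50                                        # years of survey and budget data

threat = rng.normal(size=T)                   # stand-in for the objective situation
spending = np.cumsum(rng.normal(size=T))      # stand-in for policy in place (P)
net_preference = 1.0 * threat - 0.8 * spending + rng.normal(size=T)   # R, by construction

X = sm.add_constant(np.column_stack([threat, spending]))
model = sm.OLS(net_preference, X).fit()
print(model.params)    # a clearly negative coefficient on spending is the thermostatic signature
```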

 

As so often happens in science, the answer to one question raised a different question. Why does the thermostat operate in some policy areas but not in others? Chris assumed that the answer was to be found in the fact that some policy areas are more important to the public than others. Important policy areas have greater visibility, and the public responds to policymaking activity in those areas, but not to policymaking activity in areas of lesser visibility. His equation could accommodate this elaboration through the addition of one more term -- the letter S for salience, as in the following equation [slide 18].
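Written out, one plausible form -- again my reconstruction, since the slide itself is not reproduced here -- is:

R = P* - S·P

where S runs from 0 to 1 and simply scales how much the level of policy actually in place feeds back into the public's signal.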

 

Where visibility of a policy area is low, the value of S approaches 0 and P has no effect. In more highly visible areas, the value of S approaches 1 and P is 'turned on,' allowing the public's relative preference for more or less policy to reflect the level of policy outputs.

 

I do not throw a second equation at you just to put spots before your eyes. I do so because the story has another episode -- one in which I was myself once again a player. I started out by telling you that the main problem with the Miller-Stokes paradigm for studying representation was that it did not work outside the United States. The question of whether this little equation would work outside the United States was thus of considerable interest to me. And I could offer a research venue -- a sort of laboratory -- in which to confirm that what made the difference between the equation working and the equation not working was whether the policy domain was publicly visible and important. That laboratory is the one in which I have been working for the past two decades -- the laboratory provided by elections to the European Parliament. In the countries that are members of the EU we have good measures of public opinion regarding European unification going back to the early 1970s, a time when the visibility of such policies was very low. And we have measures of European policy outputs that begin in that early period and continue through the years in which European unification emerged from relative obscurity to become one of the most visible of policy areas in these countries. In this laboratory we could test what had so far only been assumed in the US context: that changes in the visibility of a policy area would show up in the extent to which the public notices and reacts to policy outputs.

 

Before I tell you about the results of this test let me digress to emphasize the fact that this is a story about how we put the 'science' into political science. I will put up a slide you saw earlier, to remind you of what we were talking about [slide 19]. The story started with a puzzle: why does public opinion seem to respond to presidential incumbency? We answer that puzzle by introducing a new principle: thermostatic responsiveness of public opinion to policy outputs. Introducing a measure of policy outputs into our explanation of changes in public opinion turned a contaminant into a measure and removed the discrepancy we first observed, but only in some policy areas. Now we needed to discover what it is that distinguishes those policy areas. An elaboration of the original theory suggests that what distinguishes the policy areas is their salience, so we have to find a means of measuring that. In so doing we replace the proper names of policy areas by a measure of what distinguishes them.

 

In fact we did not do exactly that. We replaced one research venue (American public opinion) by another research venue (European public opinion) where we had a policy area that was known to have increased in salience, and re-tested the thermostat hypothesis by conducting a quasi-experiment in that new venue. I have no doubt that a test that involved measuring the salience of different US policy areas would have worked just as well, but it would not have involved me.

 

 

I will not bore you with a table of coefficients. Instead I will show you a graph [slide 20]. This graph plots R (the relative preference for, in this case, European Unification – the green line) against P*-P (in this case the difference between preferred and actual unification policies – the red line). It shows clearly that, during most of the 1970s, while European unification was still lacking in visibility among European publics, there is no relationship between the two lines on the graph. The red line (which starts and ends below the other), representing the right hand side of the equation, moves apparently at random during those years in relation to the green line (which starts and ends above the other), representing the left hand side of the equation. During those years, R does not equal P*-P. From about 1978 (the year marked with the arrow on the graph) the two series start to move in step, however, just as we would expect of a policy area that had achieved public visibility.

 

I must tell you that we ourselves were staggered by these findings. We had expected to discover a general correspondence between the two series starting at about the end of the 1970s. We were not expecting the degree of correspondence that this graph shows.

 

The finding is staggering mainly because no-one who studies European unification had ever supposed that European publics pay the slightest attention to the volume of unification policies being enacted by Brussels. We only suspected that the relationship might exist by extrapolation from the U.S. findings. Indeed, many scholars would deny that European publics have any way to become aware of the current level of unification policy. The measure of policy employed to construct the P*-P line on the graph was the number of lines of directives and regulations promulgated by the European Community in each year, a very obscure statistic. How could the European public possibly know whether the volume was rising or falling? One would be hard put to find an elected member of the European Parliament who knew the answer to that question. How then do mere citizens know the answer? Could our finding be a statistical fluke?

 

It is not reasonable to suppose that the degree of correspondence seen in the chart could arise by chance. Those are not two trends that just happen to move together because both are rising or both are falling. Once they start to move together, the two trends jig and jag virtually in unison across the chart. Though there is evidently more going on than is characterized by the thermostatic relationship (the lines do not move exactly in step), this relationship accounts for 80% of change in public demand for unification policies. Moreover, other research has shown that the correspondence seen here over the European Union as a whole is echoed in each individual member country. This is a situation with analogies in many other sciences. Theory calls for a relationship to exist that has not been observed. Research establishes by indirect means that the relationship does exist. Observers then have to scramble to find the object (in this case the mechanism) that science has told us is there. The same thing happened in the study of our solar system when astronomers calculated that there had to be a planet far beyond the orbit of Uranus. Eventually someone looked hard enough in the right place and found Pluto. I have no doubt that those studying the European Union will one day figure out how it is that European publics become aware of the volume of policy emanating from Brussels.

 

Whatever the mechanism may be, European publics are able to correctly identify changes in their lives that are due to the operations of the European Union as distinct from those that are due to the operations of their own national governments. This suggests a level of public sophistication that goes far beyond what we previously imagined; but one of the features of contemporary political science is that we are repeatedly discovering that people are smarter than we ever imagined.

 

What puts the 'science' into political science? The accumulation of theoretical insights that give rise to new measures which in turn progressively enable us to characterize relationships with greater and greater accuracy. This is a hallmark of the process of scientific research in all disciplines. My little story shows the same process at work in one subfield of political science. The story also shows how we can take a set of findings from one research venue and replicate them in another research venue. That too is a hallmark of scientific research.

 

I chose this story because it is neatly self-contained, involves relatively few players, and because, among those players, I had a small role to play at both ends of the story. I also chose it because it illustrates the way in which the scientific study of politics depends on the accumulation of data, which links the story to the work of Stein Rokkan in a way that is relatively unfamiliar. Many people have read Stein's papers and books. These are cited ubiquitously. But Stein's work in supporting and encouraging the collection of data, and its storage in such a way as to make it publicly available to scholars, is less well-known. Although it was not Stein's data that I used in the story I just told, the story itself is a vindication of Stein's faith in the importance of data collection for social research, and an illustration of the role played by institutions that he helped to found in the storing and disseminating of such data.

 

I could tell stories that do involve Stein's data that would illustrate the same themes as are illustrated by the thermostat story, but there was not time in the space of one inaugural lecture to do justice to more than one story. Still, I hope I have given you some of the flavor of what behavioral political science is all about, at this exciting time in the transition of my discipline from the humanities to the social sciences.

 

Thank you for your attention and your patience. I would be happy to answer any questions [slide 21].