Platt, J. (1996) 'Has Funding Made A Difference To Research Methods?', Sociological Research Online, vol. 1, no. 1, <>.

Copyright Sociological Research Online, 1996


Has Funding Made A Difference To Research Methods?

by Jennifer Platt
University of Sussex, UK

Received: 6/2/96      Accepted: 6/3/96      Published: 29/3/96


It has been argued that foundation funding has distorted methods in American sociology in the direction of quantification. This argument rests on a number of assumptions, of which the key one is that in the absence of such funding, method would have developed differently. Data on the methods of funded and unfunded research articles are analysed, and show that the trend to increasing levels of quantification is almost equally present in unfunded work, which suggests that funding should not be held responsible for the trend.

Foundations; Research Funding; Quantitative Methods; US Sociology

Work on which this article is based was funded by ESRC grant R 000 23 4322 and a Rockefeller Archive Center grant; this support is acknowledged with gratitude, as is the help of the Columbia University, Ford Foundation and Rockefeller archives used.


It has been argued that the course of empirical work in sociology has been influenced by patterns of research funding and, in particular, that either or both of foundation and governmental funding have had such influence. Insofar as the argument relates to research methods, such funding is seen as having played an important role in promoting quantification and other 'scientific' aspects of method - which the writers commonly characterize as a distortion. This paper considers the plausibility of the suggested causal connection with foundation funding, with reference to US sociology from the 1920s to the 1980s, and discusses the methodological and evaluative issues raised by such interpretations. Space limitations dictate the omission of much material on governmental funding, though Fisher (1993) sees foundation funding as closely linked to the interests of the state[1].

One of the earlier writers to argue in this way was Gouldner (1970: pp. 444-5). He sees a 'growing instrumentalism ... [which] finds its expression in "theoryless" theories... and a corresponding emphasis upon seemingly neutral methods, mathematical models, investigational techniques and research technologies of all kinds ... their "hard" methodologies function as a rhetoric of persuasion. They communicate an image of a "scientific" neutrality ... their conceptual emptiness allows their researches to be formulated in terms that focus directly on those problems and variables of administrative interest to government sponsors. They thus avoid any conflict between the applied interests of their government sponsors and the technical interests of a theoretically guided tradition.'

Fisher has been particularly prominent recently, and he argues similarly:

The commitment to 'academic science' and the strengthening of the professionalization process placed the social sciences above reproach. These 'new intellectuals' could then become the technical experts who would provide unbiased, objective solutions to social problems... Rockefeller philanthropy was actively involved in confirming through reproduction those parts of the dominant ideology that placed faith in objectivity, professionalism, science and the academy. (Fisher, 1983: p. 224).
Fisher has since been involved in controversy with Martin Bulmer who, on the basis of his own research into the workings of the Rockefeller foundations, argues that they were in practice disinterested and that their very considerable financial support for sociology left sociologists able to pursue their own intellectual priorities. This, however, led to more quantitative work because without such external support academic sociologists had simply not got the resources to collect large-scale data and compile figures from them (Bulmer and Bulmer, 1981: pp. 402-3; see also Bulmer, 1984b; Fisher, 1984). Bulmer has elsewhere (1984a) made an intensive study of one key department and its use of such funds. Useem (1976), in a study of several social sciences, finds that the receipt of federal funds is strongly associated with conducting quantitative research, and concludes that federal research money does make a difference. Against this background, we assess the state of the argument and relate it to some systematic data on sociology.

Setting The Context

First, we consider what data would be needed to support an adequate interpretation, since different authors have approached the issues in different ways, and have not always deployed the full range of material which their conclusions seem to require. What would need to be shown in order to demonstrate that funding had made the imputed difference to research methods?
  1. Funding activity had been present in the relevant field.
  2. Funding agencies had followed policies which, whatever their controllers' conscious intentions, in practice discriminated between some methods and others, favouring the 'scientific' in what they chose to fund.
  3. The research actually funded was consistent with those policies.
  4. The funding was sufficiently influential to produce an outcome different from what it would otherwise have been.
For such a difference to have been made intentionally, we would need to add to that list:
  5. Agency policies deliberately favoured some research methods over others.

Item four on this list is the one which raises the crucial difficulty, because it requires a counterfactual conditional: what would have happened if the agency funding had not been present? The other items listed are necessary conditions for the effect of funding, but without this one the set does not provide sufficient conditions for the proposition to hold. However prominent the funding, however discriminatory the funders' policies, however well the work funded fitted their priorities, whatever their intentions, if the net outcome was not different from what it would have been in the absence of that funding it cannot be explained by the funding's presence. We review the main works which have provided data potentially relevant to these questions, and consider how adequate they are.

McCartney (1971) concludes that major funding agencies have a preference for statistical and 'scientific' work which is skewing the methodological style of sociology. His evidence for the agencies' preferences is thin; the criteria for evaluating proposals which he quotes as an example (1971: p. 34) are quite ambiguous. The main weight of his data, however, comes from journals. He shows that increasing proportions of articles use statistics, and that the statistics used have become more sophisticated. Over the same period, more articles come to acknowledge financial support, and the supported articles more often use statistics than the unsupported ones; moreover, the specialities with the higher proportions of support were more likely to be expanding. He concludes that the use of statistics is more attractive to sponsors, and that financial support is affecting the growth and decline of speciality fields. Unfortunately, however, although his data are consistent with these conclusions they are equally consistent with other conclusions. We do not know that funders rejected more non-statistical applications, or that financial support was not led by growth of interest rather than vice versa; the causal connections are not proven.

Examining the Rockefeller foundations between the wars, Fisher (1980) shows that their policy for social science in Britain favoured the empirical, 'scientific' and 'realistic', although much money was given to institutions rather than specific projects. His analysis is at the institutional level, and does not look at specific works or systematically compare the funded and the unfunded. His assertion that there was control over methodology rests mainly on the foundations' stated preference for the empirical over the theoretical. He recognizes that there was a general trend in that direction anyway (1980: p. 299). However, he does not provide any evidence of discrimination between projects on methodological grounds. His 1983 paper is explicitly concerned with the policy-making process rather than its impact, but sees Rockefeller as supporting a dominant ideology of science and objectivity which maintains the existing social order. He recognizes that the policies and outcomes did to some extent serve the interests of the social-science community (Fisher, 1983: p. 222). His book (1993) focuses on the (US) Social Science Research Council (SSRC) and its relation to Rockefeller, arguing - though again there is little on outcomes at the level of method - that it moulded the social sciences to take an objective, scientific approach to suit an applied technical role rather than one which was critical or fundamental. Fisher recognizes that some social scientists played an active part in this process.

Bulmer and Bulmer (1981), without giving detailed documentation, agree with Fisher that in the 1920s the Rockefeller foundations' funding had the effect of encouraging more systematic empirical research and, in particular, quantitative work. On research methods, however, they see the crucial difference made being to enable academics to do such research within their own institutions, rather than for it to be conducted elsewhere. Bulmer (1984a) has also made a detailed study of the University of Chicago, and shows how Rockefeller money was used there. It made possible relatively large-scale quantitative research, but was also used to fund a range of qualitative work which has become famous as characterizing the 'Chicago School'. It was not methodologically discriminatory funding because, in line with general Rockefeller practice of the time, it was given as block grants to the institution. Bulmer (1984b) argues that Fisher's 1983 interpretation puts too much stress on the initiating role of the foundations, since social scientists influenced them as well as vice versa. Ahmad's review suggests that the truth lies somewhere between them, and rightly adds that both tend (Ahmad, 1991: p. 516) to assume that foundation policies were necessarily successful. All these authors, although they suggest general conclusions, are writing empirically only about the Rockefeller foundations[2] and the interwar period; it is unwise to assume, without further evidence, that even totally convincing data on that also cover later times and other agencies.

Useem (1976) has questionnaire data on anthropology, economics, political science and psychology. He shows that, for his subjects, using quantitative data over the past five years was correlated with having received federal research money over the same period; he concludes that it is clear from this - which it is not! - that 'government money is preferentially allocated to social scientists involved in quantitative ... research' (1976: p. 152). He shows that the correlation between receipt of funds and use of quantitative data is almost unaffected when recent citations in social-science journals - taken as a measure of disciplinary recognition - are controlled, and concludes that government priorities in allocating money were different from those of the disciplines. However, the period over which funding was asked about is 1968-73, and the citations counted are from 1973; it would have been more appropriate to have a recognition measure which preceded the funding. (Lipset and Ladd's data [1972: pp. 82-4] show that in sociology high achievers in the discipline more often received federal money). Useem's data on the effects of federal funding on research plans are more convincing. He asked whether recent reductions in levels of funding had affected research plans, and found that they quite often had, more often for those whose research expenses were high and for those who in the past had applied for federal funds and only sometimes been successful. He concludes that the situation is one where both substantive and methodological aspects of research are 'subject to government influence' (1976: p. 158). This is plainly true, although to summarize the findings in that way is somewhat misleading. The types of methodological effect which respondents had anticipated included 'reduction in research scale' and looking for 'new sources of funding' (1976: p. 154), both of which are obvious direct responses to financial cutbacks, but need not imply any change in the basic character of the research; this is a very different picture from one where government actively intervenes to affect methods. From 11% (Economics) to 26% (Psychology) anticipated use of 'less costly research methods', but there is no indication whether these would be less quantitative.

It is noticeable that the writers divide into two camps: those who have detailed material about foundation dynamics but only broad generalities about the social sciences, and those who have detailed material about social scientists or their publications but little about the role played by funding in producing them. This means that no writer, except perhaps Bulmer in relation to Chicago, has built a fully convincing case connecting the two. A limitation of agency-based research is that it naturally focuses on what the agency did fund, and does not show what was done without funding; this limitation is intensified when the agency files are (as is commonly the case) mainly on proposals accepted, not those rejected. This makes it hard to draw the comparisons which are crucial to demonstrating a causal difference.

Summarizing the data presented by other authors on our questions, we find: (a) clear evidence that quantitative work has increased and is more often funded; (b) little material on detailed methodological discrimination by foundations; and (c) nothing systematic on ways in which the outcome differs from what might otherwise have occurred.

Methodological And Conceptual Issues

One problem in treating the issue empirically is that some kinds of research cannot be carried out at all without funding, and others are much facilitated by funding. If such research is done, it is highly likely that it will have been externally funded. It does not follow that the researcher would not have chosen to do that research anyway if s/he had been able to. Under those circumstances, the increased availability of funding will increase the amount of such research done, but will not have changed the underlying methodological preferences. This suggests that it is desirable to distinguish, in discussing the question, between research which did not require funding, or could have been carried out tolerably without it, and that which did require it. (Such a distinction would be relatively easy in extreme cases - an international survey versus participant observation in the researcher's normal environment - though many intermediate cases might make it unclear). To the extent that such a distinction could be made, one could then draw a line between funding which merely made a practical difference and that which changed intellectual directions. A practical difference is still a difference, but maybe not a 'distortion'. A related issue is that of whether sociologists affected, even determined, agency policy, rather than vice versa. That would not make funding agencies causally irrelevant, but it would imply that their role was that of an intervening rather than an independent variable. If funding simply facilitated work of kinds sociologists wanted to do anyway (Bulmer and Bulmer, 1981: p. 402), it would implement rather than change them. (For some material on these issues, see Platt, 1996: pp. 167-9).

To decide where to look for the impact of funding, we need to think about how and why funding agencies might affect matters. If agencies simply provided more money, impartially distributed, what would that buy? It could merely pay for a larger total number of separate projects, in which case there would be no direct influence on methods. Indirectly, however, there could be consequences. If the larger number of projects involved a larger number of individual researchers, there could be greater diversity of character than before. If the larger number of projects was achieved by the same researchers as before doing more each, each would gain more experience and technical skill faster; this might just mean efficient assembly-line production, but it seems likely that it would also lead to at least minor innovations producing technical improvements. (These two are, in fact, potentially connected, in that a continuing flow of research justifies establishing a relatively permanent research capability with advanced capital equipment and specialists in a division of labour - but once such a capability is set up it needs to justify its existence, and that may imply keeping on doing more of the same kind of research).

If the money did not simply buy more projects, it could buy more expensive projects: larger numbers of cases and more representative samples, more time in the field, more longitudinal studies, more and more highly qualified hired hands, more complex data-processing, more cross-national research ... and so on. Impartially distributed money would buy more of whatever has fundable costs. If, on the other hand, agency largesse were guided by specific methodological tastes, it would produce more of whatever those tastes favoured - but that might be the same things. Only when the guiding tastes were not for the things which are more expensive could one observe the difference between partiality and impartiality. But the really inexpensive does not need external funding, so it may not appear as funded research even if agencies are thoroughly in favour of it. (If it is funded at all, the funding is likely to take the form of fellowships rather than project grants.) These considerations point to serious practical difficulties in using outcomes systematically to evaluate the methodological tastes of the funders.

What, then, is it practically feasible to look at in order to explore the questions raised further? The first point on our initial list is not problematic; it can be taken as given. The second has become more complicated, because it has become evident that ideally we need to distinguish between discrimination between research which does and does not need funding, and discrimination among fundable projects using different methods. The fourth point has become both simpler and more complicated. It is simpler, because funding research whose methods could not be used without funding self-evidently leads to a different outcome. It is more complicated, because funding may not change the intellectual choices of the people doing the research, and so could be seen as irrelevant to the character of the discipline.

The gaps in the evidence already available relate to discrimination in foundation policy, the character of unfunded research, and the counterfactual of what would have happened in the absence of funding. Below some data are deployed which bear on these issues. They are drawn from foundation and SSRC publications and archives, journals, and interviews with participants.


Did foundations discriminate methodologically? No major foundation had the remit of supporting social-scientific research as an end in itself, though some saw doing so as instrumental to their remits. It is not surprising, therefore, that for them other ends, most often related to social policy, were formally the salient ones. When they did support social science, that was often a minor part of their total portfolio, even if the sums involved were huge to social scientists; they could and did move in and out of social science over relatively short periods, and had internal arguments about the wisdom of pursuing such an indirect and long-term strategy as improving knowledge, rather than going straight for solutions to important problems. The decision to do so was likely to be the effect of the idiosyncratic interest of particular foundation officers. This general background is not one which supports a picture of long-term trustee policy to develop a social science useful for their purposes, let alone to fund some types of social-scientific work rather than others. It is thus also unsurprising that no foundation had a general aim which related to research methods; any top-level policies they had which affected methods must have arisen for other reasons, or had the effect as an accidental by-product of other concerns. Some quotations from records exemplify these points:

It is inadvisable to attempt to influence the findings or conclusions of research and investigations through the designation of either personnel, specific problems to be attacked, or methods of inquiry to be adopted.... (RAC)
...the only way to judge a proposal is by virtue of the intellectual capacity, the record of imagination, of dedication, and of interest of the man who proposes to do this work. The specification of the problem is almost always unimportant.... (RFOH: p. 430)
Regarding the famous methodological work of the American Soldier (Stouffer et al., 1950):

...remember that the Carnegie interest in Stouffer started with the applied problem of morale. ... The methodology was Stouffer's interest ... appropriations from the Carnegie Corporation ... were rarely ... for straight methodological development, they were always focused on an important social question. (CCOH: p. 77)
The foundations did play a significant role in the funding of quantitative work, and of the development and diffusion of quantitative methods, but it does not follow that they were thereby showing a bias in that direction. Two points demonstrate that they were not. The first is that they quite often gave grants in such a form that they had no control over what they were used for; the second is that they also funded much qualitative, and indeed non-empirical, work. We do not have numerical data on the proportions of work of different kinds funded, but unsystematic material is quite sufficient to show that if they wanted to discriminate in favour of quantitative research they were strikingly inefficient in implementing this policy. For instance, Ford supported work in social theory under the direction of Talcott Parsons, and the compilation of cases (which became Junker, 1960) for training in methods of field observation; Carnegie funded Toward A General Theory of Action (Parsons and Shils, 1951), and Making the Grade (Becker et al., 1968).

The character of unfunded research is addressed by analysis of a sample of articles from the generally recognized 'major' journals for the period 1923-88. The sample is all articles appearing in alternate issues of the American Journal of Sociology, American Sociological Review and Social Forces in years ending with three or eight. This is not, and does not claim to be, representative of research in American sociology. For the argument, however, it does not need to be, since it is generally agreed that the tendencies imputed to funding are most salient in hegemonic mainstream work, and in articles rather than books; of that it does claim to be a representative sample. This sample should, therefore, make the strongest case for the impact of funding. Each article has been coded as funded or not[3] and, where funded, the source has been categorized as foundation, governmental or other. (The tables overestimate the number of projects receiving funding, because for some articles the funding recorded was a student stipend rather than financing the data). Second, the content of each article has been categorized as empirical or non-empirical. For the empirical, the methods used have been divided into quantitative, qualitative and mixed. (Since most writers in this area have talked simply of quantitative versus qualitative work rather than making more sophisticated distinctions, we follow them in that). It has also been noted whether the data used were the author's own, and whether they were collected on the author's own college. The non-empirical have been divided into those which favoured quantification and 'science', those which did not, and those which were neutral. What do the results show?
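The tabulation behind the comparison of funded and unfunded articles can be sketched in code. This is a minimal illustration only, with invented records and a hypothetical function name; it assumes simply that each article has been coded for funding and method, and that mixed cases are dropped when computing percentages, as in the tables below.

```python
# Hypothetical coded records: each article tagged as funded or not,
# with its method coded 'quantitative', 'qualitative' or 'mixed'.
articles = [
    {"funded": True,  "method": "quantitative"},
    {"funded": True,  "method": "quantitative"},
    {"funded": True,  "method": "qualitative"},
    {"funded": False, "method": "quantitative"},
    {"funded": False, "method": "qualitative"},
    {"funded": False, "method": "mixed"},
]

def pct_quantitative(records, funded):
    """Percentage of quantitative articles among the (un)funded,
    with mixed-method cases omitted from the base."""
    pool = [r for r in records
            if r["funded"] == funded and r["method"] != "mixed"]
    quant = sum(r["method"] == "quantitative" for r in pool)
    return round(100 * quant / len(pool))

print(pct_quantitative(articles, funded=True))   # 67
print(pct_quantitative(articles, funded=False))  # 50
```

The same routine, run separately within each decade of coded articles, would yield the cells of a table comparing quantification rates for funded and unfunded work.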

First, the proportion of empirical articles has risen steadily; the largest jump is from the 38% of the 1920s to the 64% of the 1930s, and thereafter it increases by about 5% per decade until the 89% of the 1980s (see Table 1). In the earlier years, a negligible proportion of even the empirical articles appear to arise from funded work; in the 1930s and 1940s it was about 20% of the empirical, and almost none of the non-empirical. Thereafter it rose for the empirical to 40% in the 1950s, nearly 60% in the 1970s and nearly 70% in the 1980s, while for the non-empirical it reached around 20% in the 1960s and 1970s before leaping to over 60% (of a very small total) in the 1980s.

Over the same time-span, the proportion of articles which are quantitative or favour quantification has risen strikingly (see Table 2), to include a substantial majority of the total by the 1960s. The rise has not, however, been even, with relatively large changes between the 1940s and the 1950s, and the 1950s and 1960s.

Table 1: Proportions of sample articles not externally funded.

            Empirical articles     Non-empirical articles
Date           %      N               %      N
1920s         94     50             100     81
1930s         81    102              98     57
1940s         83    121              98     54
1950s         60    116             100     41
1960s         41    108              84     25
1970s         41    134              77     26
1980s         31    126              38     16

Table 2: Proportions of sample funded and unfunded empirical articles whose data were quantitative.*
            Funded articles     Unfunded articles
Date           %      N            %      N
1920s        (50)     2           41     41
1930s        (89)     9           47     73
1940s         60     20           35     88
1950s         74     43           55     65
1960s         92     68           67     42
1970s         96     75           78     50
1980s         92     85           74     35

* Articles with data of mixed character are omitted here.

How well does this pattern fit an explanation in terms of funding? We look first at data internal to the sample of articles. A higher proportion of the empirical than of the non-empirical articles was funded. In no year did more than 10% of them have foundation funding; the main source of variation is in the proportion with government funding, which rose markedly in the later 1960s when government became the largest funding source, a position which it has maintained. A very high proportion of the funded articles use quantitative methods; it has been 90% or more since the 1960s. However, as Table 2 shows, this does not distinguish funded articles sharply from unfunded ones, since these too have been predominantly quantitative since the 1950s, although the proportion has been lower.

Looking at the same data the other way round, we see that a higher proportion of quantitative than non-quantitative articles has received funding. It might be suspected that those without funding would be less likely to have been able to collect their own data, and a comparison between the funded and unfunded research very broadly supports that interpretation. However, there is a marked tendency from the later 1960s for the proportion of funded researchers using their own data to decline, until in 1988 it was only 44%, the same figure as that for the unfunded. That clearly requires some explanation other than the mere presence of funding.

No account which stresses foundations as a funding source can do much to explain the overall pattern, since foundations have simply not funded enough of the articles[4]; it would need to be discussed in terms either of government funding or of all sources combined. The proportion of empirical articles has increased rather smoothly over time, in a way which does not correspond to the more abrupt discontinuities in proportions funded. Even if the two had moved together, however, it would have been as compatible with an account which saw sociologists raising the money to do what they wanted as with one which saw funding as the motor of change - and even in 1988 a quarter of the empirical articles reported no funding, so it was possible to do publishable studies with limited resources. The increasing proportion of quantitative work among both funded and unfunded articles makes funding a poor explanation of quantification; the higher level of quantification among funded work could easily be accounted for by the greater costs of collecting quantifiable data - or of high-technology processing of available data.

One reason why numbers of highly quantitative empirical articles have not required funding is that their authors have not been using data which they collected themselves. In some cases this is demographic work using Census and related data. More recently, however, secondary analysis of large-scale data sets has become extremely common. This reflects the growth of computer technology, which makes access to existing data-sets easy. It also reflects the emergence of social research institutions: the data bank, and the public-use data set such as the General Social Survey. In combination, these open up a wide range of possibilities to researchers who could not themselves have collected such data. Clearly, for instance, it is almost inconceivable that a graduate student's thesis could initiate a study using a national representative sample.

A qualitative scan of the journal articles over the period rapidly shows that a picture of the impact of funding which sees no alternative to accepting funding if one wants to undertake quantitative work, and sees that funding as automatically leading research in directions inconsistent with disciplinary interests, is misleading. There are, especially in the middle part of our period, ways in which sociologists managed to carry out quantitative empirical work without any significant external funding. On the one hand, they did relatively small, local studies, often using their students as either the subjects or the research staff. It is not without reason that the sociology of sophomores has been mocked, but it certainly took place, and sometimes student subjects were used even when there was external funding. On the other hand, many sociologists also made secondary use of data already collected by others.

This has, of course, always been normal practice among demographers, who have applied increasingly sophisticated quantitative techniques to Census and other official data. It may reasonably be presumed that, in the absence of external funding, these would have been among the typical modes of empirical research; other patterns would be the one-person project drawing heavily on personal experience, and work using historical and other documentary sources. Work of a more parochial character would be likely to mean that researchers were compelled to rely more on the co-operation of officials who control access to situations and records, which would have had its own - undesirable - effects on what was done.

The careers of a number of the leading quantitative sociologists confirm that it was not necessary to have funding to pursue that line, and also show that their interest arose quite independently of any funding agency's attempt to stimulate it. In the older generation, Ogburn was a member of a cohort of Columbia students who were inspired by Giddings (and trained by econometrician H. L. Moore) to take up statistical methods, and who dispersed to train their students along the same lines; Chapin was another particularly important member of this cohort (Turner, 1994: pp. 42-6). They held to these commitments at a period when specific funding for such work did not exist, and it did not offer an obvious career strategy. Stouffer trained as a sociologist but, inspired by Ogburn (Hauser, 1961: p. 364), became interested in quantitative methods at a very early career stage; a year's fellowship in London to study with British statisticians was significant in his development, but that was a general fellowship programme where many other holders used them for quite different purposes, so the interest was strictly his own. Lazarsfeld was a forerunner of several who came to sociology from backgrounds in mathematics or natural science. Blalock, Coleman and Sewell followed the same pattern[5], and either spontaneously carried on using the skills they had acquired or were recruited by colleagues to work on problems for which they were unusually well equipped. Leo Goodman started as a sociology major, but was encouraged to learn more mathematics to understand statistics; he ended with a joint major, and then did graduate work in mathematics but with a view to sociology. At that stage he held an SSRC fellowship, but again a general-purpose one.

SSRC ran summer workshops on mathematics in the 1950s, funded by Ford but initiated entirely from within the academic community. The initiative started with the American Mathematical Society, the Econometric Society and the Institute of Mathematical Statistics; they approached other learned societies, including the American Sociological Association. It was agreed that materials to raise standards should be developed, and the SSRC was approached. The numbers actually involved in the workshops were not large, though they probably had multiplier effects through their example and teaching. In 1953 only five sociologists were among those admitted and awarded study grants, though one of those was Otis Dudley Duncan. He reports (Duncan, 1984) that he was attracted to the workshop because he had always been oriented to quantitative training, starting an undergraduate minor in maths, but without initially finding very good teaching: '... for me it just sort of solidified my grasp of what I could do already'.

Foundation funding cannot account for the many applications from junior people for these workshops, or the interest shown by the more senior fellows at the (Ford-funded) Center for Advanced Study in the Behavioral Sciences in optional mathematical courses quite distinct from the activities for which they were chosen as fellows. Lazarsfeld's programme of methodological codification and production of advanced training materials must have helped to make method more salient for sociologists generally; it could not have been carried out without significant funding, but it was very much his baby. There was by the 1950s a common, if not universal, feeling among sociologists themselves that it was important to improve technical skills in quantitative research[6]. In funding much quantitative work foundations were, thus, not so much pursuing their own agendas as taking the advice of leading academics who identified this as an unmet need.


What, then, are the general conclusions to be reached? It has been repeatedly shown that US mainstream journal sociology has become increasingly empirical and quantitative in character. (More specialized journals may, of course, have had a different character.) That may be taken as given. There can be little doubt that the total, and proportionate, amounts of external funding for sociological research have also increased over time, although its sources have fluctuated markedly, moving from Rockefeller to governmental dominance and with spasmodic but important interventions by other major foundations. The minimal condition for a causal relationship between the two is, thus, met, and we cannot doubt that the availability of funding made possible some research which could not otherwise have been done, and so shifted the balance of the whole. This paper nonetheless argues that the strong version of the causal proposition linking them cannot be supported, because the counterfactual conditional required will not stand up. Our theses are:

However, sociology in the USA has a long enough history, sufficiently closely entangled with other academic and worldly developments, for it to be scarcely possible to construct a plausible alternative world where only the funding factor is missing.[8] That difficulty, though, is one which the writers criticized here share with this paper.

Hawthorn (1991: p. 158) has pointed out that discussion of alternative possibilities 'should not require us to unwind the past. And the consequences we draw from these alternatives should initially fit with the other undisturbed runnings-on in that world...'. We have tried to meet this criterion, with what success the reader may judge. Some of the interpretations we criticize have had implicit alternative worlds whose implications they have not seriously attempted to spell out. Close examination suggests that the counterfactual state of affairs envisaged is as much the one which would be preferred as the one deemed most likely.

One may infer that it is one which would have more theoretical work (of the right kind!), more qualitative research styles, more macroscopic and historical work, more work which is critical in its assumptions and more which is not directly useful to any part of the current power structure (but is useful to its opponents?). Thus there is an ideological agenda which is as important to the overall interpretation as the data, even if not all writers are as frank about this as Fisher; 'distortion' implies more than just difference from what might otherwise have happened.

Some of the features preferred might seem desirable from a purely disciplinary perspective, which enables the critics to invoke that perspective in support of the claim that external funding has introduced distortions. But that is hardly consistent if they are themselves in favour of development in directions which favour their political agenda, rather than a purely disciplinary one. This shows up in a particularly odd form when they appear to criticize more 'scientific' work on the ground that it is more accurate and predicts better, and thus supports the power structure. At that point political as well as historical counterfactuals become relevant, and one may wonder whether the preferred types of society might not also want or need research as effective as that provided by US capitalism.[9] The argument is not that sociology should not be related to political concerns, but that for clarity of thought it is desirable to distinguish these from concern about such matters as the plausibility of alternative methods for reaching well-founded empirical conclusions.

For this paper, the political agenda is relevant only because it seems to have led some writers in directions unhelpful to explanation. To explain what actually happened, one needs to consider alternative possible lines of development chosen for their historical plausibility, not their ideological attractiveness. We conclude that, in the field of research methods, the counterfactual possibilities sketched above are more plausible than the alternative worlds implicit in the critics' accounts. If they are, it follows that the role of external funding has been important in relation to methods, but not so determinative or so independent of factors internal to the discipline as some writers have suggested, so that the difference made cannot be characterized as a distortion.


1 The National Science Foundation (NSF) did indeed have a scientistic, pro-quantitative policy, but for special reasons. It was set up in 1950, and initially the social sciences were excluded from its explicit remit. Moves were made to introduce elements of social science (Alpert, 1954; 1955), but the external political climate continued to encourage caution, and the internal climate was dominated by natural science. The net result was that social science there adopted 'a strategy of protective coloration, of allying one's cause with stronger others...'; this, together with the political need to differentiate social science from socialism or social reform (with which it was often confused), led to choice of the solution 'to emphasize the similarities between social and "natural" science by focusing on methods of inquiry...' (Riecken, 1983: pp. 40, 41). As an agency policy this did not so much express a taste as constitute a perceived condition of survival; the alternative was seen as no NSF money for social science. The consequence was, however, undoubtedly discriminatory by method. Interestingly, this was in the context of basic research, which was NSF's distinctive mission, not the policy-oriented work with which leftish commentators have traditionally associated quantitative methods, and which Riecken points out is the domain of other government agencies.

2 There are works on other aspects of Rockefeller philanthropy, several taking a line similar to Fisher's (Alchon, 1985; Brown, 1979).

3 A problem for the adequacy of the data is caused here by the Laura Spelman Rockefeller Memorial policy that their support should not be publicly mentioned (Bulmer and Bulmer, 1981: p. 382); this, however, affects only the 1920s.

4 This picture is supported by Robinson's estimates (1983: p. 36), which show the proportion of funding provided for social-science research by foundations declining from 22% in 1956 to 5% in 1980, while government funding rose from 31% to 61%. (The remainder is university funding).

5 Blalock's undergraduate degree was in maths with a physics minor (Blalock, 1988: p. 108); Coleman started in chemical engineering (Coleman, 1984); Sewell majored in sociology but also had met the prerequisites for medical school with the intention of becoming a physician (Sewell, 1983).

6 Why this should have been so is an issue which this paper cannot address; for some discussion relating to it, see Platt, 1996.

7 cf. Crawford and Biderman (1970: p. 76): '...publications have not been moving away from an academic-centred pattern toward one expected of a bureaucracy-serving field ... Federal sponsorship seemingly has been accommodated more to academic sociology than academic sociology has accommodated to its federal sponsors ... one would conclude from these data that [science rather than the state] possessed the preponderance of power'.

8 It would be interesting, in future research, to compare the situation in Britain, or in other countries where the funding pattern has historically been somewhat different. Such a comparison would be complicated both by differences in national intellectual traditions and by structures of cross-national sociological hegemony.

9 The US left critique of quantification pays curiously little attention to the public arguments from the right in the 1950s which saw scientism as virtually subversive, because of its absence of value-commitment to Americanism (Platt, 1986; 1994). Left and right have sometimes shared an objection to styles of work which do not guarantee support for their preferred conclusions.


Archival References
CCOH - Oral History Archive, Columbia University: Carnegie Corporation Oral History Project, Donald Young

RAC - Rockefeller Archive Center, 'Program and policies in social sciences', 1/3/29, p. 29039, RG 3.1: 910: box 1, folder 1.

RFOH - Oral History Archive, Columbia University: Rockefeller Foundation Oral History Project, Warren Weaver.

AHMAD, Salma (1991) 'American Foundations And The Development Of The Social Sciences Between The Wars: Comment On The Debate Between Martin Bulmer And Donald Fisher', Sociology, vol. 25, pp. 511- 20.

ALCHON, Guy (1985) The Invisible Hand Of Planning. Princeton NJ: Princeton University Press.

ALPERT, Harry (1954) 'The National Science Foundation And Social Science Research', American Sociological Review, vol. 19, pp. 208-11.

ALPERT, Harry (1955) 'The Social Sciences And The National Science Foundation 1945-1955', American Sociological Review, vol. 20, pp. 653-661.

BECKER, Howard S., GEER, Blanche & HUGHES, Everett (1968) Making The Grade. New York: Wiley.

BLALOCK, Hubert M. (1988) 'Socialization To Sociology By Culture Shock', in Matilda White Riley (editor) Sociological Lives. Newbury Park, CA: Sage.

BROWN, E. R. (1979) Rockefeller Medicine Men: Medicine And Capitalism In America. Berkeley: University of California Press.

BULMER, Martin (1984a) 'Philanthropic Foundations And The Development Of The Social Sciences In The Early Twentieth Century: A Reply To Donald Fisher', Sociology, vol. 18, pp. 572-9.

BULMER, Martin (1984b) The Chicago School Of Sociology. Chicago: University of Chicago Press.

BULMER, Martin and BULMER, Joan (1981) 'Philanthropy And Social Science In The 1920s: Beardsley Ruml And The Laura Spelman Rockefeller Memorial, 1922-29', Minerva, vol. 19, pp. 347-407.

COLEMAN, James S. (1984) Interview with author, 30th May 1984.

CRAWFORD, Elizabeth T. and BIDERMAN, Albert D. (1970) 'Paper Money: Trends Of Research Sponsorship In American Sociology Journals', Social Science Information, vol. 9, pp. 51-77.

DUNCAN, Otis D. (1984) Interview with author, 9th August, 1984.

FISHER, Donald (1980) 'American Philanthropy And The Social Sciences In Britain, 1919-1939: The Reproduction Of A Conservative Ideology', Sociological Review, vol. 28, pp. 277-315.

FISHER, Donald (1983) 'The Role Of Philanthropic Foundations In The Reproduction And Production Of Hegemony: Rockefeller Foundations And The Social Sciences', Sociology, vol. 17, pp. 206-33.

FISHER, Donald (1984) 'Philanthropic Foundations And The Social Sciences: A Response To Martin Bulmer', Sociology, vol. 18, pp. 580-7.

FISHER, Donald (1993) Fundamental Development Of The Social Sciences. Ann Arbor: University of Michigan Press.

GOULDNER, Alvin W. (1971) The Coming Crisis Of Western Sociology. London: Heinemann.

HAUSER, Philip M. (1961) 'Obituary Of Stouffer', American Journal of Sociology, vol. 66, pp. 364-365.

HAWTHORN, Geoffrey (1991) Plausible Worlds. Cambridge: Cambridge University Press.

JUNKER, Buford H. (1960) Field Work: An Introduction To The Social Sciences. Chicago: University of Chicago Press.

LIPSET, Seymour M. and LADD, Everett C. (1972) 'The Politics Of American Sociologists', American Journal of Sociology, vol. 78, pp. 67-104.

McCARTNEY, James L. (1970) 'On Being Scientific: Changing Styles Of Presentation Of Sociological Research', The American Sociologist, vol. 5, no. 1, pp. 30-35.

PARSONS, Talcott and SHILS, Edward A. (editors) (1951) Toward A General Theory Of Action. Cambridge, MA: Harvard University Press.

PLATT, Jennifer (1986) 'Qualitative Research For The State', Quarterly Journal of Social Affairs, vol. 2, pp. 87-108.

PLATT, Jennifer (1994) 'Scientistic Theory And Scientific Practice', paper given at the XIII World Congress of Sociology, Bielefeld.

PLATT, Jennifer (1996) A History Of US Sociological Research Methods 1920-1960. Cambridge: Cambridge University Press.

RIECKEN, Henry W. (1983) 'The National Science Foundation And The Social Sciences', Items vol. 37 (2,3), pp. 39-42.

ROBINSON, Marshall (1983) 'The Role Of The Private Foundations', Items, vol. 37, pp. 35-39.

SEWELL, William H. (1983) Interview with author, 3rd September 1983.

STOUFFER, Samuel. A., GUTTMAN, Louis, SUCHMAN, Edward A., LAZARSFELD, Paul F., STAR, Shirley A. and CLAUSEN, John A. (1950) Measurement And Prediction. Princeton, N.J.: Princeton University Press.

TURNER, Stephen P. (1994) 'The Origins Of "Mainstream Sociology" And Other Issues In The History Of American Sociology', Social Epistemology, vol. 8, pp. 41-67.

USEEM, Michael (1976) 'Government Influence On The Social Science Paradigm', Sociological Quarterly, vol. 17, pp. 146-61.
