Copyright Sociological Research Online, 2001


John R Schmuttermaier and David Schmitt (2001) 'Smoke and Mirrors: Modernist Illusions in the Quantitative versus Qualitative Research Debate.'
Sociological Research Online, vol. 6, no. 2.

To cite articles published in Sociological Research Online, please reference the above information and include paragraph numbers if necessary

Received: 2001/1/29      Accepted: 2001/7/15      Published: 2001/8/31


The debate about the selection and proper use of theory, and their impact on validity, is actually an example of sleight of hand. It is a paradigm conflict posing as a debate about substantive issues. Within the diversionary debate, qualitative (inductive) research has been critiqued and declared a-theoretical. This paper engages with this claim and the assertions about the superiority of quantitative (deductive) research and concludes that both positions are redundant. At the outset, all research is deductive; once the data begin to be interpreted and conclusions manufactured, it proceeds as a process driven by a deductive-inductive dialectic. This dialectical manufacturing process is universal and inescapable. At best, a systematic approach to data collection and interpretation only allows for the subsequent partial deconstruction of the research construct. That construct may be valid within the confines of its own manufacturing process, but this does not open the way for claims of the imprimatur of any broader validity.

Deduction; Dialectic; Generalization; Induction; Manufacturing Process; Theory; Validity

Induction vs Deduction - Smoke and Mirrors

If we take for granted what the postmodern social theorists tell us, then the concept and reality of validity, in its universal and absolute guise, in fact in almost any guise, is dead. However, with nothing to replace it, research and researchers are consigned to a relativist realm of constructed possibilities rather than discoveries of the 'real'. This paper attempts to show that the methodology and validity debate diverts attention from the possibility that all researchers see what they want to see and then choose a method that will enhance their view. Arguments about the best method and theory obscure the limitations by celebrating the claimed advantages. These limitations are central to this paper.

Blaikie (1993:131) notes that there are four main 'research strategies' that define the type of research that researchers conduct, 'Inductive, Deductive, Retroductive, and Abductive'. This paper is mainly concerned with the first two, and claims that the other two are artefacts constructed by authors who refuse to accept the existence of a deductive-inductive dialectic. At the heart of these two strategies is the way they are used to answer: 'what, why, and how questions' (Blaikie). They are only separated by their views on 'where the researcher begins and on the logic adopted' (Blaikie). An essential difference between the two approaches, as Blaikie points out, is in defining where the research process begins. The inductive approach is believed to commence with 'observations or [the] gathering [of] data which are then used to develop explanations' (Blaikie, 1993 p.131). The deductive approach begins with a theory, 'a hypothesis or a model which is then tested by making observations or gathering data' (Blaikie, 1993 p.131). Blaikie also argues that combining the two approaches might enable the researcher to 'capitalise on their strengths and minimise their weaknesses' (p.156). But even in this the weaknesses (limitations) remain significant and preclude a clear view of reality.

Blaikie's (1993) arguments draw on the work of Wallace (1971) and Rubinstein et al. (1984). According to Blaikie, Wallace believed that there are two 'overlapping processes' involved - 'theorising and doing research' is one, and 'induction and deduction' is the other. He also claims that each of these processes is related to the other and operates in a number of 'cyclical stages' (p.156). So Wallace is in effect saying that induction and deduction operate hand in hand. Rubinstein et al., on the other hand, argue that 'science is a process that progressively explores the world by means of a systematic alternation between induction and deduction, an iterative process for refining and testing ideas' (p.159). At the fundamental level, this is a very accurate summation of the research process. Blaikie, however, notes that both views are limited, and in particular that Wallace's view 'makes no provision' for the 'socially constructed character' of social reality (Blaikie, 1993 p.158). Those who claim they can gain access to the reality of the researched often make this critical point.

This paper seeks to clarify and extend the ideas of Wallace (1971) and Rubinstein et al. (1984). We suggest that the debate about the independent existence of induction and deduction is an illusion. At the heart of this debate is the notion that data, if properly collected and processed, are able to speak for themselves. However, data can never do this. The interpretation of data is like a piece of furniture; it is the product of the carpenter/researcher, not of the tools used. Tools help but have no life or capacity to construct by themselves. When we select a topic and choose our tools we begin the construction process. This also means that we begin with theory in mind, no matter how hazy or incomplete. Therefore, all research begins with deduction, as the choice of the topic/question is underpinned by conscious or unconscious answers that are seeking proof. Nippert-Eng (1996) agrees, and points to the work of Kuhn (1970) in support of this view:
'...is driven by the ways scientists think about things' (Nippert-Eng, 1996:283).

Strauss's grounded theory approach was chosen as a reference point for this paper because, it is argued, a qualitative inductive approach offers a set of procedures that are 'systematic' and, as such, claims to marginalise the imposition of a pre-existing theoretical structure (Strauss and Corbin, 1990). However, even this method misses the important initial step: the choice of a topic or question initiates and begins to manufacture answers and/or theory. The process does not begin with the interviews. Topic selection is not a neutral part of the process; it is an indication of intent as much as it is of interest. Are methodology and theory, therefore, discrete entities, separate parts of the process, or is the separation artificial? We argue that researchers can, and do, modify their starting position through exposure to data. In other words, as they begin to interpret their data they begin to modify the theory. We, like Wallace and Rubinstein et al., argue that this involves a dialectical process of deduction-induction - a deducto-inducto dialectic.

Grounded Theory

According to Strauss and Corbin (1990), grounded theory generates 'testable' theories. The rationale that underpins grounded theory is that the substantive theory emerges from the data, through a process that encourages the data to 'speak for itself'. The whole approach is predicated on the desire to discover the 'real' rather than to artificially construct the 'real'. Hence the emphasis is on 'naturalistic observations, open-ended interviewing, the sensitizing use of concepts [and] a grounded (inductive) approach to theorizing' (Denzin, 1994:508). In conclusion, Strauss and Corbin (1990:24) claim that grounded theory 'is a qualitative research method that uses a systematic set of procedures to develop an inductively derived grounded theory about a phenomenon'. Strauss and Corbin use the term 'constant comparative analysis', to point out that grounded theory requires the emerging theory to be able to account for a problem that is relevant for the people being studied (Becker, 1993). This is done through 'continuous interplay between analysis and data collection' (Strauss and Corbin, 1994:273). An integral component of this interplay is theoretical sampling.

Theoretical Sampling

According to Neuman (1997), the samples taken by field researchers, in contrast to survey research sampling, are purposive rather than random because they are based on a:
smaller, selective set of observations from all possible observations. It is called theoretical sampling because it is guided by the researcher's developing theory (Neuman, 1997 p.370).

However, Hammersley (1990) goes one step further by arguing that theoretical sampling involves selecting cases in a way that 'most effectively' develops the emerging theory. It could be argued that this involves shaping rather than allowing the real to emerge. Charmaz (1988) and Hammersley argue that as researchers analyze and develop their theoretical categories, they quite often discover that they need to continually sample more data to 'elaborate a category'. This supports our position of a deduction-induction dialectic. The reason for this, according to Charmaz, is because researchers 'do not know in advance what they will be sampling...Since grounded theorists systematically build their observations, theoretical sampling is part of the progressive stages of analysis' (p.125). This progressive interplay between data and theory underlines the operation of a deducto-inducto dialectic. This, and the manufacturing process, become clearer as theoretical sampling is critiqued.

Theoretical sampling becomes necessary when the researcher's existing data do not 'exhaust' the category being developed. The researcher will know this by the way in which the interviewees tell their stories. As a theme begins to emerge, the researcher actively sets out to 'discover' the extent of the theme's 'significance' for each case and its representativeness across cases. Discovery of the theme across cases highlights the theme's relevance as a core category. At this point, the researcher requires more data to 'saturate' and exhaust the category. This is achieved through sampling more relevant data, until the point of 'theoretical saturation' (Hammersley, 1990). Although the researcher can never see the complete picture, they will know when they have reached a point of 'saturation' when nothing new emerges. In other words, the researcher feels comfortable that they have 'heard enough'. However, the 'discovery' aspect is not clear. What is meant by 'actively sets out'? How does the researcher 'discover' significance? Who and what determines if the theme is significant? The answers to these questions hinge on the researcher's interpretations, which are often read against interlineal reference points that they bring to the process. If this is so then does the researcher consciously, or unconsciously, shape the direction of the interviews? The answer to this question is implied by the use of theoretical sampling, such that the researcher consciously shapes the direction of the interviews. For example, in theoretical sampling the number of cases is not important as the focus is on the 'potential' of each case to assist the researcher to arrive at 'theoretical insights' (Taylor and Bogdan, 1984). Bryman (1993) seems to support this. He argues that the researcher only needs to interview as many individuals as is required in order to 'saturate the categories being developed' (p.117). This is really only another way of saying constructed:

After completing interviews with several informants, you consciously vary the type of people interviewed until you had uncovered the full range of perspectives held by people in whom you are interested. You would have an idea that you had reached this point when interviews with additional people yielded no genuinely new insights (Taylor and Bogdan, 1984:83).

Interviews are interactive forms of communication between the researcher and the interviewee: not one-way from the interviewee to the researcher. Therefore, as the research progresses, 'relational' or 'variational sampling' is used to 'discover' data which 'confirms, elaborates and validates' links between categories (Strauss and Corbin, 1990). Turner (1981, cited in Bryman, 1993) argues that it is at this point that the researcher will look to developing hypotheses, or 'propositions', about these links. In other words, in theory the researcher moves from induction to deduction. Turner maintains that the researcher should seek to ascertain the conditions in which these relationships emerge, and then 'explore the implications of the emerging theoretical framework for other, pre-existing theoretical schemes which are relevant to the substantive area' (Bryman, 1993:84). The last stage of theoretical sampling involves 'discriminate sampling'.

With discriminate sampling the researcher consciously selects data which tests the validity, or representativeness, of the central category and the theory as a whole (Turner, 1981). In other words, the researcher actively seeks to find the best fit. Bryman (1993:84) notes that throughout this process theories are 'derived' (induction), 'refined', and 'tested' (deduction), and gradually developed into 'higher levels of abstraction' as the researcher nears the end of the data collection stage. This process again points to a deducto-inducto dialectic. Yin (1989:53) refers to theoretical sampling as 'replication logic'. According to Yin (1989:53), the researchers must be careful in their selection of cases so that each case either '(a) predicts similar results (a literal replication) or (b) produces contrary results but for predictable reasons (a theoretical replication)'. Yin (1989:54) clarifies this:

An important step in all of these replication procedures is the development of a rich, theoretical framework. The framework needs to state the conditions under which a particular phenomenon is likely to be found (a literal replication) as well as the conditions when it is not likely to be found (a theoretical replication). The theoretical framework later becomes the vehicle for generalizing to new cases...if some of the empirical cases do not work as predicted, modification must be made to the theory.

Unfortunately, Yin does not elaborate on how this selection process takes place. In other words, how does the researcher know in advance if a case is appropriate or not? The answer must be in the deductive realm! This notwithstanding, theoretical sampling is considered a qualitative process that determines the nature and size of the sample and sharpens the emerging theoretical framework. Through this process the researcher purposively looks for cases which support the developing concepts and categories in order to augment the patterns and themes, which aid in building a theoretical framework. This indicates that the researcher is actively engaged in a process of self-reinforcing, or manufacturing, of what it is they want to see. This is revealed by the premise associated with theoretical sampling: the researcher actively looks for cases that they consider are most 'representative' of the group being studied, and which elaborate and confirm the theoretical propositions through a process of 'replication'. Through this process, the emerging theory or theories are tested and refined until 'theoretical saturation' is attained. The goal is to reach a point of understanding that gives the researcher some assurance that they are able to see, as well as is possible, what the researched see. Yet the researcher can never see the complete picture, only what the researched, and the research process, allows them to see. Therefore, theoretical sampling is a tool that only allows the researcher to focus in on the allowed 'reality' of the people being researched, and that of the researcher.


Denzin (1988) argues that although Strauss's approach to grounded theory attempts to convey everyday concepts and their meanings, it 'may move too quickly to theory, which becomes disconnected from the very worlds of problematic experience' (p.432). This serves to reinforce the claim that this process results in a manufactured reality. Although the standardised 'theoretical sampling' technique may go some way towards enabling its impact to be deconstructed, it can never eliminate the intrusion of a wide range of hidden variables inherent in interviewing and interpretation. Theoretical sampling involves selecting cases which are considered by the researcher as most representative of the sample and therefore, of the emerging theory. Any deviant cases, or 'isolated variables abstracted from their context', as Kvale (1996) puts it, that do not support the emerging theory are of little use to the researcher in this instance. Unfortunately, the researcher can never be sure of the significance of these deviant cases, and must base their judgments on their ability to interpret their relevance. This is inevitable, no matter which approach is adopted. The best that any researcher can hope for is to come close to the perceived reality of the researched, and one that the researcher is prepared to accept.

The Debate

Theory or Proposition?

There are a number of criticisms levelled at qualitative research. One of the most frequent is the tendency for it to lean towards 'a-theoretical' investigations (Bryman, 1993:86). Qualitative researchers who fail to infuse 'theoretical elements' into their research are generally the targets of this criticism (Bryman, 1993:85). The reason why many qualitative researchers are disinclined to begin with a pre-existing hypothesis is that they believe doing so will result in a loss of touch with the subject's 'real world' (Bryman, 1993 p.85): that they will impose on it and manufacture a skewed version of reality. In this context, inductive grounded theory is considered objective. However, on closer examination this appears suspect.

Bryman (1993) notes that qualitative research generally insists that 'contextual understanding' be grounded in the issue being studied; this concern with uniqueness tends to restrain comparison with other situations and so 'discourages theoretical development' (p.86). Grounded theorizing rests on the researcher not bringing any 'theoretical elements' with them to the research process. This is an unrealistic proposition, as researchers always bring their ideas with them to every stage of the research. This inevitably affects the shape and direction of the primary goal of grounded theorizing. Silverman (1994:153) argues that many field researchers attempting to 'cloak' their research in the guise of grounded theory have 'sidestepped' the issues of 'validity', or 'representativeness', and 'generality', by emphasising a concern to 'generate rather than to test theories'. However, Strauss (1987a), and Strauss and Corbin (1990), argue that grounded theory has the potential for testing theory. They argue that this can be achieved by combining qualitative and quantitative methods in grounded theorizing, with an emphasis on the Hypothetico-Deductive method (see Hammersley, 1990 and Stern, 1994 for a more detailed analysis of this situation). Unfortunately, this too is problematic as, according to Hammersley (1990:201), grounded theorizing approximates the Hypothetico-Deductive model in some respects but fails to do so in others. Nonetheless, the point made by Strauss (1987a) and Strauss and Corbin (1990) does have its merits, as it recognises the importance of using both methodological approaches in research, an idea that is further supported by Henwood and Pidgeon (1994). They argue that: 'philosophically speaking, theory cannot simply emerge from data. Observation is always set within pre-existing concepts...' (p.232). Herein lies the essence of the issue - all research, including the grounded theory method, begins with deduction.

Given that observations always exist within the researcher's knowledge framework, it would be foolhardy not to conduct a comprehensive review of the literature before using any approach. This is in order to 'bracket', as much as is possible, one's pre-existing ideas about the topic area of research. And yet, Streubert and Carpenter (1995:158) argue that any researcher who decides to use grounded theory as the basis for their research should not conduct a review of the literature first. Streubert and Carpenter point to Stern (1980) who argues that reviewing the literature prior to the onset of the study is 'unnecessary' and could lead to 'prejudgments and effect premature closure of ideas; the direction may be wrong; and available data or materials may be inaccurate' (Streubert and Carpenter, 1995:158). Yet, isn't this what researchers do anyway? There is no way of extracting our judgments and expectations (pre or post) about what we hope to achieve, from the research. These may not be as clear as after the literature review but they will still be there.

Stern suggests that the literature review should follow, or proceed 'simultaneously with data analysis', in order to 'fill in the missing pieces in the emerging theory' (Streubert and Carpenter, 1995 p.158). But how can one know if something is 'missing' if the researcher has no sense of the complete picture? Conversely, if it is really a 'grounded theory' this will only be known at the end. All researchers, both qualitative and quantitative, bring to their research an awareness of the problem, and subsequent questions, all of which have been influenced by pre-existing ideas, no matter how 'vaguely formed' or 'poorly articulated at the outset' (Dey, 1996:99). Once this is acknowledged it is a short step to recognising that the techniques of induction and deduction are not about choices of one approach over another, but about using one and then the other at different stages of the research process. The argument that one approach has more validity than the other is illusory, because induction and deduction are bound by a dialectical process: they do not operate independently.

The recognition of 'pre-existing ideas' or 'concepts' highlights a fundamental problem with grounded theory: it fails to acknowledge 'implicit theories which guide work at an early stage' (Silverman, 1994:47). Grounded theorizing 'tends to discount the conceptual significance of the ideas we bring to the analysis, and the wider ideas we have to relate to it' (Dey, 1996:267). As we have pointed out, qualitative research that uses semi-structured questionnaires, for example, is in effect focusing on a topic in which specific questions are often generated from prior ideas or theories, posing as interest. Dey (1996:99) points out that no matter how 'non-directive' the interview may be, it has to be conducted 'with some research purpose in mind'. It also involves the interactive construction of narrative that is bound by time and place. Given this likely scenario, Yin (1989:20) argues that for qualitative research using the case study approach, it is necessary to conduct a review of previous research to 'develop sharper and more insightful questions about the topic' (Yin's emphasis). It is at this point that induction interacts with deduction. Yin's argument also seems to support this view by stating that qualitative research, and in particular that of grounded theory, is theory-laden prior to the theory-building stages espoused by the original model discovered by Glaser and Strauss (1967).

Denzin (1994) argues that too much emphasis has been placed on the issue of prior theory: 'This preoccupation with prior theory can stand in the way of the researcher's attempts to hear and listen to the interpretive theories that operate in the situations studied' (p.508). Yet, it is always there, and so we need to acknowledge its place as an inevitable part of the research process. Nonetheless, Denzin's point is well taken. The importance of interpretation, in the construction of the realities of those being researched, reveals that these 'realities' are manufactured. However, the fundamental problem of 'theory' does not reside in the criticism of a particular methodological approach as such, but in the belief that research should be true to either induction or deduction. Dey (1996:267) argues that 'purity of procedure' should take a back-step to a more 'pragmatic...practical orientation to analysis'. Dey (1996) elaborates:

...research methods are probably much more autonomous and adaptable than some epistemologists would like to ...[it] makes more sense to consider all the available tools and not leave one half of the toolbox locked...epistemological and ontological arguments are more useful if they examine knowledge as a practical accomplishment - how research works in practice - than if they indulge in prescriptive wrangles about how we really ought to proceed (p.267).

In this context we argue that all research is dialectical: a process of deduction and induction. As Dey's toolbox analogy demonstrates, it is not the tools (methods) of choice that are important, but the skill of the researcher in using whichever tools are appropriate to the task at hand. The question that remains, therefore, centres on the nature of the product shaped by these tools. Does research expose reality, or only a fictional reality? How valid or representative is the product?

Validity, or Representation?

The issues of 'validity and reliability of research', according to Dey (1996:261), are 'crucial to all social research regardless of disciplines and the methods employed'. Dey makes a cursory distinction between 'validity' on the one hand and 'reliability' on the other. However, Winter (2000) believes this is a very important distinction, one which has divided the research community over the use of the term validity. According to Winter, 'the exact nature of 'validity' is a highly debated topic' within both the qualitative and quantitative traditions. Winter attempts to sort through this debate by deconstructing the notion of validity as it has been used in the literature, and reconstructing it in terms of 'truth issues'. From this process, Winter argues that just as there are 'different forms of validity', so too are there 'different forms of truth', and that each of these constructs is 'relative to the nature, or stage, of the research process'.

The underlying premise associated with validity is the interpretations of the researcher. These interpretations are not uninformed, however, but adhere to the basic principles of the dialectical deduction-induction process. Silverman (1994) argues that by using 'analytic induction', a qualitative method used for the purposes of 'testing a hypothesis in field research' (p.160), in conjunction with the premises of grounded theorizing, the researcher is able to 'test' the emerging themes. For example, through the systematic process of data collection and analysis the researcher chooses specific cases on 'theoretical grounds', such as selecting a case which offers a 'crucial test of a theory' (p.160). Is analytic induction then that much different from deduction? As Silverman argues, through analytic induction the researcher defines the phenomenon and then generates hypotheses (Silverman, 1994), or propositions (Corbin and Strauss, 1990). The next step is to examine the relationship between the 'proposition' and the case; if there is no relationship, then the 'proposition' is 'reformulated', or 'redefined', to reject the case (Silverman, 1994 p.161). This again appears to support the idea of a dialectical process. Silverman points to Fielding (1988:7-8) who elaborates on this process:
'Examination of cases, redefinition of the phenomenon, and reformulation of hypotheses is repeated until a universal relationship is shown'.
According to Silverman, analytic induction is the 'equivalent' of the Hypothetico-Deductive approach of quantitative statistical testing for random error variance. However, Silverman notes that in qualitative analysis random error is non-existent. Any exceptions, or 'deviant cases', are systematically removed by continually reformulating the hypotheses 'until all the data fit' (p.161). However, this form of processing sounds very much like the researcher has moved from induction to deduction. This notwithstanding, Silverman, like Strauss (1987a), and Strauss and Corbin (1990), advocates combining qualitative and quantitative approaches to grounded theorizing as a means of validation. At the end of the day this is what actually occurs; a construct is a construct no matter how elaborate or simple the process.

According to Pandit (1996), one of the fundamental purposes of Strauss's version of grounded theory is to:

...enable the researcher to think systematically about data and relate them in complex ways. The basic idea is to propose linkages and look to the data for validation (move between asking questions, generating propositions and making comparisons).

In other words, we discern order and linkages in chaos. This is precisely how research constructs. It also begins to sound like one is expected to reject chaos in favour of order: encouraged to look for something that might not be there. Stern (1994) agrees with this point. According to Stern, this model 'brings to bear every possible contingency that could relate to the data, whether it appears in the data or not' (Stern, 1994 p.220; Stern's emphasis; see Pandit, 1996 for a more precise outline of this model). In other words, seeing what we want to see: an ordered rather than a disordered reality. At the very least, it demonstrates the use of deductive techniques to aid in the development and validation of theory. It also shows that once we move past reporting we start to construct new realities. This reinforces the view that the divide between induction and deduction, just like the issue of validity and reliability, may simply be illusory. At best the research remains valid only within its context and limitations; it does not open the way for uncontested generalisation or extrapolation.

Generalization, or Extrapolation?

This is a common concern for qualitative researchers, especially for those who use case studies and/or grounded theorizing techniques in their research. The question often asked of such research is, 'how can you generalize from a single case?' (Yin, 1989:21). The response of most qualitative researchers has usually been to conduct research that uses 'multiple-case studies', such as sampling a number of sites (Yin, 1989:21; Bryman, 1995:170), or to generalize to 'theoretical propositions' rather than to 'populations or universes' (Yin, 1989:21). The aim in the latter case is to 'engender patterns and linkages of theoretical importance' (Bryman, 1995:173). Bryman (1995:173) uses Burgelman (1985:42) to explain that this form of generalization is beneficial for generating 'new insights that are useful for building theory'. This is especially relevant for the theoretical sampling technique, which demands that the researcher incorporate a systematic process of theoretical analysis and data collection by selecting specific cases which verify the emerging theory, so reinforcing what the researcher wants to see. Bryman (1993) is quick to point out that any view which:

...plays down its role in relation to the testing of theory may be missing an important strength that qualitative investigations possess. In other words, there is nothing intrinsic to the techniques of data collection with which qualitative research is connected that renders them unacceptable as a means of testing theory (p.123).

Rubin and Rubin (1995) maintain that there are 'two principles' used in qualitative research to generalize the findings of interviews to wider situations. The first of these is completeness, which is where the researcher conducts ongoing interviews in order to 'saturate' their categories until nothing new presents itself (p.73). Again, this is focusing the lens in order to see what we suspect is there. This principle is adapted from Glaser and Strauss's (1967) premise on grounded theorizing. Two features distinguish the second principle: similarity and dissimilarity. Testing for similarity involves conducting interviews at 'different sites' and under 'different conditions' in the hope of finding similar 'themes and concepts', and so validating the original findings (p.74). With dissimilarity testing the researcher interviews individuals with different 'background characteristics', or interviews individuals who are employed in different workplaces or environments to those of the original interviewees (p.74). Again, the researcher wishes to see if the same 'themes' emerge in these 'different situations' (p.74). Rubin and Rubin (1995) explain:

...sometimes dissimilarity sampling is done by choosing additional sites that have contrasting circumstances and interviewing people in each site who are in similar roles. Your reasoning is that if you change the context or setting but the same themes emerge from your interviews, you believe that what you have learned can be generalized to people in similar roles (pp.74-75).

The meaning and level of importance attached to the term generalization is usually associated with positivist approaches to research. Such research largely focuses on establishing 'rigorous causal relations under rigid experimental conditions' (Patton, 1997:258) which have been 'logically' derived 'from a possible causal law in general theory' (Neuman, 1997:67). A fundamental component demanded by this approach is that researchers remain 'detached, neutral, and objective' (Neuman, 1997). According to Neuman, this approach is considered synonymous with 'science' and is 'widely taught' as such. However, Neuman points to Dilthey (1883), who argued that there were two 'fundamentally different types of science: Naturwissenschaft and Geisteswissenschaft' (Neuman, 1997 p.68). The first is based on 'abstract explanation', while the second is based on an 'empathic understanding, or Verstehen, of the everyday lived experience of people in specific historical settings' (Neuman, 1997 p.68). It is within the second 'type of science', interpretive social science, that qualitative research locates its meaning of generalization.

Patton (1997:258) points to Cronbach (1982), who introduced the idea of 'extrapolation rather than generalization'. According to Patton, 'extrapolation involves logically and creatively thinking about what specific findings mean for other situations, rather than the statistical process of generalizing from a sample to a larger population'. The emphasis of extrapolation is that findings are interpreted in light of the 'experiences' and 'knowledge' of both the researched and the researcher, and then 'extrapolated', or 'applied', using all 'available insights, including understandings about quite different situations' (Patton, 1997:259). Once again, this signifies a deductive-inductive dialectic. Underpinning this approach is the idea that interpretation shifts from attempting to establish 'truth in some absolutist sense', which Patton argues is the aim of 'basic research', to a concern with conclusions which are 'reasonable, justifiable, plausible, warranted, and useful' (Patton, 1997:259). Patton elaborates:

Unlike the usual meaning of the term generalization, an extrapolation connotes that one has gone beyond the narrow confines of the data to think about other applications of the findings. Extrapolations are modest speculations on the likely applicability of findings to other situations under similar, but not identical, conditions. Extrapolations are logical, thoughtful, and problem-oriented rather than purely empirical, statistical, and probabilistic. (p.289)

The shift in meaning between the positivist position of generalization and the interpretivist position of extrapolation is an interesting clarification. However, the point raised by this discussion is to highlight that researchers construct realities based on what they want to see rather than what is necessarily there. This does not mean, however, that their construction of the reality of the researched is not 'valid', or 'representative' of the group studied. It simply means that the researcher's version of reality is open to further scrutiny. This is what a theory, or proposition, is supposed to do: draw inferences from the research and make statements about causality that are then generalized, or extrapolated, to another group and context similar to the one already studied. Unfortunately, any attempts to do so will always be problematic. We argue that the most that can be done is to call on further research in the hope of shedding more light on the area of research. Validity is not transferable; it remains an internal and highly compromised quality.


Only when we accept that all research begins and ends with answers will we understand that all research, of necessity, begins as a deductive process that then draws on induction. At no point are researchers able to divorce themselves from this dialectical process. Added to this is the limitation that at the heart of any research is the desire to understand something that interests us. From the very beginning, therefore, researchers are inclined to see what they want to see, and to add to this once the data processing begins. The issue of whether this corresponds with what is there remains a moot point. The method chosen cannot eliminate this because two important questions remain unanswered and are unanswerable. First, do researchers see what is there? Second, are researchers allowed to see what is there?

Research is initiated because knowledge is incomplete; otherwise, why conduct research in the first place? Nonetheless, researchers begin with an idea and then speculate about the answer. These ideas and/or speculations are generally based on what has gone before. What the researcher sees is shaped by what has been put in place by the claims of others. Methodological purity and claims about what it can achieve are really a form of self-delusion. Delusional in the sense that researchers begin with half-formed answers, and then work towards confirming what it is they see, or feel, much like a self-fulfilling prophecy. In the end, the debate about which method is the most 'correct' approach to use when conducting research only disguises the fact that all researchers set out to find what it is they want to find, regardless of the methods adopted. Therefore, any claims to universal validity are simply an illusion.


BECKER, P.H. (1993) 'Pearls, Pith, and Provocation: Common Pitfalls in Published Grounded Theory Research', Qualitative Health Research, Vol. 3, No. 2, pp. 254 - 260.

BLAIKIE, N. (1993) Approaches to Social Enquiry. Cambridge: Polity Press.

BRYMAN, A. (1993) Quantity and Quality in Social Research. London: Routledge.

BRYMAN, A. (1995) Research Methods and Organization Studies. London: Routledge.

BURGELMAN, R.A. (1985) 'Managing the New Venture Division: Research Findings and Implications for Strategic Management', in A. Bryman (1995), Research Methods and Organization Studies. London: Routledge.

CHARMAZ, K. (1988) 'The Grounded Theory Method: An Explication and Interpretation', in R.M. Emerson (editor) Contemporary Field Research: A Collection of Readings. Illinois: Waveland Press, Inc, pp. 19 - 35.

CORBIN, J. and STRAUSS, A. (1990) 'Grounded Theory Research: Procedures, Canons, and Evaluative Criteria', in N.R. Pandit (1996), 'The Creation of Theory: A Recent Application of the Grounded Theory Method', The Qualitative Report, Vol. 2, No. 4, < 4/pandit.html>

CRONBACH, L.J. (1982) Designing Evaluations of Educational and Social Programs. San Francisco: Jossey-Bass.

DENZIN, N.K. (1988) in M. Lonkila (1995), 'Grounded Theory as an Emerging Paradigm for Computer-assisted Qualitative Data Analysis', in U. Kelle (editor) Computer-Aided Qualitative Data Analysis: Theory, Methods and Practices. London: Sage Publications, pp. 41 - 51.

DENZIN, N.K. (1994) 'The Art and Politics of Interpretation', in N.K. Denzin and Y.S. Lincoln (editors) Handbook of Qualitative Research. Thousand Oaks: Sage, 500-515.

DEY, I. (1996) Qualitative Data Analysis: A User- Friendly Guide for Social Scientists. London: Routledge.

FIELDING, N.G. (1988) 'Actions and Structure', in D. Silverman (1994), Interpreting Qualitative Data: Methods for Analysing Talk, Text and Interaction. London: Sage Publications.

GLASER, B.G. and STRAUSS, A.L. (1967) Discovery of Grounded Theory: Strategies for Qualitative Research. Chicago: Aldine.

HAMMERSLEY, M. (1990) The Dilemma of Qualitative Method: Herbert Blumer and The Chicago Tradition. London: Routledge.

HENWOOD, K. and PIDGEON, N. (1994) 'Beyond the Qualitative Paradigm: A Framework for Introducing Diversity within Qualitative Psychology', Journal of Community and Applied Psychology, Vol. 4, pp. 225 - 238.

KELLE, U. (1997) 'Theory Building in Qualitative Research and Computer Programs for the Management of Textual Data', Sociological Research Online, Vol. 2, No. 2, <>

KUHN, T. (1970) The Structure of Scientific Revolutions. Chicago: University of Chicago Press.

KVALE, S. (1996) Interviews: An Introduction to Qualitative Research Interviewing. Thousand Oaks: Sage Publications.

MERTON, R.K. (1957) 'Social Theory and Social Structure, 2nd Edition', in U. Kelle (1997) 'Theory Building in Qualitative Research and Computer Programs for the Management of Textual Data', Sociological Research Online, Vol. 2, No. 2, <http://www.socreso>

NEUMAN, W. Lawrence (1997) Social Research Methods: Qualitative and Quantitative Approaches, Third Edition. Boston: Allyn and Bacon.

NIPPERT-ENG, C. (1996) Home and Work: Negotiating Boundaries through Everyday Life. London: The University of Chicago Press.

PANDIT, N.R. (1996) 'The Creation of Theory: A Recent Application of the Grounded Theory Method', The Qualitative Report, Vol. 2, No. 4, <>

PATTON, M.Q. (1997) Utilization-Focused Evaluation: The New Century Text, Edition 3. Thousand Oaks: Sage.

RUBIN, H.J. and RUBIN, I.S. (1995) Qualitative Interviewing: The Art of Hearing Data. Thousand Oaks: Sage Publications.

RUBINSTEIN, R., Laughlin, C. and McMannis, J. (1984) Science as Cognitive Process: Towards an Empirical Philosophy of Science. Philadelphia, Penn.: University of Pennsylvania Press.

SILVERMAN, D. (1994) Interpreting Qualitative Data: Methods for Analysing Talk, Text and Interaction. London: Sage Publications.

STERN, P.N. (1980) 'Grounded Theory Methodology: Its Uses and Processes', in H.J. Streubert, and D.R. Carpenter (editors) (1995) Qualitative Research in Nursing: Advancing the Humanistic Imperative. Philadelphia: J.B. Lippincott Company.

STERN, P.N. (1994) 'Eroding Grounded Theory', in J.M. Morse (editor) Critical Issues in Qualitative Research Methods. Thousand Oaks CA: Sage, pp. 212 - 223.

STRAUSS, A.L. (1987a) Qualitative Analysis for Social Scientists. Melbourne: Cambridge University Press.

STRAUSS, A.L. (1987b) 'Qualitative Analysis for Social Scientists', in M. Hammersley (1990), The Dilemma of Qualitative Method: Herbert Blumer and The Chicago Tradition. London: Routledge.

STRAUSS, A.L. and CORBIN, J. (1990) Basics of Qualitative Research: Grounded Theory Procedures and Techniques. London: Sage.

STRAUSS, A.L. and CORBIN, J. (1994) 'Grounded Theory Methodology', in N.K. Denzin and Y.S. Lincoln (editors) Handbook of Qualitative Research. Thousand Oaks: Sage, pp. 273 - 285.

STREUBERT, H.J. and CARPENTER, D.R. (1995) Qualitative Research in Nursing: Advancing the Humanistic Imperative. Philadelphia: J.B. Lippincott Company.

TAYLOR, S.J. and BOGDAN, R. (1984) Introduction to Qualitative Research Methods: The Search for Meanings. Brisbane: John Wiley and Sons.

TURNER, B.A. (1981) 'Some Practical Aspects of Qualitative Data Analysis: One Way of Organising the Cognitive Processes Associated with the Generation of Grounded theory', in A. Bryman (1993), Quantity and Quality in Social Research. London: Routledge.

WALLACE, W.L. (1971) The Logic of Science in Sociology. Chicago, Illinois: Aldine-Atherton.

WINTER, G. (2000) 'A Comparative Discussion of the Notion of 'Validity' in Qualitative and Quantitative Research', The Qualitative Report, Vol. 4, Nos. 3 & 4, < 3/winter.html>

YIN, R.K. (1989) Case Study Research: Design and Analysis (Revised Edition). London: Sage Publications.
