Copyright Sociological Research Online, 1997

 

Kelle, U. (1997) 'Theory Building in Qualitative Research and Computer Programs for the Management of Textual Data'
Sociological Research Online, vol. 2, no. 2, <http://www.socresonline.org.uk/2/2/1.html>

To cite articles published in Sociological Research Online, please reference the above information and include paragraph numbers if necessary

Received: 11/2/97      Accepted: 2/6/97      Published: 30/6/97

Abstract

This article refers to recent debates about the potential methodological costs and benefits of computer use in qualitative research and about the relationship between methodological approaches (eg. 'Grounded Theory') on the one hand and computer-aided methods of qualitative research on the other. It is argued that the connection between certain computer-aided strategies and methodological approaches is far looser than is often assumed. Furthermore, the danger of methodological biases and distortion arising from the use of certain software packages is overemphasized in current discussions, as far as the basic tasks of textual data management ('coding and retrieval') usually performed by this software are concerned. However, with the development of more advanced and complex coding and retrieval techniques, which are regarded by some authors as tools for 'theory building' in qualitative research, methodological confusion may arise if basic prerequisites of qualitative theory building are not taken into consideration. Therefore, certain aspects of qualitative theory building which are relevant for computer-aided methods of textual data management are discussed in the paper.


Keywords:
Computer-Aided Qualitative Data Analysis; Grounded Theory; Methodology; Qualitative Methods; Theory Building

1.1
In their article Qualitative Data Analysis: Technologies and Representations, Coffey, Holbrook and Atkinson (1996) have expressed their concerns that the increasing use of specific computer software could lead researchers to adopt a new orthodoxy of qualitative analysis. The authors argue that this would go strictly against current postmodernist and poststructuralist trends within ethnography which foster the acceptance and celebration of diversity. The article by Coffey and colleagues represents the most recent in a series of concerned warnings regarding potential methodological dangers of computer-aided qualitative data analysis software (cf. Seidel, 1991; Agar, 1991; Seidel and Kelle, 1995; Kelle and Laurie, 1995). Since the advent of such software, many qualitative researchers, developers of such software among them (including Seidel, 1991; Seidel and Kelle, 1995), have felt unease about the prospect that the use of computers could alienate the researcher from their data and enforce analysis strategies that go against the methodological and theoretical orientations qualitative researchers see as the hallmark of their work.

1.2
In an earlier writing, Lee and Fielding (1991: p.8) linked the fear of the computer taking over the analysis to an often used literary archetype which found its best expression in Mary Shelley's famous novel Frankenstein; or, The Modern Prometheus. The idea that a computer could become a kind of Frankenstein's monster and finally turn against its human creators is, as one can see from various novels and movies (eg. Arthur C. Clarke's novel 2001: A Space Odyssey, together with Stanley Kubrick's screen adaptation), a firm part of modern mythology. But the fear of the computer alienating the qualitative researcher from their data should not simply be dismissed as a fantasy derived from popular myths, since it is also rooted in differing concepts of the role of software in the production of sociological knowledge. At first glance the reluctance of many qualitative researchers concerning computers could be seen as a result of the paradigm of computer use which was dominant until the advance of the Personal Computer: in the mainframe era computers were mainly seen as 'number crunchers' performing algebraic operations on numerical data (cf. Kelle, 1995: p.8). Qualitative researchers' reservations about computer-aided methods of data analysis at least partly reflected the distance of these scholars from the mainstream methodology of quantitative survey and experimental research, where, during the 1960s and 1970s, the computer became an indispensable aid.

1.3
Furthermore, a closer look at the philosophical and epistemological roots of interpretive research makes clear that a certain caution towards computer technology is justified with regard to the nature of the process of hermeneutic Verstehen. Philosophical approaches which play an important role within qualitative research, such as Phenomenology, the Oxford Philosophy of Language and continental Hermeneutical Philosophy (cf. Giddens, 1976), have always stressed that ambiguity and context-relatedness have to be regarded as central characteristics of everyday language use. Following this argument - which has been further elaborated by contemporary postmodernist approaches (Denzin and Lincoln, 1994: pp.10f.) - it is impossible to make sense of written or spoken messages in everyday contexts - the operation which forms the core of hermeneutic Verstehen - without a 'tacit knowledge' which cannot easily (if at all!) be formalized. By contrast, the application of a 'Turing machine' (which represents the most general concept of an information processing machine) to a certain domain requires the formulation of exact and precisely stated rules which are completely context-free and contain no ambiguities. Thus, the attempt to apply the logic of a Turing machine to the domain of human understanding can be regarded as problematic, as has also been argued by critical computer scientists (see Dreyfus, 1972; Dreyfus and Dreyfus, 1986; Winograd and Flores, 1986).

1.4
Nevertheless, these arguments only relate to the possibility of analyzing textual data with the help of algorithmic procedures (much as quantitative data are analyzed with the help of statistical algorithms), but not to the opportunities of ordering and structuring textual material with the help of database technology. However, as Platt has demonstrated in her recent investigations, the choice of methods is not always motivated only by epistemological or methodological considerations (Platt, 1996), but by a variety of contextual factors, eg. the availability and accessibility of a certain technology. Thus, the development of software for textual data management did not start before qualitative researchers who were also ambitious computer users discovered the great possibilities for text storage and retrieval offered by computer technology. This did not take place before the advent of the Personal Computer, which led to a shift in the prominent paradigm of computer use from 'computers as number crunchers' to 'computers as devices for the intelligent management of data', incorporating facilities for the complex and convenient storage and retrieval of text. Consequently, the newly developed software programs for computer-aided textual analysis became tools for data storage and retrieval rather than tools for 'data analysis'. Nevertheless, terms used quite frequently in the ongoing debate, like 'computer-aided qualitative data analysis' or 'software program for theory building', carry implicit connotations of computer programs as tools for the analysis of textual data which could be compared to software packages that perform statistical analyses. Looking at the current literature, one can identify several reasons for this. Since the computer represents a strong metaphor for systematicity, objectivity and rigour, it inspired not only 'Frankenstein's monster' fears, but also optimistic forecasts that computers would make the qualitative research process more transparent and rigorous (Conrad and Reinharz, 1984: p. 4; Richards and Richards, 1991), thus lending respectability to a methodology which had always suffered from the reputation of fostering unsystematic and 'impressionistic' forms of inquiry. Whereas such ideas raised suspicion among researchers rooted in constructivist and postmodernist approaches who challenge the applicability of general standards of 'validity' to qualitative research (cf. Kelle and Laurie, 1995: pp. 20f.), arguments of that kind often seem to be used by others as a strategic means to convince funding boards that the proposed research endeavour will be carried out in a rigorous and scholarly way (cf. also Lee and Fielding, 1995). Furthermore, issues of software marketing certainly also play an important role in the choice of a certain methodological terminology.

1.5
In the following sections the argument will be put forward that the danger of methodological biases and distortion arising from the use of certain software packages for qualitative research may be overemphasized, as far as the basic tasks of textual data management usually performed by this software are concerned. In Section 2 these basic tasks, namely the operations of coding and retrieval, will be discussed, and it will be argued that 'coding and retrieval' represents an 'open technology' which can be creatively used in various contexts of hermeneutic work. It will further be argued that the connection between certain data archiving strategies on the one hand and certain methodologies (especially 'Grounded Theory') on the other is far looser than often assumed. In Section 3 more complex coding and retrieval strategies will be presented which are often viewed as a basis of 'qualitative theory building'. These strategies indeed carry the danger of methodological confusion and distortion if basic prerequisites of qualitative theory building are not taken into consideration. Therefore, certain aspects of qualitative theory building which are relevant for computer-aided methods of textual data management are discussed in Section 4.

What are the Basic Functions of 'Computer Programs for Qualitative Analysis'?

2.1
The general limitations of a Turing machine with regard to understanding the ambiguities and context-relatedness of everyday language have already been discussed in ¶1.3. Nevertheless, there are a variety of mechanical data organization procedures which play a role in qualitative research. These procedures, which address the analyst's need to identify similarities, differences and relations between different text passages, can be mechanized and thus performed with the help of an electronic data processing machine. In order to be able to retrieve text segments from different parts of the text corpus, an organizing scheme must be constructed. In principle, two possibilities are available for the construction of such a scheme, both of which have been widely used for hundreds or even thousands of years by scholars who work with texts in the historical sciences, philology, literary criticism, theology and, nowadays, the social sciences: (1) the construction of indexes and (2) the inclusion of cross-references in the text.

2.2
We are all familiar with indexes (or 'registers', or 'concordances') of various kinds; the most widely applied form of index is certainly the author and subject index of a book. An electronic index is usually constructed by storing index words together with the 'addresses' of text passages. Such an address may contain the beginning and the end, in terms of line numbers, of a certain text passage to which the index word refers. Software programs which are based on these principles have been called 'code-and-retrieve' programs (Kelle, 1995: p.4ff.; Richards and Richards, 1995).
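The underlying data structure can be pictured with a minimal sketch (Python is used here purely for illustration; the data, file names and function names are hypothetical and not taken from any actual package):

    # A minimal sketch of an electronic index ('code-and-retrieve'):
    # each index word (code) points to the addresses of text passages,
    # an address being (document, first line, last line).

    index = {
        "work orientation": [("interview_01.txt", 12, 25),
                             ("interview_07.txt", 88, 102)],
        "family":           [("interview_01.txt", 40, 59)],
    }

    def retrieve(code):
        """Return the addresses of all passages indexed with 'code'."""
        return index.get(code, [])

    for document, start, end in retrieve("work orientation"):
        print(f"{document}, lines {start}-{end}")

Retrieval thus amounts to looking up a code and fetching the text found at the stored addresses.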

2.3
Electronic cross-references can be constructed with the help of so-called 'hyperlinks'. By pressing a 'button' the user of a textual database can jump between the text passages which are linked together. With the advent of hypertext and hypermedia technology it has often been forgotten that their main underlying principles have been widely known and applied for hundreds of years. One can easily see this by opening an ordinary King James Bible, where a multitude of 'hyperlinks' are displayed in the margins of every page. By using these links a 'bible user' can, for example, jump between a teaching of Jesus in one of the Gospels and the passage of the Old Testament to which Jesus refers in this teaching.
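In the same illustrative spirit, a cross-reference is nothing more than a stored pair of passage addresses; 'pressing the button' means looking up the counterpart of the current passage. The following is a hypothetical sketch, not the mechanism of any particular program:

    # A sketch of cross-referencing via 'hyperlinks': each link is a
    # stored pair of passage addresses; following a link means jumping
    # from one passage to its counterpart.

    links = [
        (("Matthew", 4, 4), ("Deuteronomy", 8, 3)),  # Jesus quoting the Old Testament
        (("Mark", 12, 29), ("Deuteronomy", 6, 4)),
    ]

    def follow(passage):
        """Return all passages linked to the given passage."""
        return ([b for a, b in links if a == passage] +
                [a for a, b in links if b == passage])

    print(follow(("Matthew", 4, 4)))   # [('Deuteronomy', 8, 3)]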

2.4
Such straightforward techniques of data management should not at all be considered trivial. Instead, they have a far-reaching methodological significance. The comparison of text passages ('synopsis'), for example, helped to develop the most widely accepted theory about the origin of the four gospels. Coffey and her colleagues assume a convergence between methodological approaches and the preferred technique of data organization, whereby 'indexing' or 'coding' is nearer to an 'orthodox, grounded theory oriented' style of analysis while 'cross-referencing' through 'hyperlinks' would be more adequate for a 'postmodernist' approach which celebrates diversity. Looking at biblical exegesis as a field of hermeneutics where extended experience with such techniques has been collected, one will not find much evidence to confirm such assumptions: techniques of indexing and cross-referencing are used simultaneously by all interpreters, regardless of whether they are more 'orthodox' and 'dogmatic' or more 'liberal', that is, whether or not they take into account the polyvocality and diversity of biblical authors, their intentions and their diverse cultural backgrounds. And those (mostly historical) connections between data management techniques and hermeneutical schools that do exist point to the fact that 'indexing' (or 'coding') is extremely well suited to serve as a weapon against orthodoxy. Techniques of indexing and coding were used extensively in the 18th and 19th centuries (and are still used) by biblical scholars who wished to challenge claims of biblical inerrancy and infallibility. But, needless to say, biblical literalists and fundamentalists also make use of synopses, denying inconsistencies between text passages by means of complicated and devious interpretations.

2.5
The reasons for the preference of indexing over cross-referencing by the developers of the first software programs for qualitative analysis may be far simpler than Coffey and her colleagues assume: if a certain text is structured for the first time, indexing is much easier than the use of cross-references. Let us assume that the analyst finds a text passage 'B' which contains a similarity or a substantive relation to a text passage 'A'. In order to define a cross-reference or 'hyperlink' between 'A' and 'B', 'A' has to be found again in the text corpus, which is much simpler if 'A' has been previously indexed.

2.6
At first glance, electronic coding and retrieval represents a mere mechanization of widely used manual indexing techniques which does not change their underlying logic. Nowadays, a variety of programs are available which are proposed as an alternative to code-and-retrieve software (Richards and Richards, 1995) and which have also been addressed as 'third generation' software for qualitative analysis (Mangabeira, 1995). In the literature it has been emphasized that programs like NUDIST, HyperRESEARCH, ATLAS/ti, AQUAD or Hypersoft may be of specific use for qualitative 'theory building' (Kelle, 1995: p.62ff.; Weitzman and Miles, 1995; Richards and Richards, 1995). Nevertheless, these new programs (which often represent new and expanded versions of simple code-and-retrieve programs) do not provide a totally different logic of textual data management, but only more or less complicated extensions of code-and-retrieve facilities. The question now would be whether these extended features exceed the analytic possibilities offered by manual methods.

Complex Coding Through Defining Linkages Between All Kinds of Elements of the Qualitative Database

2.7
As has been mentioned before, in most code-and-retrieve programs coding is technically realized by defining pointers which contain the addresses of text segments and thus establish linkages between codes and text segments. In the same way it is also possible to define a linkage between one code and another. This linkage can take the form of, for example, the subsumption of one code under a more general code, or the subdivision of one code into several more refined subcategories. If researchers restrict themselves to this kind of connection, their category scheme can be represented by hierarchical networks. The program NUDIST contains extensive features which support the construction of hierarchies of code categories. But linkages between codes may not only take the form of hierarchical relations; they can also form whole networks of categories, containing chains or loops. The program ATLAS/ti offers a variety of features for building non-hierarchical networks. Hyperlinks, offered particularly by ATLAS/ti and Hypersoft, represent a further possibility for linking elements of the qualitative database: text segments can be linked to each other without using codes. Most programs also contain features that allow the researcher to write short comments on the data ('memos') and to link these memos to text segments, codes or other memos.
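The different kinds of linkage described here can again be represented by simple data structures. The following sketch is hypothetical (the relation names and memo texts are invented for illustration), but it shows that hierarchical code trees, non-hierarchical code networks, code-free hyperlinks and memos are all variations of the same pointer principle:

    # A sketch of linkages in a qualitative database (hypothetical data).

    # Codes point to text segments: (document, first line, last line).
    code_to_segments = {
        "work/aspirations": [("interview_01.txt", 12, 25)],
    }

    # Hierarchical linkage: subsumption of a code under a more general one.
    subsumed_under = {"work/aspirations": "work"}

    # Non-hierarchical linkage: named relations between codes, which may
    # form networks containing chains or loops.
    code_links = [("critical life event", "is-associated-with",
                   "emotional disturbance")]

    # Hyperlinks: text segments linked directly, without using codes.
    segment_links = [(("interview_01.txt", 12, 25),
                      ("interview_07.txt", 88, 102))]

    # Memos linked to codes (they could equally be linked to segments
    # or to other memos).
    memos = {"work": "Consider splitting this code by gender."}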

Complex Retrieval Techniques for Restricting Search Procedures to Text Documents or to Text Segments

2.8
Complex retrieval techniques can help to retrieve text segments according to document-specific variables such as the age, gender or profession of an interviewee ('selective retrieval'). With selective retrievals the researcher can, for example, systematically compare the work orientations of men and women or the work orientations of members of certain professions. Another useful complex retrieval technique utilizes information on whether text segments coded with certain codes co-occur in a given document. Co-occurrences can be defined in various ways: indicated by the overlapping or nesting of text segments to which the codes under investigation are attached, or indicated by a certain specified maximum distance between the different text segments. With certain software packages (eg. THE ETHNOGRAPH and NUDIST) it is also possible to retrieve all text segments that follow each other in a certain sequence.
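Both retrieval forms can be sketched in a few lines (hypothetical data and function names; co-occurrence is defined here, as one of the possibilities mentioned above, by overlapping line ranges within the same document):

    # A sketch of complex retrieval (hypothetical data and names).

    documents = {"interview_01.txt": {"gender": "f", "profession": "nurse"},
                 "interview_02.txt": {"gender": "m", "profession": "teacher"}}

    segments = [  # (document, first line, last line, code)
        ("interview_01.txt", 10, 30, "critical life event"),
        ("interview_01.txt", 25, 40, "emotional disturbance"),
        ("interview_02.txt", 5, 15, "critical life event"),
    ]

    def selective_retrieval(code, **criteria):
        """All segments with 'code' from documents matching the criteria."""
        return [s for s in segments if s[3] == code and
                all(documents[s[0]].get(k) == v for k, v in criteria.items())]

    def co_occurrences(code_a, code_b):
        """Pairs of overlapping segments coded with code_a and code_b."""
        return [(a, b) for a in segments for b in segments
                if a[3] == code_a and b[3] == code_b
                and a[0] == b[0]                    # same document
                and a[1] <= b[2] and b[1] <= a[2]]  # line ranges overlap

    print(selective_retrieval("critical life event", gender="f"))
    print(co_occurrences("critical life event", "emotional disturbance"))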

2.9
Current discussions on the Qual-software mailing list, my own experiences as a methodological consultant and investigations among users (Dotzler, 1995) indicate that in practice many users of computer software restrict themselves to ordinary coding and retrieval and do not exploit the various possibilities of 'third generation' or 'theory building' software. To understand this fact and to assess the methodological implications of computer-aided data organization techniques it is of utmost importance to look at the amounts of data collected and at the relative share of different tasks in the analytic process. Lee and Fielding have found that the median sample size, even in qualitative studies that use software for data management, is about 40 (Lee and Fielding, 1996a), which seems plausible if one bears in mind that 'representativeness' in the statistical sense of that word is usually not regarded as the crucial purpose of qualitative sampling. Qualitative research usually means the collection and analysis of unstructured textual material in order to develop concepts, categories, hypotheses, theories (or mere descriptions of social life worlds). Thus, most of the time during 'qualitative data analysis' is spent on reading, rereading, interpreting, comparing and thinking about texts. The analysis of thousands of text segments in hundreds of interviews would therefore be an insurmountable task with or without a computer. But, with some dozens of cases, the relative utility of many of the complex coding and retrieval techniques compared to the ordinary retrieval techniques described in ¶2.3 is in many cases only modest. Most of the complex retrieval commands outlined in ¶2.8 can also be realized with the first versions of 'code-and-retrieve' programs, by typing in a few more commands and by integrating paper and pencil work. Selective retrieval, for instance, may be realized without electronically storing case variables, simply by drawing on a paper list of cases with certain attributes. The use of a computer for these tasks is of course a facilitation of work. However, if one takes into account that analysts spend several hours or even days interpreting and comparing retrieved text segments, it becomes clear that the gain of typing one command instead of five is often not considered high enough to outweigh the time and effort necessary to learn new software functions. Investigations conducted by Lee and Fielding (1996a) also show that users tend to cease using a specific software package rather than adapt their own analysis strategy to that software. There seem to be good reasons to assume that researchers are primarily guided by their research objectives and analysis strategies, and not by the software they use.

Grounded Theory and CAQDAS: Misunderstanding the Relation Between Methodology and Data Archiving Strategies

3.1
'Code-and-retrieve' methods are useful for any researcher who wants to compare text segments that come from different sources and refer to a common topic, regardless of whether he or she is affiliated to the methodology of Grounded Theory or not. The comparison of text segments is conducted in different hermeneutic sciences, such as sociology, history, theology etc. Consequently, there should be no reason for an exclusive methodological link between Grounded Theory on the one hand and computer software for qualitative data administration on the other.

3.2
Nevertheless, as Lonkila has noted, users' guides as well as the methodological writings about software for qualitative data management give the impression of a strong influence of Grounded Theory (Lonkila, 1995: p.46). However, a closer look at the methodological backgrounds of the developers gives the clear impression that different programs have been developed on the basis of differing conceptions of how knowledge of social reality is produced. John Seidel developed and used his software package 'THE ETHNOGRAPH' in various projects which applied methods of discourse analysis rooted in phenomenological and ethnographical approaches (Seidel, personal communication). Udo Kuckartz based his strategy of qualitative analysis, which led to the development of the software packages 'MAX' and 'WINMAX', on Max Weber's concept of 'ideal types' (Kuckartz, 1995). Prior to the development of his program 'AQUAD', Guenter Huber belonged to a group of researchers who tried to integrate a Popperian methodological approach into qualitative research (Huber and Mandl, 1982). These ideas strongly influenced 'AQUAD'. (For further discussion of the relation between different concepts of theory and method in social science on the one hand and different software packages on the other, refer to Tesch, 1990; Weitzman and Miles, 1995: pp. 329ff.; Mangabeira, 1995; Kelle, 1995.) Most interestingly, despite their differing methodological and theoretical backgrounds, all of the developers have based their programs on 'code and retrieve' algorithms, which supports the argument that 'code and retrieve' represents an 'open technology' applicable in various theoretical and methodological contexts.

3.3
There is strong evidence that this theoretical and methodological diversity can also be found among users of software for textual data management. In their comments on the paper by Coffey et al, Lee and Fielding draw our attention to the empirical fact that 70% of a sample of qualitative studies performed with the help of computers show no explicit relation to Grounded Theory (Lee and Fielding, 1996b: ¶3.2). There are possible explanations other than the emergence of a new orthodoxy for the fact that Grounded Theory is quite often mentioned in methodological writings:

3.4
In the users' guides and methodological writings already mentioned, software is not only regarded as an instrument for data archiving and management but also as a tool for data analysis. Therefore, a methodological underpinning is needed. At present, proponents of the Grounded Theory approach belong to those very few authors who try to describe in detail the analytical procedures applied in qualitative research. Novices in qualitative research often welcome such detailed accounts of analysis procedures, which help to overcome uncertainties caused by the often bemoaned lack of explicitness of qualitative research procedures. One reason for this lack of explicitness certainly lies in the difficulty of formalizing the interpretive and hermeneutic analysis of text; many scholars therefore prefer to address interpretive analysis as an artistic endeavour rather than as a 'method' (Eisner, 1981). The obvious fact that interpretive analysis in ethnography and qualitative sociology contains ineliminable subjective elements has not only always raised the suspicion of adherents of quantitative mainstream methodology but also inspired the shift of many colleagues towards 'postmodernist' and 'deconstructionist' approaches. At present, Grounded Theory seems to be almost the only approach which can meet the desire of others who look for a concrete and applicable methodology of qualitative analysis. But a closer look at the concepts and procedures of Grounded Theory makes clear that Glaser, Strauss and Corbin provide the researcher with a variety of useful heuristics, rules of thumb and a methodological terminology rather than with a set of precise methodological rules (or 'algorithms') (Kelle, 1996). Consequently, concerns about a new orthodoxy of qualitative analysis based on Grounded Theory seem to lack solid ground.

3.5
Lonkila points out that there are extensive commonalities between the terminology of Grounded Theory and the terminology used in the context of computer use for qualitative data administration. Since the advent of the first computer programs it has become common to talk of 'coding', although the term 'indexing' (which is preferred by some authors, eg. by the developers of NUDIST) seems much more accurate if one looks at the origins of this computer-aided technique of textual data management. One of the reasons for the preference of the terms 'codes' and 'coding' may be that these notions also play an important role within Grounded Theory. The term 'theory building', which is often connected to advanced code-and-retrieve facilities, also parallels the terminology of Grounded Theory. It is related to the assumption (often made implicitly) that the 'codes' (or 'indexes') used to organize the data material represent those theoretical categories which the researcher uses or develops in the ongoing process of analysis. Since a theory can be regarded as a network of categories, the idea suggests itself that tools for connecting codes to each other could be helpful for displaying the structure of the emerging theory, and that software which facilitates the connection of categories can make a major contribution towards theory building.

3.6
It has also been proposed to use techniques of complex retrieval to 'test' hypotheses which are derived from the emerging theory (Hesse-Biber and Dupuis, 1995; Huber, 1995). To give an example: a researcher who has coded the data with codes for 'critical life events' and 'emotional disturbances' may now examine the hypothesis that critical life events are always or frequently accompanied by emotional disturbances. In this way the co-occurrence of codes in a certain document may be seen as an indicator of evidence for or against a hypothesis. Two important caveats should be made concerning such strategies. The first refers to the role of 'codes' in the ongoing process of analysis as distinguished from 'theoretical categories'. The second relates to two possible meanings of the term 'hypothesis': the term may denote an empirically testable statement about the exact relation of two defined variables, or it may stand for a tentative and imprecise conjecture about possible relationships between two domains of interest.
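Taken at face value, such a 'test' boils down to simple counting, as in the following hypothetical sketch; the caveats that follow explain why this arithmetic carries little evidential weight as long as the codes themselves remain 'fuzzy':

    # A sketch of the naive 'hypothesis test' via code co-occurrence
    # (hypothetical data): in how many documents coded with 'critical
    # life event' does 'emotional disturbance' also occur?

    codes_per_document = {
        "case_01": {"critical life event", "emotional disturbance"},
        "case_02": {"critical life event"},
        "case_03": {"critical life event", "emotional disturbance"},
        "case_04": {"emotional disturbance"},
    }

    with_cle = [d for d, codes in codes_per_document.items()
                if "critical life event" in codes]
    with_both = [d for d in with_cle
                 if "emotional disturbance" in codes_per_document[d]]

    # Prints '2 of 3' here -- but such a figure counts as evidence for
    # or against the hypothesis only if both codes denote clearly
    # defined empirical events (see the caveats discussed below).
    print(f"{len(with_both)} of {len(with_cle)} documents with "
          f"'critical life event' also show 'emotional disturbance'")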

3.7
If coding is done within a hypothetico-deductive (H-D) research strategy (eg. in the context of 'quantitative content analysis') it is obvious that codes must represent the theoretical categories applied to the field under study. If a quantitative content analyst wants to find out whether newspapers with a 'liberal party affiliation' express a more positive attitude towards certain social policy measures than newspapers with a 'conservative party affiliation', s/he is well advised to operationalize these categories in a proper way and to code newspapers according to the party affiliation of their staff. But, as Charmaz points out:
Qualitative coding is not the same as quantitative coding. The term itself provides a case in point in which the language may obscure meaning and method. Quantitative coding requires preconceived, logically deduced codes into which the data are placed. Qualitative coding, in contrast, means creating categories from interpretation of the data. Rather than relying on preconceived categories and standardized procedures, qualitative coding has its own distinctive structure, logic and purpose. (Charmaz, 1983: p.111)

3.8
In qualitative analysis, codes are often used not to denote facts but to 'break up' the data (Strauss and Corbin, 1990: pp. 61ff.). Such codes represent 'perspectives' of the researcher rather than clear-cut, empirically contentful categories (cf. Becker and Geer, 1960: p. 280). According to Becker and Geer, these perspectives and the 'areas to which they apply' are only 'tentatively identified' when the coding begins. Coding is then done by going 'through the summarized incidents, marking each incident with a number or numbers that stand for the various areas to which it appears to be relevant'. Consequently, the coding of text does not serve to condense relevant information and to decide whether a certain person or event falls under a certain class of events or persons, but simply to make sure 'that all relevant data can be brought to bear on a point'. Here, the function of coding is restricted to sign-posting: codes are stored together with the 'address' of a certain text passage and, drawing on this information, the researcher can locate all the possible information provided by the textual data on the relevant topic.

3.9
Such qualitative codes which represent 'perspectives' and serve as 'sign-posts' are not very useful for empirically testing statements about the exact relationship between two defined variables. If a researcher wants to test the hypothesis that 'critical life events' (CLE) frequently co-occur with 'emotional disturbances' (EMO), it is of utmost importance to be able to decide whether a given observed event is a case of 'CLE' or 'EMO' or not. The codes used in such a hypothesis must be mutually exclusive: that means, it must be possible to decide whether a certain event is 'critical' or 'non-critical', and whether a given person is 'emotionally disturbed' or 'not disturbed'. In other words, such codes must represent clearly defined empirical events. This is not the kind of 'hypothesis testing' which can be employed using the kind of qualitative codes mentioned in ¶3.7; and it is also not the kind of hypothesis examination frequently employed in qualitative research. If the researcher has coded all segments where the interviewees talk about 'critical life events' and those passages where 'emotions' are mentioned, s/he may retrieve all text segments which were coded with codes referring to both 'critical life events' and 'emotions' in order to explore the emotional significance of life events. The background for that exploration could be a somewhat vague idea about a relation between life events and emotions. Such hypotheses, when they first come into a researcher's mind, are usually not highly specified and definite propositions about certain facts, but tentative and imprecise, sometimes very vague conjectures about possible relationships. Rather than hypotheses in the strict sense, they are hypotheses about what kind of propositions, descriptions or explanations will prove useful in further analysis. They are insights that 'whatever specific claim the successful H(ypothesis) will make, it will nonetheless be an hypothesis of one kind rather than another' (Hanson, 1971: p. 291). The notion of hypothesis testing would be rather misleading here, if one understands it as an attempt to falsify an empirically contentful statement. But a heuristic idea can lead to the development of falsifiable statements, for example if one finds that the interviewees in the sample with specific life events also show specific (negative) emotions. This process is of course not hypothesis testing in the traditional sense, that is, the application of a set of precisely defined rules intended to help the researcher decide whether a certain statement is true or false. Instead, the concept of 'Analytic Induction' which Lee and Fielding (1996b) mention in their paper (eg. Cressey, 1971; Lindesmith, 1968) can be seen as a framework that provides researchers with heuristic rules on how to develop a theory via the successive refinement of working hypotheses.

Using Textual Material and Computers for Theory Building: The Qualitative Approach

4.1
The previous section should have demonstrated that the mere implementation of an H-D approach in qualitative research may carry the danger that incompatible research logics are confused. The consequence of such a confusion can be that researchers neglect crucial prerequisites for the application of a certain technique (eg. by using 'fuzzy' codes to 'test' precise hypotheses). In the following I will use some examples from research conducted at the life course research centre 'Sfb 186' in Bremen to demonstrate how software for the management of textual data can be used to support the process of qualitative concept building, typology construction and theory development.

4.2
An orthodox methodological mainstream position presented in numerous textbooks requires the development of theories before the data collection takes place. In contrast, qualitative methodologists have emphasized that in qualitative research theories can be developed on the basis of the data material. One of the main arguments in favour of such a strategy is that theories are more 'empirically grounded' when developed on the basis of data material (eg. Glaser and Strauss, 1967: pp. 3f.). Unfortunately, this approach has given rise to a popular methodological myth which depicts qualitative research as a merely 'inductive' endeavour. Following this view, qualitative researchers approach their empirical field without any theoretical concepts whatsoever. To make the situation even worse, this myth has been nurtured by Glaser and Strauss' early methodological writings. In their famous book The Discovery of Grounded Theory these authors encourage researchers 'literally to ignore the literature of theory and fact on the area under study, in order to assure that the emergence of categories will not be contaminated...' (Glaser and Strauss, 1967: p. 37). Most ironically, this stance harks back to one of the main roots of modern positivism: in the early days of modern natural science many researchers followed the claims of empiricist philosophers like Bacon or Locke, who were convinced that the only legitimate theories were those which could be inductively derived by simple generalization from observable data. However, one of the most crucial and widely accepted insights of contemporary epistemology and cognitive psychology is that 'there are and can be no sensations unimpregnated by expectations' (Lakatos, 1982: p. 15). This is true not only for scientific knowledge but also for the common sense knowledge that provides the actors in a given empirical domain with the 'lenses' and conceptual networks that serve as a means for structuring everyday experience. The philosophical critique of inductivism highlights the role of previous knowledge as one of the crucial prerequisites of Fremdverstehen (Kelle, 1995: p. 38): qualitative researchers who investigate a different form of social life always bring with them their own lenses and conceptual networks. They cannot drop them, for in that case they would no longer be able to perceive, observe and describe meaningful events - confronted with chaotic, meaningless and fragmented phenomena, they would have to give up their scientific endeavour.

4.3
Both Glaser and Strauss took the 'theory-ladenness' of empirical observation into account to a far greater extent in their later methodological writings than in their 'Discovery' book. Strauss developed (partly in cooperation with Juliet Corbin) the 'paradigm model' to denote those theoretical concepts which are used in qualitative analysis to structure empirical observations (Strauss and Corbin, 1990: pp. 99ff.). According to these authors, a coding paradigm represents a general theory of action which can be used to build a skeleton or 'axis' of the developing 'grounded theory'. Strauss and Corbin have also taken a more liberal position concerning the role of literature in the research process, maintaining that 'all kinds of literature can be used before a research study is begun...' (Strauss and Corbin, 1990: p. 56). Glaser, although he has fully repudiated Strauss' concepts in his most recent book (Glaser, 1992), proposed a similar idea: 'theoretical codes' represent those theoretical concepts which the researcher has at his or her disposal independently of data collection and data analysis (Glaser, 1978).

4.4
The application of a coding paradigm or of 'theoretical codes' to empirical data is based on a logic of discovery which is neither inductive nor deductive. Instead it represents a special kind of logical reasoning whose premises are a set of empirical phenomena and whose conclusion is a hypothesis which can account for these phenomena. Hypothetical reasoning, as this form of inference can be called, is based on two forms of logical inference which were described by the pragmatist philosopher Charles Sanders Peirce: qualitative induction and abduction (cf. Hanson, 1965; Kelle, 1995: p. 39ff.). With qualitative induction a specific empirical phenomenon is described (or explained) by subsuming it under an already existing category or rule, whereas abductive inference helps to find hitherto unknown concepts or rules on the basis of surprising and anomalous events. Abductive inference combines in a creative way new and interesting empirical facts with previous theoretical knowledge. Thereby, it often requires the revision of preconceptions and theoretical prejudices - assumptions and beliefs have to be abandoned or at least modified. However, the theoretical knowledge of the qualitative researcher does not represent a fully coherent network of explicit propositions from which precisely formulated and empirically testable statements can be deduced. Rather, it forms a loosely connected 'heuristic framework' of concepts which helps the researcher focus his or her attention on certain phenomena in the empirical field.

4.5
Consequently, the theoretical preconceptions which qualitative researchers (whether they apply 'grounded theory' methodology or not) normally use to structure the data material (and which play a role in their abductive inferences) are quite different from those theoretical concepts that the H-D approach expects us to formulate prior to data collection and data analysis. Theoretical preconceptions used in qualitative analysis often do not represent explicit propositions about empirical facts. Rather, they should be referred to as 'heuristic concepts' which can be used to formulate 'orientation hypotheses' (Merton, 1957: p. 88). Such heuristic concepts, which serve as the lenses for the perception of the empirical world, are often implicit. In qualitative research and methodological writings from the qualitative tradition one will find at least three different kinds of such concepts.

4.6
The first kind comprises heuristic concepts derived from 'grand theories', that is, highly abstract concepts about the relations between actors or between actors and society in general. Blumer's definition of Symbolic Interactionism, found in his classical writings, may be viewed as an example of this use of theoretical concepts: 'Symbolic Interactionism sees meanings as social products, as creations that are formed in and through the defining activities of people as they interact' (Blumer, 1969: p. 5). It is important to note that such a statement (although it certainly is a theoretical statement) is not empirically testable in the sense already discussed. Any attempt to deduce logically from it an empirical statement which could, in principle, 'falsify' the theory would fail or provoke endless philosophical discussions about the meaning of terms like 'interaction', 'meaning' etc.

4.7
The second type of concept that plays an important role in qualitative research is 'theories of the members of the investigated culture'. These are the 'first-order constructions' or the stock of common sense knowledge of the actors who live in the investigated social world.

4.8
The third type of concept is much closer to the use of the term 'theory' found in H-D research: theories developed by a sociological expert about a certain field of social action that have enough empirical content to be tested. A statement from Cressey's study of embezzlement may be used as an example to clarify this point: '...trust violators usually consider the conditions under which they violated their own positions of trust as the only "justifiable" conditions, just as they consider their own trust violation to be more justified than a crime such as robbery or burglary' (Cressey, 1971: pp. 104f.). This statement can (unlike the theoretical statement of Blumer cited in ¶4.6) at least in principle be falsified, if someone undertakes the effort of drawing a sample of trust violators and investigating whether they see their trust violations as justifiable. But, unlike in H-D research, such a theory consisting of empirically contentful statements is not the starting point of the qualitative research process, but its result.

Coding Examples

5.1
Qualitative research often starts with concepts of the first and second kind (¶4.6 and ¶4.7) and then proceeds to the construction of a theory of the third kind (¶4.8). Thereby grand theories play the role of a theoretical axis or 'skeleton' to which the 'flesh' of empirically contentful concepts from the members' common-sense knowledge is added, in order to construct middle- or low-range concepts or theories about the empirical field under investigation.

5.2
This process can be supported by structuring the textual material with the help of categories (or 'codes') which are derived either from common sense concepts (¶4.7) or from abstract theoretical concepts (¶4.6). Code categories developed from common sense knowledge and from abstract theoretical concepts have something in common: they fit various kinds of social reality, and it is not necessary to know something concrete about the investigated domain in order to use them. In other words, they cannot be used to construct empirical propositions without additional information about empirical facts. This makes them rather useless in the context of an H-D strategy, but it is their strength in the context of exploratory, interpretative research: concepts derived from common-sense knowledge (¶5.3) or from abstract theoretical concepts (¶5.4) can serve as heuristic tools for theory building.

5.3
The main categories (1, 5, 8) shown in Figure 1 represent examples of the first type of categories, which are drawn from common-sense knowledge. This code scheme comes from a research project that studies the transition from school to the labour market (Heinz, 1996). Open interviews were conducted in order to reconstruct the decision processes of school graduates who entered vocational training courses. In the interviews all text passages were coded where the interviewee talked, for instance, about experiences in his/her job, about relevant institutions, or about his/her family. These code categories were developed from the material through a process similar to 'open coding' (Glaser, 1978: pp. 56 - 61; Strauss and Corbin, 1990: pp. 61 - 74). Thereby the coders either applied code categories drawn from their own common-sense knowledge or used 'in vivo' codes (words used by the interviewees themselves) to code the material.

5.4
The second type of code categories frequently used for qualitative coding are codes derived from abstract theoretical concepts: following Strauss and Corbin, one may call a code scheme developed from such concepts a coding paradigm; using the terminology of Glaser, one could speak of 'theoretical codes'. In our project the decision processes described by the interviewees were structured according to the following three categories: (1) aspirations, which represent the respondents' preferences that were used to account for occupational options, (2) realizations, which consist of the actual steps of action that were taken to fulfil aspirations, and (3) evaluations, which were the respondents' assessments of the relations between aspirations, conditions and consequences of action. These categories represent the sub-codes (1.1-1.3, 5.1-5.3, 8.1-8.3) shown in Figure 1.

1    job and profession
1.1  job and profession/aspirations
1.2  job and profession/realizations
1.3  job and profession/evaluations
(.....)
5    cohabitation
5.1  cohabitation/aspirations
5.2  cohabitation/realizations
5.3  cohabitation/evaluations
(.....)
8    children
8.1  children/aspirations
8.2  children/realizations
8.3  children/evaluations

Figure 1: An Extract from a Coding Scheme
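The faceted structure of this scheme (common-sense topic codes crossed with the three theoretical sub-codes) could be generated mechanically, as the following hypothetical sketch illustrates:

    # A sketch of the faceted code scheme of Figure 1: common-sense
    # topic codes are crossed with the three sub-codes derived from
    # theoretical concepts (hypothetical representation).

    topics = {1: "job and profession", 5: "cohabitation", 8: "children"}
    theoretical_subcodes = ["aspirations", "realizations", "evaluations"]

    scheme = {}
    for number, topic in topics.items():
        scheme[f"{number}"] = topic
        for i, subcode in enumerate(theoretical_subcodes, start=1):
            scheme[f"{number}.{i}"] = f"{topic}/{subcode}"

    for key in sorted(scheme, key=lambda k: [int(p) for p in k.split(".")]):
        print(key, scheme[key])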

5.5
The process of coding the data is the preliminary step for the actual analysis, in which the analyst tries to make sense of the data in order to construct 'meaningful patterns of facts' (Jorgenson, 1989: p. 107) by looking for structures in the data. This is often achieved by comparing different text passages in order to find commonalities or differences between them (in other words, by conducting a 'synopsis', see also ¶2.4). The necessary prerequisite for this is to retrieve all text segments belonging to the same code. The actual analysis can then be conducted through a fine-grained hermeneutic analysis of the text segments in order to find those aspects (or 'dimensions') which can serve as criteria for a comparison. The result of this 'dimensionalization' (Strauss and Corbin, 1990: pp. 69ff.) is a new typology which helps to sociologically describe (or, in some cases, also to 'explain') interesting facts in the empirical domain under study. Another example from a qualitative research project conducted at the Bremen Life Course Research Centre may clarify how this process works in general (Braemer, 1994; Krueger, 1996). These researchers conducted lengthy open interviews with 60- to 70-year-old men. The main topic was the division of different kinds of labour (work outside the home, housework) between men and women. In addition, many normative aspects of marriage and family were addressed. How did the interviewees evaluate the behaviour of their own children with respect to cohabitation and marriage? To answer this question the researchers coded those text passages where the topic 'cohabitation among the younger generation' was raised. Take as an example these three text passages:

I mean, that was ... how we found ourselves, how we lived together after that, the marriage and how we lived together when we married, somehow, I liked it more. As it is today, I mean, I mean that this is not good today. It is not ideal, how they are together or not ... I don't know, that's nothing for me. (Case 60)

Well, I mean, if one moves together in one apartment, (...) one should marry, or be at least engaged in the beginning. That has perhaps something to do with morality, since we were educated that way. Morals have become a little bit loose today.... (Case 98)

Well, I like it that one does obviously not live, so to speak, under that strong pressure (to get married) today, and the children can do it today in different ways... But there is also some regret, that it is a little bit too loose today.... (Case 46)

5.6
All three cases regret the decreasing morality, but there are also differences. Case 60 and Case 98 both show general disapproval of the kind of cohabitation they think the younger generation prefers. Case 46 is more or less ambivalent. Furthermore, many of the interviewees have in common that they express some uncertainty, and sometimes a defensive position, concerning their moral values (often using phrases like 'perhaps, I don't know...'). Using a greater number of text passages from different interviews, a typology of parents' attitudes towards their children's style of cohabitation could be constructed, comprising the main categories 'regret of decreasing morality' and 'ambivalence', whereby the first category can be further differentiated into the sub-categories 'unambiguously traditional' and 'morally uncertain'.

5.7
The process of concept formation and typology building with the help of qualitative data material outlined above comprises three steps. In contrast to quantitative analysis techniques (like 'logistic regression' or 'cluster analysis'), none of these steps can be conducted by an algorithm alone. In other words, at each step the role of the computer remains restricted to that of an intelligent archiving ('code-and-retrieve') system; the analysis itself is always done by a human interpreter.

5.8
The first step is the structuring of the material with the help of common-sense concepts or abstract theoretical concepts. Thereby, the code scheme can be developed before the coding takes place ('axial coding', 'selective coding') or while the material is being coded ('open coding', whereby 'in vivo' codes may be used).

5.9
Coding is the necessary prerequisite for a systematic comparison of text passages: text segments are retrieved and analyzed in order to discover 'dimensions' which can be used as a basis for comparing different cases.

5.10
It is this comparison which becomes the basis of the construction of concepts, types and categories that form the building blocks of an emerging theory.

Conclusion

6.1
In recent discussions about software use in qualitative research the danger of a 'Frankenstein's monster' methodology, which alienates researchers from their data or which leads to a 'new orthodoxy' in qualitative research, has often been over-emphasized. The theoretical and methodological concepts of developers and users of computer software for textual data management are much more diverse and heterogeneous than is often assumed. Frequent references to the methodology of Grounded Theory in their methodological writings may be due to the fact that (1) developers often look for a methodological underpinning for rather mundane techniques of data management and draw on Grounded Theory as an established 'brand name' in qualitative research, and that (2) proponents of the Grounded Theory approach belong to those very few authors who try to describe in detail many of the folklore techniques widely applied in different qualitative approaches. The indexing (addressed within the Grounded Theory approach with the somewhat misleading term 'coding') and comparison of text passages is one such folklore technique, used for centuries in different hermeneutic sciences. This technique is applicable in various methodological contexts where different text passages that relate to a similar topic are compared. Consequently, indexing and comparing text segments ('coding' and 'retrieval' with the help of a computer) can be and has been applied not only in projects with a Grounded Theory background, but also by researchers who employ methods of discourse analysis or critical ethnography.

6.2
However, if newly developed complex coding and retrieval techniques are applied without taking the necessary methodological prerequisites into consideration, software for the management of textual data can indeed exert a harmful influence on the qualitative research process. This danger arises mainly with recently proposed methodological strategies for qualitative theory building and hypothesis examination which draw on the methodology and rhetoric of classical hypothesis testing. By seeking to 'test hypotheses' without having observed the necessary prerequisites, that is by applying strict rules to vague and 'fuzzy' codes, one can easily produce artefacts. The methodological confusion emerging from several concepts of computer-aided 'qualitative theory building' or 'qualitative hypothesis examination' results at least partly from various misinterpretations of the role of theories and hypotheses in the qualitative research process. On the one hand, there is the inductivist position which assumes that in qualitative research concepts and theories simply 'emerge' from the data material if only the researcher approaches the empirical domain without any theoretical preconceptions whatsoever. However, 'an open mind is not an empty head' (Dey, 1995). Qualitative researchers draw on theoretical concepts and develop hypotheses before and during the analysis process. On the other hand, some qualitative methodologists use the concepts of theory and hypothesis as they are used within the hypothetico-deductive approach. Theories and hypotheses applied at the beginning of the qualitative research process are often not precisely formulated propositions about well-defined empirical events which can be empirically tested in order to 'verify' or 'falsify' them. Rather, they are (sometimes very vague) assumptions and conjectures about possible relations between certain domains. To examine these hypotheses means to return to the material in order to explore a possible relationship by a thorough analysis of textual data. This interpretative analysis of text (segments) may then form the basis for the clarification and modification of the researchers' initial (general or vague) assumptions. The notion of hypothesis testing would be rather misleading here, if one understands it as an attempt to falsify an empirically contentful statement.

6.3
Many of the fears of the computer taking over analysis, as well as fears of a new methodological orthodoxy emerging from computer use, do not so much reflect the basic capabilities of software for the management of qualitative data, which helps the researcher with necessary but analytically mundane tasks of ordering the data material; rather, they are reactions to some misunderstandings in current methodological debates, and as such they may become a starting point for clarifying methodological concepts. These debates, and also current writings about computer use, often fail to 'rationally reconstruct' actual processes of data management and data analysis. Instead, concepts from other methodological traditions like 'hypothesis testing' are imported, and the role of the computer in the analytic process is sometimes overemphasized. Thereby, notions which the author of this paper has himself used in current debates, like 'third generation' computer programs or software for qualitative 'theory building', may add to the wrong idea of qualitative computer software as doing 'qualitative analysis' instead of clarifying its basic, usually very straightforward functions. Software programs like THE ETHNOGRAPH, ATLAS/ti or NUDIST are tools to mechanize clerical tasks of ordering and archiving texts that have been used in the hermeneutic sciences for hundreds of years. To be clear about this issue we should address these programs as software for 'data administration and archiving' rather than as tools for 'data analysis'. And we should consider whether the growing economic competition between software developers may go against our need for a realistic picture of the possibilities of methodological techniques, since it fuels the motivation to present straightforward techniques of data management as groundbreaking methodological innovations. Popular computer myths in the tradition of the 'Frankenstein's monster' archetype may be responsible for the fact that, for many researchers, the idea of software capable of 'theory building' does not sound as absurd as the idea of an index card system performing theory building.

References

AGAR, M. (1991) 'The Right Brain Strikes Back' in R. Lee and N. Fielding (editors) Using Computers in Qualitative Research. London: Sage.

BECKER, H. and GEER, B. (1960) 'Participant Observation: The Analysis of Qualitative Field Data' in R. N. Adams and J. J. Preiss (editors) Human Organization Research: Field Relations and Techniques. Homewood, IL: Dorsey Press.

BLUMER, H. (1969) Symbolic Interactionism: Perspective and Method. Englewood Cliffs: Prentice Hall.

BRAEMER, G. (1994) Wandel im Selbstbild des Familienernährers? Reflexionen über vierzig Jahre Ehe-, Erwerbs- und Familienleben, Working Paper No. 29, Sonderforschungsbereich 186, Bremen.

CHARMAZ, K. (1983) 'The Grounded Theory Method: An Explication and Interpretation' in R. M. Emerson (editor) Contemporary Field Research: A Collection of Writings. Prospect Heights: Waveland Press.

COFFEY, A., HOLBROOK, B. and ATKINSON, P. (1996) 'Qualitative Data Analysis: Technologies and Representations', Sociological Research Online, vol. 1, no. 1, <http://www.socresonline.org.uk/socresonline/1/1/4.html>.

CONRAD, P. and REINHARZ, S. (1984) 'Qualitative Computing: Approaches and Issues', Qualitative Sociology, vol. 7, pp. 34 - 60.

CRESSEY, D. (1971) Other People's Money: A Study in the Social Psychology of Embezzlement. Belmont: Wadsworth (1st edn, 1953).

DENZIN, N. and LINCOLN, Y. (editors) (1994) Handbook of Qualitative Research. Thousand Oaks: Sage.

DEY, I. (1995) 'Reducing Fragmentation in Qualitative Research' in U. Kelle (editor) Computer-Aided Qualitative Data Analysis: Theory, Methods and Practice. London: Sage.

DOTZLER, H. (1995) 'Using Software for Interpretive Text Analysis: Results from Interviews with Research Teams' paper presented at SoftStat '95: 8th Conference on the Scientific Use of Statistical Software, March 26 - 30, Heidelberg, Germany.

DREYFUS, H. L. (1972) What Computers Can't Do: A Critique of Artificial Reason. New York: Harper.

DREYFUS, S. E. and DREYFUS, H. L. (1986) Mind Over Machine: The Power of Human Intuition and Expertise in the Era of the Computer. Oxford: Blackwell.

EISNER, E. (1981) 'On the Differences between Scientific and Artistic Approaches to Qualitative Research', Educational Researcher, vol. 10, pp. 5 - 9.

GIDDENS, A. (1976) New Rules of Sociological Method: A Positive Critique of Interpretive Sociologies. London: Hutchinson.

GLASER, B. and STRAUSS, A. (1967) The Discovery of Grounded Theory: Strategies for Qualitative Research. New York: Aldine de Gruyter.

GLASER, B. (1978) Theoretical Sensitivity: Advances in the Methodology of Grounded Theory. Mill Valley: The Sociology Press.

GLASER, B. (1992) Emergence vs. Forcing: Basics of Grounded Theory Analysis. Mill Valley: The Sociology Press.

HANSON, N. R. (1965) Patterns of Discovery: An Inquiry into the Conceptual Foundations of Science. Cambridge: Cambridge University Press.

HANSON, N. R. (1971) 'The Idea of a Logic of Discovery' in S. Toulmin (editor) What I do not Believe and Other Essays. Dordrecht: Reidel.

HEINZ, W. (1996) 'Transitions in Youth in a Cross Cultural Perspective: School-to-work in Germany' in B. Galaway and J. Hudson (editors) Youth in Transition to Adulthood: Research and Policy Implications. Toronto: Thompson Educational Publication.

HESSE-BIBER, S. and DUPUIS, P. (1995) 'Hypothesis Testing in Computer-Aided Qualitative Data Analysis' in U. Kelle (editor) Computer-Aided Qualitative Data Analysis: Theory, Methods and Practice. London: Sage.

HUBER, G. (1995) 'Qualitative Hypothesis Examination and Theory Building' in U. Kelle (editor) Computer-Aided Qualitative Data Analysis: Theory, Methods and Practice. London: Sage.

HUBER, G. and MANDL, H. (editors) (1982) Verbale Daten. Weinheim: Beltz.

JORGENSON, D. L. (1989) Participant Observation: A Methodology for Human Studies. Newbury Park, CA: Sage.

KELLE, U. (editor) (1995) Computer-Aided Qualitative Data Analysis: Theory, Methods and Practice. London: Sage.

KELLE, U. (1996) 'Die Bedeutung Theoretischen Vorwissens in der Methodologie der Grounded Theory' in R. Strobl and A. Böttger (editors) Wahre Geschichten? Zu Theorie und Praxis qualitativer Interviews. Baden-Baden: Nomos.

KELLE, U. and LAURIE, H. (1995) 'Computer Use in Qualitative Research and Issues of Validity' in U. Kelle (editor) Computer-Aided Qualitative Data Analysis: Theory, Methods and Practice. London: Sage.

KRUEGER, H. (1996) 'Normative Interpretations of Biographical Sequences' in A. Weymann (editor) Society and Biography: Interrelations between Social Structure, Institutions and the Life Course. Weinheim: Deutscher Studien Verlag.

KUCKARTZ, U. (1995) 'Case-Oriented Quantification' in U. Kelle (editor) Computer-Aided Qualitative Data Analysis: Theory, Methods and Practice. London: Sage.

LAKATOS, I. (1982) The Methodology of Scientific Research Programmes. Philosophical Papers: Volume 1. Cambridge: Cambridge University Press.

LEE, R. and FIELDING, N. (editors) (1991) Using Computers in Qualitative Research. London: Sage.

LEE, R. M. and FIELDING, N. G. (1995) 'Users' Experiences of Qualitative Data Analysis Software' in U. Kelle (editor) Computer-Aided Qualitative Data Analysis: Theory, Methods and Practice. London: Sage.

LEE, R. and FIELDING, N. (1996a) 'Computer-Assisted Qualitative Data Analysis: The User's Perspective' in F. Faulbaum and W. Bandilla (editors) SOFTSTAT '95: Advances in Statistical Software 5. Stuttgart: Lucius.

LEE, R. and FIELDING, N. (1996b) 'Qualitative Data Analysis: Representations of a Technology: A Comment on Coffey, Holbrook and Atkinson', Sociological Research Online, vol. 1, no. 4, <http://www.socresonline.org.uk/socresonline/1/4/lf.html>.

LINDESMITH, A. (1968) Addiction and Opiates. Chicago: Aldine (1st edn, 1947).

LONKILA, M. (1995) 'Grounded Theory as an Emerging Paradigm for Computer-Assisted Qualitative Data Analysis' in U. Kelle (editor) Computer-Aided Qualitative Data Analysis: Theory, Methods and Practice. London: Sage.

MANGABEIRA, W. (1995) 'Computer Assistance, Qualitative Analysis and Model Building' in R. Lee (editor) Information Technology for the Social Scientist. London: UCL Press.

MERTON, R. K. (1957) Social Theory and Social Structure, 2nd edition. Glencoe, IL: Free Press.

PLATT, J. (1996) 'Has Funding Made a Difference to Research Methods?' Sociological Research Online, vol. 1, no. 1, <http://www.socresonline.org.uk/socresonline/1/1/5.html>.

RICHARDS, L. and RICHARDS, T. (1991) 'The Transformation of Qualitative Method: Computational Paradigms and Research Processes' in R. Lee and N. Fielding (editors) Using Computers in Qualitative Research. London: Sage.

RICHARDS, T. and RICHARDS, L. (1995) 'Using Computers in Qualitative Research' in N. Denzin and Y. Lincoln (editors) Handbook of Qualitative Research. Thousand Oaks: Sage.

SEIDEL, J. (1991) 'Method and Madness in the Application of Computer Technology to Qualitative Data Analysis' in R. Lee and N. Fielding (editors) Using Computers in Qualitative Research. London: Sage.

SEIDEL, J. and KELLE, U. (1995) 'Different Functions of Coding in the Analysis of Textual Data' in U. Kelle (editor) Computer-Aided Qualitative Data Analysis: Theory, Methods and Practice. London: Sage.

STRAUSS, A. and CORBIN, J. (1990) Basics of Qualitative Research: Grounded Theory Procedures and Techniques. Thousand Oaks: Sage.

TESCH, R. (1990) Qualitative Research: Analysis Types and Software Tools. New York: Falmer Press.

WEITZMAN, E. and MILES, M. (1995) Computer Programs for Qualitative Data Analysis. Thousand Oaks: Sage.

WINOGRAD, T. and FLORES, F. (1986) Understanding Computers and Cognition. Norwood, NJ: Ablex.
