Copyright Sociological Research Online, 1996


Edith de Leeuw and William Nicholls II (1996) 'Technological Innovations in Data Collection: Acceptance, Data Quality and Costs'
Sociological Research Online, vol. 1, no. 4, <>

To cite from articles published in Sociological Research Online, please reference the above information and include paragraph numbers if necessary

Received: 30/9/96      Accepted: 4/11/96      Published: 23/12/96


Whether computer assisted data collection methods should be used for survey data collection is no longer an issue. Most professional research organizations, commercial, government and academic, are adopting these new methods with enthusiasm. Computer assisted telephone interviewing (CATI) is most prevalent, and computer assisted personal interviewing (CAPI) is rapidly gaining in popularity. Also, new forms of electronic reporting of data using computers, telephones and voice recognition technology are emerging. This paper begins with a taxonomy of current computer assisted data collection methods. It then reviews conceptual and theoretical arguments and empirical evidence on such topics as: (1) respondent and interviewer acceptance of new techniques, (2) the effect of computer assisted interviewing on data quality, (3) the consequences for survey costs, and (4) centralized vs. decentralized deployment of CATI.

Acceptance; CADAC; CAPI; CATI; Computer Assisted Data Collection; Costs; Data Quality; DBM; EMS; Interviewer Effect; Response Rate; Telepanel; VRE


Computer assisted data collection methods are increasingly replacing paper-and-pen methods of survey data collection. In Europe and North America, most professional research organizations - academic, governmental, and commercial - now employ these new methods for much if not all of their survey data collection. Computer assisted telephone interviewing (CATI) is most prevalent, and computer assisted personal interviewing (CAPI) is rapidly gaining in popularity. Interesting new forms of computerized data collection, such as automatic speech recognition and surveys through the internet, are also emerging. This raises the question of what influence computer assisted data collection methods have on the quality of the data. In this article we review the empirical literature on this topic, focusing mainly on the three basic forms of data collection in surveys: the face-to-face interview, the telephone interview, and the (mail) questionnaire. For a review of electronic observation techniques such as barcode scanning, or automatic registration of T.V. watching (people- or T.V.-meters), we refer to Saris (1989). For an interesting discussion of the potential of people meters that goes beyond mere registration into full-fledged media research, see Samuels (1994). For a description of automatic speech recognition see Blyth (in press).

We start our review with a taxonomy of different types of computer assisted interviewing and a discussion of data quality. Next, we present a model of the factors that may lead to differences in data quality between computer assisted and traditional interview procedures. Subsequently, we give an overview of the results of empirical research on data quality differences. Finally, we discuss the consequences of our findings for social research.

Taxonomy of Different Forms of Data Collection

Computer assisted methods for survey research are often summarized under the global terms CADAC (Computer Assisted DAta Collection), CASIC (Computer Assisted Survey Information Collection), and CAI (Computer Assisted Interviewing); in this context the traditional paper-and-pen methods are often denoted by PAPI (Paper-and-pen Interviewing). For a comparative review, see Nicholls et al. (in press) and Weeks (1992).

The early developments in Europe differed from those in North America. In the USA telephone interviewing started earlier and is more prominent than in Europe; as a consequence, Computer Assisted Telephone Interviewing started in American market research as early as the seventies. Most computer assisted 'mail' surveys, including the use of the internet for data collection, are also more prominent in the USA. However, a special form of computer assisted panel research was initiated in Holland as early as the eighties. In Europe, where there is more emphasis on face-to-face interviewing, Computer Assisted Personal Interviewing started, using the first truly 'portable' computers. Statistics Sweden and Statistics Netherlands were among the first developers of CAPI. Reviews that also offer some insight into the use of computer assisted methods in European countries are Hox et al. (1990), Martin & Manners (1995), Porst et al. (1994), and Saris (1989).

Characteristic of all forms of computer assisted interviewing is that questions are read from the computer screen, and that responses are entered directly into the computer, either by an interviewer or by a respondent. An interactive program presents the questions in the proper order, which may differ for different (groups of) respondents. There are three main survey modes in which CADAC technology may be employed.

  1. CATI. Computer Assisted Telephone Interviewing. This is the oldest form of computer assisted interviewing (cf. Nicholls & Groves, 1986). Originally CATI was employed centrally using a minicomputer system. Each interviewer sits at a terminal and asks the questions that appear on the screen; the respondent's answer is then typed into the computer by the interviewer. Supervisors are present for quality control and to assist with specific problems. This is still the most usual CATI setup, with a computer network replacing the minicomputer system. However, the technological change to personal microcomputers also makes it possible to conduct a decentralized CATI survey, for instance from the interviewers' own homes.

  2. CAPI. Computer Assisted Personal Interviewing. In CAPI interviewers visit respondents with a portable computer (generally a notebook) and conduct a face-to-face interview using the computer. After the interview the data are sent to a central computer, either electronically by modem or by sending a data disk by mail. Similarly, interviewer instruction and new sampled addresses can be sent to the interviewer in this way (cf. Baker, 1992; Martin & Manners, 1995).

  3. CASI. Computer Assisted Self Interviewing. Characteristic of CASI is that the respondents themselves read the questions on the screen and enter the answers. There is no interviewer; the interviewing program guides the respondent through the questionnaire. In the US, the term CASI is gaining broad acceptance as the descriptive term for self-interviewing introduced by an interviewer. Self-administered computerized interviewing without an interviewer is therefore often referred to as CSAQ, computerized self-administered questionnaire.

CASI can appear as part of a CAPI session where the interviewer hands over the computer to the respondent for a short period, but remains available for instructions and assistance. Scherpenzeel (1995) uses the acronym CASI-IP for this case, with the added IP for interviewer present. This form is equivalent to the procedure used in traditional PAPI face-to-face interviews where an interviewer might give the respondent a paper questionnaire containing sensitive questions. A new form is Audio-CASI or A-CASI, where the respondent listens to the questions read by a digitized voice or a tape, sees the questions on the screen, and responds with keys on the keyboard. By contrast, traditional CASI is now sometimes called V-CASI (visual only).

Two different computer assisted equivalents of the mail survey are the Disk By Mail (DBM) and the Electronic Mail Survey (EMS). In DBM a disk containing the interviewing program is sent to the respondent, who runs the program on his or her own computer and then returns the disk with the responses (cf. Higgins et al., 1987). Obviously, at present this works only with special populations who have access to a computer. These may be private persons, but also corporations. Within corporations especially, this method is used to collect inventory data for recurring stock-taking (cf. Weeks, 1992). DBM has also been used in surveys of teachers and pupils, using the school's computer (cf. De Leeuw, 1989; Van Hattum & De Leeuw, 1996), and of experienced PC-users (Jacobs, 1993). In EMS the survey is sent by electronic mail through existing computer networks, electronic mailing systems, and bulletin boards. Users of such systems receive a request to participate in a survey, and if they comply, they either are asked a number of questions by an interviewing program, or they receive an electronic form to fill in at a later stage. This is at present only possible with special populations, but the limited experience so far is positive (cf. Kiesler & Sproull, 1986; see also Fisher et al., 1995 for some examples).

A related form of self-administered interviewing without an interviewer present is the Tele-interview (Saris, 1991). This is a form of computer assisted panel research (CAPAR) in which respondents fill in an electronic questionnaire about once a week. For this, a large number of selected households receive a microcomputer and a modem. At regular intervals, the modem automatically queries a remote computer and receives new questionnaires for selected members of the household. After the questionnaires have been answered using the interviewing program, the data are sent back to the remote computer. For questions and technical problems a help desk is available through a toll-free number. The tele-interview has the advantage that it is not confined to special populations with access to computers. However, it shares all the methodological problems of traditional panel research (see Kasprzyk et al., 1989), although experience has shown that the bonus of having a free home computer leads to very low panel loss. A variation on the tele-interview is an electronic diary for time budget and consumer behaviour research (Kalfs, 1993).

Both Weeks (1992) and Saris (1991) mention two very specific applications of CASI: Touchtone Data Entry (TDE) and Voice Recognition (VR) or Automatic Speech Recognition (ASR). In TDE a respondent is called by a computer, the questions are asked by a computer voice, and the responses are given by keying in the appropriate number. In VR the respondent answers 'yes' or 'no' verbally. Automatic Speech Recognition has far more potential; in ASR a large vocabulary of meaningful words, such as holiday destinations, can be understood and acted upon by the interview system (Blyth & Piper, 1994; Blyth, in press). For a review and comparison with interviewer techniques, see Havice & Banks (1991).

Table 1 presents a systematic overview of the various computer assisted interviewing methods.

Table 1: Taxonomy of Computer Assisted Interviewing methods

General name: CADAC (Computer Assisted Data Collection), CASIC (Computer Assisted Survey Information Collection), CAI (Computer Assisted Interviewing)

Specific method: Computer assisted form
Face-to-face interview: CAPI (Computer Assisted Personal Interviewing)
Telephone interview: CATI (Computer Assisted Telephone Interviewing)
Self-administered form: CASI (Computer Assisted Self Interviewing); CSAQ (Computerized Self-Administered Questionnaire)
Interviewer present: CASI or CASI-IP (computer assisted self interviewing with interviewer present); CASI-V (question text on screen: visual); CASI-A (text on screen and on audio)
Mail survey: DBM (Disk by Mail) and EMS (Electronic Mail Survey)
Panel research: CAPAR (Computer Assisted Panel Research), Tele-interview, (electronic diaries)
Various (no interviewer): TDE (Touchtone Data Entry), VR (Voice Recognition), ASR (Automatic Speech Recognition)

A Model for the Influence of CADAC on Data Quality

Computer assisted interviewing has rapidly become popular partly because of the expectation that it would lead to better data quality than traditional methods. As early as 1972, Nelson et al. pointed out that automatic routing to the next question and range checks could be of great help in enhancing data quality. A priori there are three groups of factors that may affect data quality: (1) the technological possibilities of CADAC programs, (2) the visible presence of a computer, and (3) the effect of CADAC on the interviewing situation. The consequences of CADAC for costs will also be addressed.

Technological possibilities

Compared to an optimally implemented paper-and-pen interview, the optimally implemented computer assisted interview has five apparent advantages.

  1. There are no routing errors. If a computer system is correctly programmed, routing errors, that is, errors in the question order, skipping and branching, do not occur. Based on previously given answers the program decides what the next question must be, and so both interviewer and respondent are guided through the questionnaire. Missing data because of routing and skipping errors do not occur. Also, questions that do not apply to a specific respondent are automatically skipped. As a result, automatic routing reduces the number of data errors.

  2. Data can be checked immediately. An optimally implemented CADAC program will perform some internal validity checks. The simplest checks are range checks, which compare the given response to the range of possible responses. Thus the program will refuse the response '8' to a seven-category Likert scale and ask for a corrected response. Range checks are straightforward when the question has only a limited number of response categories. More complicated checks analyze the internal consistency of several responses. Consistency checks are more difficult to implement; one must anticipate all valid responses to questions, list possible inconsistencies, and devise a strategy for the program to cope with them. In PAPI, internal validity checks have to be conducted in the data cleaning stage that usually follows the data collection stage. However, when errors are detected at that point, they can only be recoded to a missing data code because it is no longer possible to ask the respondents what they really meant. In a CADAC session there is an opportunity to correct range and consistency errors, and therefore CADAC should lead to fewer data entry errors and less missing data.

  3. The computer offers new possibilities for formulating questions. One example is the possibility of randomizing the order of questions in a scale, giving each respondent a unique question order. This eliminates systematic question order effects. Response categories can also be randomized, which avoids question format effects (e.g., recency effects). The computer can also assist in the interactive field-coding of open questions using elaborate coding schemes, which would be unmanageable without a computer. Finally, the computer can be used to employ question formats, such as drawing line lengths as in psychophysical scaling, which are awkward to use in PAPI.

  4. There is no separate data entry phase. This means that the first tabled results can be available soon after the data collection phase. On the other hand, construction and programming of the questionnaire take considerable time in CADAC. Thus, a well-planned CADAC survey has a real advantage when the results must be quickly available (as in election forecasts).

  5. The knowledge that the system accurately records information about the interview process itself (e.g. time and duration of the interview, the interval between interviews, and the order in which they are carried out) discourages interviewers from 'cheating'. Computer assisted interviewing provides a research organization with greater interviewer control and offers protection against unwanted interviewer behaviour.
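The automatic routing and range checking described in points 1 and 2 above can be sketched as a small interviewing loop. The question structure below is entirely hypothetical (it is not taken from any of the CADAC systems discussed); it only illustrates the mechanism by which the program, rather than the interviewer, decides the next question and refuses out-of-range answers.

```python
# Minimal sketch of a CADAC-style questionnaire: each question carries its
# valid response range and a routing rule deciding the next question.
QUESTIONS = {
    "employed": {
        "text": "Are you currently employed? (1=yes, 2=no)",
        "valid": {1, 2},
        "next": lambda a: "hours" if a == 1 else "seeking",  # automatic routing
    },
    "hours": {
        "text": "How many hours per week do you work? (0-80)",
        "valid": range(0, 81),
        "next": lambda a: None,  # end of this branch
    },
    "seeking": {
        "text": "Are you looking for work? (1=yes, 2=no)",
        "valid": {1, 2},
        "next": lambda a: None,
    },
}

def run_interview(answer_source):
    """Walk the questionnaire; a range check refuses out-of-range input.

    answer_source(question_id, question_text) supplies each answer, so the
    same loop serves an interviewer at a keyboard or a scripted test.
    """
    answers = {}
    qid = "employed"
    while qid is not None:
        q = QUESTIONS[qid]
        answer = answer_source(qid, q["text"])
        while answer not in q["valid"]:              # range check
            answer = answer_source(qid, "Invalid - " + q["text"])
        answers[qid] = answer
        qid = q["next"](answer)                      # routing to next question
    return answers
```

Because routing is computed from the recorded answers, questions that do not apply are never shown, and skip-pattern errors of the kind described in point 1 cannot occur.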

Time and Money: CADAC and its Consequences for Costs

Related to these technological or operational factors are issues of time and money. Going from paper-and-pen to computer assisted interviewing requires initial investments, not only in equipment, but also in time. One has to invest in hardware, in software, and in acquiring the hardware- and software-related knowledge and skills.

As mentioned above, basic interviewer training now needs to include training in handling a computer and using the interview software. But in contrast to this extended general interviewer training, training for actual surveys is less costly. Many topics (skipping, branching, selection rules) need not be taught because the interview software now handles them (see Porst et al., 1994). Executives, research directors, and field managers also have to learn and come to appreciate computer assisted interviewing.

After the initial investments are made, a CADAC survey may be cheaper than traditional data collection, but it all depends on the study, its complexity, its size, and its questionnaire. To evaluate the cost efficiency of CADAC, a distinction should be made between front-end processing and back-end processing. In general, a well-designed computer assisted data collection requires investing more time, effort, and money at the beginning of the research (front-end processing), time that is saved at the end stage (back-end processing). In particular, the design and implementation of range and consistency checks (front-end) reduces the time needed to prepare the data for analysis (back-end); and no questionnaires have to be printed and coded.

In other words: developing, implementing and testing the questionnaire is more expensive, but no data-entry is needed and data-editing and data-cleaning costs less. In general, there is no difference in the total time needed for the research. But, once the interviewing has started, results are available much faster than in traditional paper-and-pen interviewing. Samuels (1994) mentions a reduction of delivery time of 50% for the results of an omnibus survey. When timeliness and a fast release of results are important for a client, this is an important advantage of CADAC over paper-and-pen methods.

The above factors are all concerned with operational data quality. The following factors concern methodological data quality, defined as the absence of nonsampling survey bias and error (cf. Groves, 1989).

Visible Presence of the Computer

The visible presence of a computer may affect the data quality, apart from the technical aspects of using a computer. As with most technological innovations these effects are temporary. After some time everybody gets used to the new machine, and its influence on the situation is small. Now we are clearly in a transition period; the computer is no longer an unimaginable technological wonder, but it is also not yet a common household item.

Compared to the traditional PAPI methods, the visible presence of a computer could lead to four effects on the way the respondents or the interviewers perceive the interview situation.

  1. Less privacy. When one is totally unfamiliar with computers there could be a 'big brother' effect, leading to more refusals and socially desirable answers to sensitive questions. When researchers first started to use CAPI, this was a much feared effect.

  2. More privacy. Using a computer could also lead to the expectancy of greater privacy by the respondents; responses are typed directly into the computer and cannot be read by anyone who happens to find the questionnaire. In the western world, where computers are widespread and familiar, this reaction is more likely than the 'big brother' reaction.

  3. Trained interviewers may feel more self-confident using a computer, and behave more professionally. This could lead to greater respondent confidence in the interviewing procedure. Social exchange theory, as applied to the survey process (Dillman, 1978), predicts that this should lead to more willingness to comply with the interviewers' requests.

  4. In panel research the availability of a free home computer acts as a reinforcement for the respondents to continue to participate faithfully. Disk by mail (DBM) and electronic mail surveys (EMS) both have at present a strong novelty effect: the survey request is highly visible, and not likely to be incorrectly perceived as junk mail. This should lead to a higher willingness to participate.

Effect of the Computer on the Interview Situation

The effect of CADAC on the interview process depends strongly on the amount of training and/or experience the interviewers have with this method of data collection.

Inexperienced interviewers may direct much of their attention to keeping the computer running and correctly typing in the answers. If interviewers cannot touch-type, typing in long answers may lead to less eye contact between interviewers and respondents, causing the interviewers to miss nonverbal reactions of the respondents. If the computer is located between the interviewer and the respondent, even the physical distance may be greater than in PAPI. The methodological survey literature stresses the importance of good (nonverbal) communication and rapport between interviewers and respondents. If using the computer weakens the relation between interviewer and respondent, the interview will not be conducted optimally, and in consequence the data quality may suffer.

On the other hand, an experienced interviewer can rely on the computer for routings and complex question sequences, and therefore pay more attention to the respondent and the social processes involved in interviewing. Sometimes, for instance in asking sensitive questions, less eye-contact is an advantage (cf. Argyle & Dean, 1965); experienced interviewers can use the presence of a computer to their advantage by directing their attention to the screen when asking sensitive questions.

The conclusion is that a CADAC survey needs interviewers who are well trained and experienced in computer assisted data collection techniques. This means that, in addition to a thorough basic interviewer training, additional training in computer use and computer assisted interviewing is needed (see Woijcik et al., 1992, for a training program for computer assisted interviewing). Given well trained and experienced interviewers, the altered interview situation is likely to have more advantages than disadvantages, especially with sensitive questions.

In the sections below, we review the results of empirical comparative research on the effects of computer assisted interviewing versus paper-and-pen methods on data quality, in face-to-face, telephone, and self-administered interviews. Since acceptance of the computer assisted methods is an important criterion by itself, we also include research on the attitudes and opinions of interviewers and respondents. Where possible, data on cost comparisons have been added.

Data Quality in CAPI

Effect on the Respondent

Although the first users of CAPI feared a negative effect on the response rate, this did not occur even in the first applications of the method in Sweden and the Netherlands (Van Bastelaer, Kerssemakers & Sikkel, 1987: p. 39; Van Bastelaer et al., 1988). Later studies confirm that CAPI and paper-and-pen methods yield comparable response rates in studies in the U.S.A. (Bradburn et al., 1992; Sperry et al., 1991; Thornberry et al., 1991), England (Martin et al., 1994), Sweden (Statistics Sweden, 1989) and Germany (Riede & Dorn, 1991). These studies also report very low percentages of spontaneous negative reactions by respondents (1 - 4%). Most reactions are neutral or positive.

When respondents are explicitly asked for a reaction to using the computer, they generally react positively and are found to prefer it (cf. Woijcik & Baker, 1992). Baker (1990, 1992) reports that most respondents find CAPI interesting and amusing, and attribute a greater degree of professionalism to CAPI. The social interaction with the interviewer is generally described as comfortable and relaxed. Only a small percentage (5%) report negative feelings. When explicitly asked about data privacy, 47% have more trust in the privacy of computer collected data, 5% have more trust in traditionally collected data, and 48% see no difference.

Beckenbach (1992, 1995) conducted a small-scale, well-controlled study comparing CAPI, CSAQ, and a paper-and-pen face-to-face interview. After the interview, both interviewers and respondents filled in a questionnaire with questions about the interview itself. Neither interviewers nor respondents report problems with eye contact or social interaction. In the computer assisted methods (both CAPI and CSAQ) respondents were more positive about data privacy, and judged answering sensitive questions as less unpleasant.

Effect on the Interviewer

Interviewers are in general markedly positive about computer assisted interviewing. They appreciate the support that a good CAPI system offers when complex questionnaires are employed (Riede & Dorn, 1991; Edwards et al., 1993), they like working with the computer (Martin et al., 1994), and derive a feeling of professionalism from it (Edwards et al., 1993). Riede and Dorn (1991) point out that the one important complaint by interviewers concerns the difficulty of grasping the overall structure of the questionnaire. CADAC questionnaires are typically screen-oriented, and it is not always possible to backtrack to earlier sections of the questionnaire for corrections or additions to earlier answers. Advanced CADAC programs have this flexibility, but they still have more constraints than paper-and-pencil methods (Weeks, 1992).

The studies in the previous paragraph all employed well-trained and computer-experienced interviewers. This is important, because Van Bastelaer et al. (1987) found clear differences between interviewers with and without experience in computer assisted interviewing. They report that in the first week of data collection the percentage of interviewers preferring CAPI was 52%, while in the third week this percentage had increased to 71%. When starting CAPI for the first time, or expanding the existing interviewer corps with new and as yet inexperienced interviewers, one should pay extensive attention to the interviewers' needs. Intensive training in using the computer and the specific CADAC program is essential (cf. Bennet & Goodger, 1993; Woijcik et al., 1992). Once trained, most interviewers prefer CAPI to paper-and-pen interviewing (Couper & Burt, 1993; Woijcik & Baker, 1992). With good training, even older interviewers and interviewers without any previous computer experience can enjoy using the computer and conduct good interviews (Edwards et al., 1993).

At first, interviewers may experience problems with open-ended questions. When they are not keyboard literate and lack typing skills, entering a detailed answer to an open-ended question can be slow and laborious. However, as interviewers gain keyboard experience they become fast enough typists to record answers verbatim (Bond, 1991; Denny & Galvin, 1993). A very interesting study was performed by Couper et al. (in press); they analyzed keystroke files from a CAPI survey in order to discover the types of keyboard errors made by interviewers. They concluded that in general interviewers are well able to use the CAPI functions on which they are trained. From a careful analysis of the mistakes made, they offer several suggestions for further improving interviewer performance, among them providing laptops with key templates to prevent function key errors, and a better ergonomic design for laptop computer keyboards.

Besides keyboard layout, other ergonomic aspects have also been investigated. Beckenbach (1992, 1995) reports that 80% of the interviewers have no problems with the screen and 92% have no problems with the keyboard, while 75% report no problems at all. For an interesting discussion of how to improve the question display on the screen and other ergonomic tools, see Edwards et al. (1995). The weight of the computer is sometimes mentioned as a problem (Edwards et al., 1993). In a study of the ergonomic aspects of microcomputers used in computer assisted interviewing, Couper & Groves (1992) also conclude that weight is an important ergonomic factor. Finally, in the comparative study by Edwards et al. (1993), about three in four interviewers report that they found PAPI more tiring!

Effect on Data Quality

The acceptance of computer assisted face-to-face interviewing is high for both respondents and interviewers, and there are no indications that using a computer disturbs the interviewing situation (Beckenbach, 1992). In addition, a well implemented CAPI system prevents many interviewer mistakes. As a result, we may expect that compared to traditional paper-and-pen methods, computer assisted interviewing has a positive effect on data quality.

Empirical studies tend to confirm this expectation. The percentage of missing data is clearly lower in CAPI, mostly because interviewers cannot make routing errors (Sebestik et al., 1988; Olsen, 1992). Bradburn et al. (1992) find in a pilot CAPI study that the amount of missing data caused by respondents ('don't know', 'no answer') also diminishes, but in the main study this is not replicated (Baker & Bradburn, 1992; Olsen, 1992). Other studies also fail to find a difference in respondent-induced missing data (Bemelmans-Spork et al., 1985; Martin et al., 1994).

Little is known about data quality with open questions. Baker (1992) summarizes a study by the French national institute for statistical and economical research (INSEE) that does not find any difference between PAPI and CAPI in this respect.

An early comparative study by Waterton (1984, see also Waterton & Duffy, 1984) reports a positive effect of CAPI with a sensitive question about alcohol consumption; with the CAPI method more alcohol consumption was reported, which suggests that CAPI was less affected by social desirability bias. However, in the CAPI mode the sensitive question was asked by having respondents type their own answers into the computer, unseen by the interviewers (CASI-IP), which makes this part of the interview like a self-administered questionnaire. In the PAPI mode the question was asked by the interviewer and the answer was taken down by the interviewer. Since self-administered questionnaires typically show less social desirability bias than face-to-face interviews (De Leeuw, 1993), the reported difference between PAPI and CAPI in this study may well correspond to a difference between an interview and a self-administered questionnaire, and not to a technology effect.

Studies that compare PAPI personal interviewing directly with CAPI personal interviewing, and therefore isolate the effect of the new technology more purely, do report slightly less social desirability bias with CAPI (Baker & Bradburn, 1992; Bradburn et al., 1992; Martin et al., 1994). However, the differences are very small, generally smaller than the differences typically found in comparisons of face-to-face versus telephone interviews, or of experienced versus inexperienced interviewers (Olsen, 1992).

Effect on Cost Efficiency

There are very limited data on cost comparisons between CAPI and paper-and-pencil personal interviews. Bond (1991) states that even when computers are used frequently in the fieldwork, it will take about a year before the investment starts to pay back. Besides frequency of use, sample size is also a key factor for cost efficiency. Only with large sample sizes are the cost savings in printing, despatch, and data entry and editing (back-end costs) greater than the extra costs of questionnaire design and implementation (front-end costs). For example, a long interview with only closed questions and a sample of 2,000 or more will lead to savings of about 30%, while a shorter questionnaire with a couple of open-ended questions and a sample of around 200 will save only around 5% (Bond, 1991). In these cost calculations the initial investment in equipment and in special training of staff has been excluded.

Two studies systematically assess costs for CAPI; both exclude the initial investment in hardware and software but include the extra fieldwork costs for training and supervision. Sebestik et al. (1988) compared costs in a small-scale CAPI experiment (total sample, CAPI plus PAPI, of 200) and concluded that overall CAPI was more expensive, mostly because of the added costs of training and supervising interviewers. In a larger experiment (around 300 respondents in each condition) Baker and Bradburn (1992) conclude that CAPI was still more expensive (±12%) than PAPI; the cost reduction in entering and cleaning data was not large enough to offset the higher training and supervision costs. Baker (1990) extrapolates these findings and concludes that, when hardware costs are excluded, approximately 1,500 CAPI interviews are needed to reach the break-even point between increased front-end and decreased back-end costs. However, several key cost elements will decline as organizations gain experience with computer assisted interviewing, and hardware costs continue to fall.
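The break-even reasoning above can be sketched as a simple cost model: a fixed extra front-end cost is divided by the per-interview back-end saving. The figures used below are purely illustrative assumptions, not the actual costs reported by Baker (1990) or the other studies; the function name is ours.

```python
# Hypothetical sketch of the front-end vs. back-end cost trade-off
# described above. All figures are illustrative assumptions, not
# the actual costs reported in the studies cited.

def break_even_interviews(extra_front_end_cost, back_end_saving_per_interview):
    """Number of interviews at which CAPI's fixed extra front-end
    costs (authoring, testing, extra training and supervision) are
    offset by its per-interview back-end savings (data entry, editing)."""
    return extra_front_end_cost / back_end_saving_per_interview

# Illustrative figures: $7,500 extra front-end cost and $5 saved
# per interview in data entry and cleaning give a break-even point
# of 1,500 interviews, the order of magnitude Baker reports.
print(break_even_interviews(7500.0, 5.0))  # 1500.0
```

The model also makes clear why the advantage grows with repeated surveys: once the questionnaire is implemented, the front-end cost is spread over every subsequent wave.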

Computer Assisted Telephone Interviewing (CATI)

Effect on the Respondent

In telephone interviewing the respondent will generally not notice whether a computer is used; therefore we may expect little if any difference between traditional telephone interviewing and CATI. Comparative studies confirm this. Groves and Nicholls (1986) conclude in a review that there are no differences in nonresponse, a conclusion also reached in comparative studies by Catlin and Ingram (1988) and Groves and Mathiowetz (1984).

Respondents may occasionally hear keyboard clicks, or be told by the interviewer that a computer is being used. No systematic research has been done on the effects of this knowledge, but the general impression is that it makes no difference to respondents to know that their answers are typed directly into a computer (Catlin & Ingram, 1988; Groves & Nicholls, 1986; Weeks, 1992). This resembles the results of the comparisons of traditional versus computer assisted face-to-face interviewing reviewed above.

Effect on the Interviewer

There is little research on the effect of CATI on interviewers. Groves and Nicholls (1986) report that interviewers generally have a positive attitude toward CATI, but remark that acceptance strongly depends on the speed and reliability of the CATI system employed.4 Weeks (1992) concludes that modern CATI systems are fast and reliable, and that interviewers prefer CATI to paper-and-pen methods. Spaeth (1987), in her survey of survey organizations, also reports that staff members in general (both supervisors and interviewers) preferred CATI over PAPI.

Computer assisted interviewing often leads to greater standardization of the interview, to the extent that interviewers sometimes complain about 'rigidity' (Riede & Dorn, 1991, p 51). In general researchers will appreciate this greater standardization, because it minimizes interviewer bias (Fowler, 1991). Furthermore, both Spaeth (1987) and Berry & O'Rourke (1988) report that survey organizations tend to spend more time training interviewers for CATI than for PAPI, and sometimes also employ more supervisory staff. There is some confirmation of greater standardization of interviewer behaviour in CATI: in a controlled comparative study using the same interviewers for both traditional and computer assisted interviews, Groves and Mathiowetz (1984) found less interviewer variance in CATI than in the paper-and-pen method.

Effect on Data Quality

Although CATI was the first form of computer assisted interviewing to come into general use, there is little research on its influence on data quality. In their review Groves and Nicholls (1986) conclude that CATI leads to less missing data because it prevents routing errors, but this effect is only important with complex questionnaires. For the same reason, post hoc data cleaning finds more errors with traditional paper-and-pen methods than with CATI. They find no difference in respondent-induced missing data from 'don't know' and 'no answer' responses.

More recent research by Catlin and Ingram (1988) confirms these conclusions. Catlin and Ingram paid special attention to the possible effects on open questions; they found no differences in typing errors, codability or length of answer (number of words used). This is similar to results found in CAPI (cf. Baker, 1992).

Effect on Cost Efficiency

Most studies that attempt to weigh the costs and benefits of CATI conclude that the investments pay off only in large-scale or regularly repeated surveys. A rule of thumb is that the break-even point lies at about a thousand interviews. Below that number, cost reduction by itself is not a sufficient argument for using CATI (cf. Weeks, 1992).

Computer Assisted Self Interview (CASI)

Computer assisted self-administered questionnaires are a relatively new development. CASI differs clearly from both CAPI and CATI in the interviewing situation: the computer has taken over the role of the interviewer. Theoretically, this combines the advantages of traditional self-administered questionnaires, such as more openness with sensitive questions, with the ability to use complex question structures.

A disadvantage of CASI is that at present only selected populations can be studied. Comparative research on CASI has also mostly been done on selected populations, which either had access to computers, or received a computer for the duration of the study (cf. Saris, 1991).

Effect on Respondent

Respondents generally like CASI; they find it interesting, easy to use, and amusing (Zandan & Frost, 1989; Witt & Bernstein, 1992). Beckenbach (1992, 1995) reports that more than 80% of the respondents had no problem at all using the computer and the interviewing program, and that few respondents complained about physical problems such as eye-strain.

The generally positive appreciation of CASI also shows in the relatively high response rates of Disk By Mail (DBM) surveys, and in the low panel mortality of the tele-interview. Saris (1989) reports for a large Dutch panel a mean weekly response of 98% among active panel members and a panel mortality of 15% per year. (However, the initial nonresponse for the panel was ±50%, cf. Kalfs, 1993.) DBM response rates vary between 25% and 70%, and it is not unusual to obtain response rates of 40 to 50 percent without using any reminders (Saltzman, 1992). Given that this is a special population interested in the research topic, an ordinary well-conducted mail survey using no reminders may be expected to yield about 35% response (Dillman, 1978; Heberlein & Baumgartner, 1978). Of course, one should realize that DBM is restricted to special populations with access to a personal computer.

Effect on Data Quality

Respondents are generally positive about CASI. We expect that respondents will experience a higher degree of privacy and anonymity, which should lead to more self-disclosure and less social desirability bias. Strong support for this hypothesis is given by Weisband and Kiesler (1996). In a meta-analysis of 39 studies they found a strong, significant effect in favour of computer forms. The effect was stronger for comparisons between CASI and face-to-face interviews, but even when CASI was compared with self-administered paper-and-pen questionnaires, self-disclosure was significantly higher in the computer condition. The reported effect was larger when more sensitive information was asked. Weisband and Kiesler (1996) also report the interesting finding that the effect has diminished over the years, although it has not disappeared! They attribute this to the general public's growing familiarity with computers and their possibilities.

A similar picture emerges in studies of electronic mail questionnaires. Sproull and Kiesler (1991) report on five experiments on decision making in small groups. Using an electronic network for communication led to more open communication, more ideas and more general participation in the discussion, whereas in the face-to-face situation the discussion tended to be dominated by one or two high-status individuals. This may also be the result of differences in social interaction. However, in a direct comparison of a mail questionnaire and an electronic mail health questionnaire, Kiesler and Sproull (1986) also found fewer socially desirable answers in the electronic version. They investigated other aspects of data quality in this study as well: both item nonresponse and the number of errors were lower with CASI. The responses to open questions did not differ until the edit facilities of the CASI program were improved; then CASI led to longer and more personal answers.

The effect of computerization on the quality of the data in self-administered questionnaires has also been a concern in psychological testing. The American Psychological Association's Guidelines for Computer-Based Tests and Interpretations (1986, p 18) explicitly state that '... the equivalence of scores from computerized versions should be established and documented before using norms or cutting scores obtained from conventional tests.' The growing popularity of computerized psychological testing has led to several studies that assess the equivalence of conventional psychological tests and their computerized versions. In general, no differences between computer assisted and paper-and-pencil tests were found in the reliability and validity of the tests (Harrel & Lombardo, 1984; Parks et al., 1985). One study (Canoune & Leyhe, 1985) found that questions involving social pressure (conformity, evaluation) were answered differently in computerized and face-to-face questioning, with the face-to-face version leading to more socially desirable answers and more tension reported by respondents, but other studies (Koson et al., 1970; Rezmovic, 1977) did not find this effect. A meta-analysis of 29 studies comparing conventional and computerized cognitive tests (Mead & Drasgow, 1993) found that power tests (ability tests without restrictive time limits) were highly equivalent (the cross-mode correlation is 0.97), but speed tests (cognitive tests measuring cognitive processing speed) were less equivalent (the cross-mode correlation is 0.72). Mead and Drasgow interpret the mode-effect for speeded tests as an effect of the importance of perceptual and motor skills in responding quickly to such tests.
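Equivalence checks of this kind rest on the cross-mode correlation: the Pearson correlation between the scores the same respondents obtain on the paper-and-pen and the computerized version of a test. A minimal sketch, with invented scores and a function name of our own, looks as follows:

```python
# Minimal sketch of a cross-mode equivalence check: the Pearson
# correlation between scores on the paper-and-pen and computerized
# administrations of the same test. The scores are invented.

def cross_mode_correlation(paper_scores, computer_scores):
    """Pearson correlation between the two administration modes."""
    n = len(paper_scores)
    mx = sum(paper_scores) / n
    my = sum(computer_scores) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(paper_scores, computer_scores))
    vx = sum((a - mx) ** 2 for a in paper_scores)
    vy = sum((b - my) ** 2 for b in computer_scores)
    return cov / (vx * vy) ** 0.5

paper = [12, 15, 9, 20, 17, 11]
computer = [13, 14, 10, 19, 18, 11]
r = cross_mode_correlation(paper, computer)
# Values near 1 (Mead & Drasgow report 0.97 for power tests)
# indicate that the two modes measure the same construct.
print(round(r, 2))  # 0.97
```

A correlation well below 1, as Mead and Drasgow found for speed tests, signals that mode-specific factors (here, motor skills under time pressure) enter into the scores.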

The general conclusion is that paper-and-pen and computer assisted psychological tests are highly equivalent. This conclusion is corroborated by Helgeson and Ursic (1989), who conclude from protocol analyses that there are no clear differences in the cognitive processes employed in responding to a traditional or a computer assisted psychological test. The differences between the computerized and paper-and-pen tests result from differences in motor skills under time pressure and possibly from less inhibition of respondents on highly sensitive topics.

Effects on Cost Efficiency

There are no systematic cost comparisons for CASI. The literature about disk-by-mail reports that DBM is generally more expensive than a comparable paper-and-pen mail survey. However, the gain in response in a single mailing is thought to be worth the extra costs (e.g., Wilson, 1989).

Summary and Discussion

Computer assisted data collection has high potential regarding the timeliness of results, improved data quality, and cost reduction in large surveys. However, for most of these potential advantages the empirical evidence is still limited. The majority of studies investigate acceptability to respondents and some aspects of data quality. A systematic comparison of costs is difficult (see Groves, 1989), and consequently such comparisons are rare. When the total costs of paper-and-pen and computer assisted survey research are compared, the evidence for cost reduction is not very strong (Baker, 1990; Catlin & Ingram, 1988; Nicholls & Groves, 1986). The investments will only pay off in large-scale or regularly repeated surveys, and this will have consequences for the way surveys are designed and conducted. The increasing emphasis on computer assisted data collection, combined with the large initial investment it requires, will have far-reaching effects on the way research institutes are organized. Smaller institutes and agencies may have to merge or subcontract data collection to large agencies (cf. De Leeuw & Collins, in press).

At present there is still little empirical research into the effect of computerized interviewing on data quality; most studies have instead investigated the acceptance of the new techniques by interviewers and respondents. There is little evidence that CATI or CAPI improves the response rate; conversely, there is also no evidence for a decrease in response rate. In panel research (CAPAR) and in disk-by-mail surveys (DBM) there are advantages in the form of less panel attrition in CAPAR and better response rates in DBM, although all relevant studies on CAPAR and DBM used selected groups of respondents. In general, both interviewers and respondents evaluate computer assisted interviewing positively, and CAI is accepted without problems. Computer assisted interviewing makes it possible to supervise interviewers more closely and to study interviewer behaviour by analyzing computer files. However, comparative research has paid little attention to the effect of computerization on interviewer variance and on other aspects of interviewer behaviour (exceptions are Groves & Mathiowetz, 1984; Couper et al., in press).

Computerized methods of data collection generally have a positive effect on data quality. The improvements in data quality are similar for CAPI and CATI, and for CASI the reported improvements point in the same direction. Overall, there are advantages in using the computer with sensitive questions: respondents are less inhibited and show more self-disclosure, although there is some evidence that this effect may be diminishing over time (cf. Weisband & Kiesler, 1996). Meanwhile, computer forms are being used more and more to investigate sensitive topics or detect risk behaviour (cf. Locke et al., 1992). A promising new area of research is Audio-CASI, in which a respondent reads the questions from the screen while listening to them being read aloud by a digitized voice. This method shows promise in surveys on sensitive topics.

A strong feature of computer assisted data collection is its potential to prevent errors by controlling routing and executing range and consistency checks, but the various forms of computer assisted data collection are not being used to their full potential, and the aspects of data quality that have been studied are too limited. The strength of computer assisted data collection methods is their ability to increase the power of interviewing and thus to answer more complex research questions. We should explore the potential of the computer and use data collection techniques that are impossible or impractical with paper-and-pencil methods. For instance, randomization of question order and of the order of response categories can be implemented to avoid well-known order effects. Also, with the aid of computer assisted interviewing, very complex questions can be asked and continuous response scales can be used in 'standard' interview situations (e.g. computerized diaries, vignettes, magnitude estimation). Measurement techniques that would be almost impossible without a computer include natural grouping, adaptive conjoint analysis and tailored or controlled dependent interviewing.
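The randomization mentioned above is trivial for a CAI system and impractical on paper: each respondent can receive their own ordering of questions and of response categories. A minimal sketch, with invented example questions (the function name is ours, not from any CAI package):

```python
# Sketch of per-respondent randomization to neutralize order effects:
# a CAI system can shuffle both the question order and the order of
# each question's response categories. The questions are invented.
import random

def randomized_questionnaire(questions, seed=None):
    """Return a per-respondent ordering of (question, options) pairs,
    with each question's response categories independently shuffled."""
    rng = random.Random(seed)  # seeded for a reproducible example
    shuffled = [(text, rng.sample(options, k=len(options)))
                for text, options in questions]
    rng.shuffle(shuffled)
    return shuffled

questions = [
    ("How satisfied are you with your neighbourhood?",
     ["Very satisfied", "Satisfied", "Dissatisfied", "Very dissatisfied"]),
    ("How safe do you feel walking alone at night?",
     ["Very safe", "Safe", "Unsafe", "Very unsafe"]),
]
for text, options in randomized_questionnaire(questions, seed=1):
    print(text, options)
```

Averaged over respondents, any advantage a question or category gains from its position then cancels out instead of biasing the marginal distributions.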

Clearly, computer assisted data collection is no panacea for good data quality. It requires one to do almost everything needed for a good paper-and-pen interview, plus extra efforts in computer implementation, in testing the questionnaire, in designing an ergonomic screen layout, and in extra interviewer training. However, this investment is repaid by far less interviewer error and the error-free administration of complex questionnaires. If special efforts are made during implementation, and if the new possibilities computers offer are really used, we have the opportunity to obtain not merely better but clearly superior data. We should therefore use computer assisted data collection to its full potential and invest in the development of new applications. In every survey the available tools do affect the type of questions we can ask, and computer assisted data collection offers us a large and sophisticated methodological toolbox indeed!


1. The views expressed in this paper are attributable to the authors and do not necessarily reflect those of the Census Bureau or Statistics Netherlands. The authors would like to thank Joop Hox and Vijay Verma for their helpful and insightful comments.

2. The DBM survey of Jacobs was a survey of experienced PC-users on the amount of illegally copied software they owned. Thus, DBM was not only practical with this population, but also profitable because it ensured complete privacy.

3. In CAPAR the teleinterview may be alternated by CAPI or CATI, thus leading to a mixed mode design.

4. At the time of their review, CATI systems were running on mainframe or minicomputer systems, and both system speed and reliability were not always optimal for interactive interviewing. Modern microcomputer networks are much better in this respect.


AMERICAN PSYCHOLOGICAL ASSOCIATION (1986) Guidelines for Computer-Based Tests and Interpretations. Washington, DC: APA.

ARGYLE, M. & DEAN, J. (1965) 'Eye-Contact, Distance and Affiliation', Sociometry, vol. 28, pp. 289 - 304.

BAKER, R. P. (1990) 'What We Know About CAPI: Its Advantages and Disadvantages', paper presented at the Annual Meeting of the American Association of Public Opinion Research, Lancaster, Pennsylvania.

BAKER, R. P. (1992) 'New technology in survey research: Computer assisted personal interviewing (CAPI)', Social Science Computer Review, vol. 10, pp. 145 - 157.

BAKER, R. P. & BRADBURN, N. M. (1992) CAPI: Impacts on Data Quality and Survey Costs. Information Technology in Survey Research Discussion Paper 10 (also presented at the 1991 Public Health Conference on Records and Statistics).

BECKENBACH, A. (1992) Befragung mit dem Computer, Methode der Zukunft? Anwendungsmöglichkeiten, Perspektiven und Experimentelle Untersuchungen zum Einsatz des Computers bei Selbstbefragung und Personlich-Mundlichen Interviews. [In German: Computer Assisted Interviewing, A method of the Future? An Experimental Study of the Use of a Computer by Self- Administered Questionnaires and Face-to-face Interviews]. PhD thesis. Universitat Mannheim.

BECKENBACH, A. (1995) 'Computer Assisted Questioning: The New Survey Methods in the Perception of the Respondents', BMS, vol. 48, pp. 82 - 100.

BEMELMANS-SPORK, M., KERSSEMAKERS, F., SIKKEL, D. & Van SINTMAARTENSDIJK, H. (1985) Verslag van het Experiment 'Het Gebruik van Draagbare Computers bij Persoons en Gezinsenquetes'. Centraal Bureau voor de Statistiek. [In Dutch: Report of an Experiment on the Use of Portable Computers in Person and Household Surveys. Netherlands Central Bureau of Statistics].

BENNET, D. & GOODGER, C. (1993) 'Interviewer Training for CAI at OPCS', paper presented at the 1993 Conference of the Study Group on Computers in Survey Analysis. City University, London.

BERRY, S. H. & O'ROURKE, D. (1988) 'Administrative Designs for Centralized Telephone Survey Centers: Implications of the Transition to CATI' in R. M. Groves, P. P. Biemer, L. E. Lyberg, J. T. Massey, W. L. Nicholls II & J. Waksberg (editors) Telephone Survey Methodology. New York: Wiley.

BLYTH, W. & PIPER H. (1994) 'Speech Recognition - A New Dimension in Survey Research', Journal of the Market Research Society.

BLYTH, W. (in press) 'Developing a Speech Recognition Application for Survey Research' in L. E. Lyberg, P. Biemer, M. Collins, C. Dippo, N. Schwarz, & D. Trewin (editors) Survey Measurement and Process Quality. New York: Wiley.

BOND, J. (1991) 'Increasing the Value of Computer Interviewing' in Proceedings of the 1991 ESOMAR Congress.

BRADBURN, N. M., FRANKEL, M. R., BAKER, R. P. & PERGAMIT, M. R. (1992) A Comparison of CAPI with PAPI in the NLS/Y. Chicago: NORC. Information Technology in Survey Research Discussion Paper 9 (also presented at the 1991 AAPOR-Conference, Phoenix, Arizona).

CANOUNE, H. L. & LEYHE, E. W. (1985) 'Human Versus Computer Interviewing', Journal of Personality Assessment, vol. 49, pp. 103 - 106.

CATLIN, G. & INGRAM, S. (1988) 'The Effects of CATI on Costs and Data Quality: A Comparison of CATI and Paper Methods in Centralized Interviewing', in R. M. Groves, P. P. Biemer, L. E. Lyberg, J. T. Massey, W. L. Nicholls II & J. Waksberg (editors) Telephone Survey Methodology. New York: Wiley.

COUPER, M. P., HANSEN, S. E. & SADOVSKY, S. (in press) 'Evaluating Interviewer Use of CAPI Technology' in L. E. Lyberg, P. Biemer, M. Collins, C. Dippo, N. Schwarz, & D. Trewin (editors) Survey Measurement and Process Quality. New York: Wiley.

COUPER, M. P. & GROVES, R. M. (1992) 'Interviewer Reactions to Alternative Hardware for Computer Assisted Personal Interviewing', Journal of Official Statistics, vol. 8, pp. 201 - 210.

COUPER, M. P. & BURT, R. M. (1994) 'Interviewer Attitudes Toward Computer-Assisted Personal Interviewing (CAPI)', Social Science Computer Review, vol. 12, pp. 38 - 54.

De LEEUW, E. D. & COLLINS, M. (in press) 'Data Collection Method and Data Quality: An Overview' in L. E. Lyberg, P. Biemer, M. Collins, C. Dippo, N. Schwarz, & D. Trewin (editors) Survey Measurement and Process Quality. New York: Wiley.

De LEEUW, E. D. (1989) 'Computergeleid Enqueteren; Een Overzicht Van Technologische Vernieuwingen' [In Dutch: Computer Assisted Data Collection; A Review of Technological Changes], Tijdschrift voor Onderwijs Research, no. 4, pp. 201 - 213.

De LEEUW, E. D. (1993) Data Quality in Mail, Telephone and Face-to-Face Surveys. Amsterdam: TT-Publikaties.

DENNY, M. & GALVIN, L. (1993) 'Improved Quality at the Touch of a Button: The Use of Computers for Data Collection' in Proceedings of the Market Research Society Conference, Birmingham, March, 1993.

DILLMAN, D. A. (1978) Mail and Telephone Surveys: The Total Design Method. New York: Wiley.

EDWARDS, B., BITTNER, D., EDWARDS, W. S. & SPERRY, S. (1993) 'CAPI Effects on Interviewers: A Report from Two Major Surveys', paper presented at the U.S. Bureau of the Census Annual Research Conference, Washington D.C.

EDWARDS, B., SPERRY, S. & SCHAEFFER, N. C. (1995) 'CAPI Design Techniques for Improving Data Quality', in Proceedings of the International Conference on Survey Measurement and Process Quality. Alexandria: American Statistical Association.

FISHER, B., RESNICK, D., MARGOLIS, M., BISHOP, G. (1995) 'Survey Research in Cyberspace: Breaking Ground on the Virtual Frontier', in Proceedings of the International Conference on Survey Measurement and Process Quality; contributed papers. Alexandria: American Statistical Association.

FOWLER, F. J. Jr. (1991) 'Reducing Interviewer-Related Error Through Interviewer Training, Supervision and other Means' in P. P. Biemer, R. M. Groves, L. E. Lyberg, N. A. Mathiowetz & S.Sudman (editors) Measurement Errors in Surveys. New York: Wiley.

GROVES, R. M. (1989) Survey Errors and Survey Costs. New York: Wiley.

GROVES, R. M. & MATHIOWETZ, N. A. (1984) 'Computer Assisted Telephone Interviewing: Effects on Interviewers and Respondents', Public Opinion Quarterly, vol. 48, pp. 356 - 369.

GROVES, R. M. & NICHOLLS, W. L. II (1986) 'The Status of Computer-Assisted Telephone Interviewing: Part II-Data Quality Issues', Journal of Official Statistics, no. 2, pp. 117 - 134.

HAVIS, M. J. & BANKS, M. J. (1991) 'Live and Automated Telephone Surveys: A Comparison of Human Interviewers and an Automated Technique', Journal of the Market Research Society, vol. 33, pp. 91 - 102.

HARREL, T. H. & LOMBARDO, T. A. (1984) 'Validation of an Automated 16PF Administration Procedure', Journal of Personality Assessment, vol. 48, pp. 216 - 227.

HEBERLEIN, T. A. & BAUMGARTNER, R. (1978) 'Factors Affecting Response Rates to Mailed Questionnaires: A Quantitative Analysis of the Published Literature', American Sociological Review, vol. 43, pp. 447 - 462.

HELGESON, J. G. & URSIC, M. L. (1989) 'The Decision Process Equivalency of Electronic Versus Pencil-and-Paper Data Collection Methods', Social Science Computer Review, vol. 7, pp. 296 - 310.

HIGGINS, C. A., DIMNIK, T. P. & GREENWOOD, H. P. (1987) 'The DISKQ Survey Method', Journal of the Market Research Society, vol. 29, pp. 437 - 445.

HOX, J. J., De BIE, S. & De LEEUW, E. D. (1990) 'Computer Assisted (Telephone) Interviewing: a Review' in J. Gladitz & K. G. Troitzsch (Editors), Computer Aided Sociological Research. Berlin: Akademie- Verlag.

JACOBS, M. A. (1993) Software Kopen of Kopieren? Een Sociaal-Wetenschappelijk Onderzoek naar PC-Gebruikers [In Dutch: Buying or Copying Software? A Study of PC-Users]. Amsterdam: Thesis-publishers.

KALFS, N. (1993) Hour by Hour: Effects of the Data Collection Mode in Time Use Research. Amsterdam: Nimmo.

KASPRZYK, D., DUNCAN, G. J. & KALTON, G. (1989) Panel Surveys. New York: Wiley.

KIESLER, S. & SPROULL, L. S. (1986) 'Response Effects in Electronic Surveys', Public Opinion Quarterly, no. 50, pp. 402 - 413.

KOSON, D., KITCHEN, C., KOCHEN, M. & STODOLOSKY, D. (1970) 'Psychological Testing by Computer: The Effect of Response Bias', Educational and Psychological Measurement, vol. 30, pp. 803 - 810.

LOCKE, S. E., KOWALOFF, H. B., HOFF, R. G., SAFRAN, C., POPOVSKY, M. A., COTTON, D. J., FINCKELSTEIN, D. M., PAGE, P. L. & SLACK, W. V. (1992) 'Computer-Based Interview for Screening Blood Donor Risk of HIV Infection', Journal of the American Medical Association, vol. 268, pp. 1301 - 1305.

MARTIN, J., O'MUIRCHEARTAIGH, C. & CURTICE, J. (1994) 'The Use of CAPI for Attitude Surveys: An Experimental Comparison with Traditional Methods', Journal of Official Statistics, vol. 9, pp. 641 - 661.

MARTIN, J. & MANNERS, T. (1995) 'Computer Assisted Personal Interviewing in Survey Research' in: R. M. Lee (Editor) Information Technology for the Social Scientist. London: UCL Press.

MEAD, A. D. & DRASGOW, F. (1993) 'Equivalence of Computerized and Paper-and-Pencil Cognitive Ability Tests: A Meta-Analysis', Psychological Bulletin, vol. 114, pp. 449 - 458.

NELSON, R. O., PEYTON, B. L. & BORTNER, B. Z. (1972) Use of an Online Interactive System: Its Effects on Speed, Accuracy, and Cost of Survey Results, paper presented at the 18th ARF Conference, New York City, November 1972.

NICHOLLS, W. L. II & GROVES, R. M. (1986) 'The Status of Computer Assisted Telephone Interviewing: Part 1- Introduction and Impact on Cost and Timeliness of Survey Data', Journal of Official Statistics, no. 2, pp. 93 - 115.

NICHOLLS, W. L. II, BAKER, R. P., & MARTIN, J. (in press) 'The Effect of New Data Collection Technologies on Survey Data Quality' in L. Lyberg, P. Biemer, M. Collins, C. Dippo, N. Schwarz, & D. Trewin (editors) Survey Measurement and Process Quality. New York: Wiley.

OLSEN, R. J. (1992) 'The Effects of Computer Assisted Interviewing on Data Quality', paper presented at the 4th Social Science Methodology Conference, Trento.

PARKS, B. T., MEAD, D. E. & JOHNSON, B. L. (1985) 'Validation of a Computer Administered Marital Adjustment Test', Journal of Marital and Family Therapy, vol. 11, pp. 207 - 210.

PORST, R., SCHNEID, M. & Van BROUWERSHAVEN, J. W. (1994) 'Computer-Assisted Interviewing in Social and Market Research' in I. Borg & P. Mohler (Editors) Trends and Perspectives in Empirical Social Research. Berlin: Walter de Gruyter.

REZMOVIC, V. (1977) 'The Effects of Computerized Experimentation on Response Variance', Behaviour Research Methods & Instrumentation, vol. 9, pp. 144 - 147.

RIEDE, T. & DORN, V. (1991) Zur Einsetzbarkeit von Laptops in Haushaltsbefragungen in der Bundesrepublik Deutschland [In German: Acceptance of Laptops for Household Surveys in Germany]. Wiesbaden: Statistisches Bundesamt. Heft 20 der Schriftenreihe Ausgewählte Arbeitsunterlagen zur Bundesstatistik.

SALTZMAN, A. (1992) Improving Response Rates in Disk-By-Mail Surveys. Sawtooth Software Conference Proceedings. Evanston: Sawtooth Software.

SAMUELS, J. (1994) 'From CAPI to HAPPI: A Scenario for the Future and its Implications for Research'. Proceedings of the 1994 ESOMAR Congress: Applications of New Technologies.

SARIS, W. E. (1989) 'A technological revolution in data collection', Quality & Quantity, vol. 23, pp. 333 - 349.

SARIS, W. E. (1991) Computer-Assisted Interviewing. Newbury Park: Sage.

SCHERPENZEEL, A. C. (1995) A Question of Quality: Evaluating Survey Questions in Multitrait-Multimethod Studies. Leidschendam: Royal PTT, Netherlands.

SEBESTIK, J., ZELON, H., DeWITT, D., O'REILLY, J. M. & McCOWAN, K. (1988) 'Initial Experiences with CAPI', paper presented at the U.S. Bureau of the Census Annual Research Conference, Washington, D.C.

SPAETH, M. A. (1987) 'CATI Facilities at Survey Organizations', Survey Research, vol. 18, pp. 18 - 22.

SPERRY, S., BITTNER, D. & BRANDEN, L. (1991) 'Computer Assisted Personal Interviewing on the Current Beneficiary Survey', paper presented at the AAPOR 1991 Conference, Phoenix, Arizona.

SPROULL, L. & KIESLER, S. (1991) 'Computers, Networks and Work', Scientific American, pp. 84 - 91.

STATISTICS SWEDEN (1989) Computer Assisted Data Collection in the Labour Force Surveys: Report of Technical Tests. Stockholm: Statistics Sweden.

THORNBERRY, O., ROWE, B. & BIGGAR, R. (1991) 'Use of CAPI with the U.S. National Health Interview Survey', Bulletin de Methodologie Sociologique, vol. 30, pp. 27 - 43.

Van BASTELAER, A. M. L., KERSSEMAKERS, F. A. M. & SIKKEL, D. (1987) 'A Test of the Netherlands Continuous Labour Force Survey with Hand-Held Computers: Interviewer Behaviour and Data Quality' in CBS- Select 4; Automation in Survey Processing. Den Haag: Staatsuitgeverij.

Van BASTELAER, A. M. L., KERSSEMAKERS, F. A. M. & SIKKEL, D. (1988) 'A Test of the Netherlands Continuous Labour Force Survey with Hand-Held Computers: Contributions to Questionnaire Design', Journal of Official Statistics, vol. 4, pp. 141 - 154.

Van HATTUM, M. & De LEEUW, E. D. (1996) A Disk-by-Mail Survey of Teachers and Pupils in Dutch Primary Schools; Logistics and Data Quality. Department of Education, University of Amsterdam.

WATERTON, J. J. (1984) 'Reporting Alcohol Consumption: The Problem of Response Validity', Proceedings of the Section on Survey Research Methods of the American Statistical Association. Washington D.C: ASA.

WATERTON, J. J. & DUFFY, J. C. (1984) 'A Comparison of Computer Interviewing Techniques and Traditional Methods in the Collection of Self-Report Alcohol Consumption Data in a Field Survey', International Statistical Review, no. 2, pp. 173 - 182.

WEEKS, M. F. (1992) 'Computer-Assisted Survey Information Collection: A Review of CASIC Methods and their Implication for Survey Operations', Journal of Official Statistics, vol. 4, pp. 445 - 466.

WEISBAND, S. & KIESLER, S. (1996) Self-Disclosure on Computer Forms: Meta-Analysis and Implications. Tucson: University of Arizona (available on internet: u/~weisband/chi/chi96.html).

WILSON, B. (1989) 'Disk-by-Mail Surveys: Three Year's Experience', Sawtooth Software Conference Proceedings. Evanston: Sawtooth Software.

WITT, K. J. & BERNSTEIN, S. (1992) 'Best Practices in Disk-By-Mail Surveys', Sawtooth Software Conference Proceedings. Evanston: Sawtooth Software.

WOIJCIK, M. S., BARD, S. & HUNT, E. (1992) Training Field Interviewers to use Computers: A Successful CAPI Training Program. Chicago: NORC. Information Technology in Survey Research Discussion Paper 8 (Also presented at the 1991 AAPOR-conference, Phoenix, Arizona)

WOIJCIK, M. S. & BAKER, R. P. (1992) 'Interviewer and Respondent Acceptance of CAPI', Proceedings of the Annual Research Conference, Washington DC: US. Bureau of the Census, pp. 619 - 621.

ZANDAN, P. & FROST, L. (1989) 'Customer Satisfaction Research Using Disk-By-Mail', Sawtooth Software Conference Proceedings. Evanston: Sawtooth Software.
