Sociological Research Online

Methods used in the Study

Stephen Gorard - Extract from PhD Thesis

Introduction

This chapter describes, and justifies where necessary, the methods used in the present study, building upon the criticism of the methods used in previous work, as outlined in Chapter 2. In summary, this study uses a mixture of methods, a large stratified sample, multivariate analysis, and a range of schools. It shows the value of combining documentary evidence with a large-scale questionnaire and follow-up interviews, as suggested by Hammond and Dennison (1995). The aim of this study is 'to use the quantitative data to generate and test hypotheses on the lines of the classic hypothetico-deductive model, and to use the qualitative data to explain the findings and processes at work ... that lay behind the statistical relationships' (Reynolds 1991). It involves children by asking for their views directly, rather than via their parents, and contacts families both during and after making the choice. This enables a comparison to be made between the attitudes of families before, and after, starting the new school. By looking at the characteristics of the schools themselves, it is also possible to illuminate any mismatch between professed attitudes and actual behaviour (Eiser and van der Pligt 1988).

The chapter is divided into five parts. The pilot study is described briefly, with its implications for the main study. There follows a description of the sample, the instruments used in the main survey, and the proposed analysis of the quantitative results. The final section describes the collection and analysis of the more detailed interview and narrative data.

The Pilot Study

The process of negotiating access to schools in South Wales commenced in 1994, and embryonic versions of the questionnaires were piloted at that stage. Three schools took part in the pilot survey, while three others provided findings from their own market research, and the two sources provided an interesting comparison. The pilot survey involved families with children in Year 7. Questionnaire forms for pupils were distributed during normal school lessons, and the parents' forms were sent home with the pupils, with a covering letter, and returned to the school. A total of 178 forms were returned, an overall return rate of 90%.

Lessons from the Pilot Study

In some pilot schools, the pupil questionnaires were administered by a member of staff with the researcher not present. This was convenient for administration, but had three major disadvantages. The pupils were more likely to see this as the school's own research, and thus feel inhibited in their criticisms. The teachers may have provided different explanations of questions in different schools. They may also not have understood the purpose of the work, and so have misled the respondents; this last problem was also encountered several times by the researcher when working in tandem with a form teacher in classrooms. The 'ratings' questions asked pupils to rate the importance for them of a number of school characteristics when choosing a school. It was made clear by the researcher that the ratings should not be an appraisal of the current school, but a recollection of the characteristics of the 'ideal' school sought in the previous year. In this light, it might be perfectly reasonable for a respondent in a single-sex school, for example, to answer that co-education was very important to them, although presumably outweighed by other reasons, or the opinions of others. However, more than one teacher was heard to encourage pupils who asked for clarification to respond in terms of the current school. One such comment was '... well, it's about the importance of a sixth form, but we don't have a sixth form so you should put "not important"'. To remedy such problems in the main study, all forms were completed by pupils with the researcher present, and standard explanations were written, to be read out for each question.

Most questions given to parents and pupils were intended to ask the same thing, but several were paraphrased on the pupil form in order to reduce the level of reading competence required to complete it. It was difficult to paraphrase concepts such as 'equal opportunities', 'progressive education' and 'traditional values' without being cumbersome, or losing precision. In retrospect, it was not clear whether both groups were actually answering the same underlying questions, which prevented clear comparison of the differences between children and their parents. In the main study, all questions are phrased identically for both groups. However, some questions were simplified for all, and made easier to answer, code, and analyse.

The Sample

The initial unit of sampling and administration for the main study was the school. Schools were chosen for the sampling frame because it was easier to create a list of all fee-paying schools than to create a list of all families using fee-paying schools. Once schools had been identified and agreed to participate, the entire year group of relevant age pupils and their parents were surveyed.

The size and distribution of schools in the fee-paying sector of Wales is shown in Chapter 3, and it can be seen that the absence of schools in Mid-Wales divides the sector neatly into two. The region of South Wales was selected for study since it was the largest, and the most convenient. It was operationally defined as the area south of a line through Llandovery, Brecon and Monmouth. This line was chosen because there are no private schools north of these towns until Barmouth, which is in the northern part of Wales. The few schools in the far western portion of South Wales were excluded from the study, as being too difficult to access. The study region therefore extends from Chepstow, on the border with England, to Llanelli in the West.

Schools

It was understood from the outset that schools in the private sector might not welcome research conducted in the sensitive areas of recruitment, market image, and reputation. It was anticipated that some may not have the funds necessary for development, and would be wary of raising false hopes by what might appear to be consumer research. In addition, there remained the possibility that some would not relish further comparison with other schools in the area. There was a fear that the work could be haunted by the 'ghost of Royston Lambert' (Walford 1987c), whose earlier unflattering work on English boarding schools made it hard for later researchers to gain access. Even the limited local study by Griffiths (1991) was constrained by the refusal of 40% of the private schools approached to take part in the investigation. In an attempt to allay these concerns, the schools in this study were assured that they would not be identified in any reports arising from the survey, although this clearly does not rule out other kinds of analysis based upon publicly available information, such as that presented in Chapter 8. Schools which expressed an interest were shown a provisional copy of the questions and asked for feedback on the acceptability of their phrasing. As a result, the rubric given to respondents was changed to make it clear that cooperation by the school did not indicate support for any of the ideas contained in the questions. The schools were offered an analysis of the results of their own respondents, and a summary of the results from the whole study, and it was suggested that these might be of benefit to them for their marketing and development.

In the event, 20 of the 29 fee-paying schools in the area were approached and 16 agreed to take part in the survey. It is worth recording here again how helpful, friendly and tolerant the management, staff, parents, and pupils of all schools were. The atmosphere that this attitude created made the field work a real pleasure. In many cases, after communication and a brief discussion with the Head, the researcher was allowed to take entire year groups of pupils unsupervised for a pastoral or registration period. The age and experience of the researcher as a teacher for 13 years, his accent, and overall appearance were undoubtedly helpful in gaining this measure of trust. Three of the schools that refused to take part in the survey did so because they did 'not want to stir up a hornets' nest'. Their replies show a lack of confidence in their provision and in the goodwill of their parents and pupils. Unfortunately, all three of these schools are of one type: rural, single-sex, boarding schools. As this type of school is so scarce (Chapter 8), it was not possible to replace them fully in the sample, with the result that the findings which emerged may have less validity for these traditional fee-paying schools.

The 29 schools in the study region represent 56% of all private schools in Wales, and cater for 66% of all private pupils. The 16 schools taking part in the study were a reasonable representation of all schools in the region, accounting for 31% of the private schools in Wales and catering for 33% of private school pupils. Of the schools in the focus area, 6 take pupils of only primary age, and 3 of these were surveyed (50%), while 23 take some pupils of secondary age, and 13 of these were surveyed (57%). The sample included the largest and the smallest schools in the region. Further details of the stratification appear in Chapter 8. Despite some reservations, and the fact that the sample is not random, it has many of the characteristics of a good random sample.

Some focus private schools provided information leading to the identification of local state-funded primary schools which had previously 'fed' them pupils in cross-sector movement, and some state-funded secondary schools were additionally located by being mentioned as under consideration by parents of Year 6 pupils in fee-paying schools. One of each such school was included in the survey, in order to provide a small sample for comparison with the results from the private sector, and to ensure that families considering cross-sector movement in both directions were included in the overall sample. An additional primary and a secondary state-funded school, neither of which was mentioned as having any cross-sector communication with fee-paying schools, were also surveyed to provide a basis for comparison. The total school sample therefore consists of a stratified sample of 16 of the 29 local fee-paying schools and, in addition, four of the local state-funded schools with which they compete. The sample of state-funded schools contains two primary and two secondary schools, one urban and three suburban, in two cities. However, although relevant, it cannot be seen as proportionately representative of its population in the same way as the fee-paying sample. Entire year groups had to be surveyed in order to find those families involved in a cross-sector switch. The data obtained, representing the whole year group in each of the four state schools, are used primarily for comparison with the results from the fee-paying schools.

Families

Although most research on school choice concentrates on parents and their views, there is clear evidence, given in Chapter 4, that pupils are very influential in school choice. David et al. (1994) questioned pupils, and found them to be a valuable source of often quite complex information. As pupils can be contacted at school, the response rate is likely to be high, and so the responses can be seen as broadly representative. Woods (1992) agreed that there are good reasons for more research investigating pupils and their role in school choice, while King (1987) pointed out that '10 year olds can complete a simple questionnaire', and the importance of allowing them to do so was shown by Pifer and Miller (1995) in their analysis of data from the Longitudinal Study of American Youth, revealing the inaccuracy of the reports of parents and children about each other. This finding confirmed the view of Payne (1951) that respondents are more accurate about themselves than about others. So this study involves both pupils and their parents. Pupils were asked directly for their input: to give them a sense of ownership of the work; to encourage them to deliver and return their parents' forms; because they are themselves under-researched; and because they could answer more accurately for themselves than their parents could.

The choice of a secondary school is likely to be the most fruitful area for research into parental choice, as most parents have more contact with schools at that time in their children's education than at any other (Hunter 1991). Some previous research on parental choice of secondary school has focused on parents of pupils in Year 7 - the 11/12 year olds who have already taken a place at school. One advantage of this approach is that a one year study can readily identify those who have chosen a particular type of school, and find out why. The population is easily defined and an appropriate sample can be devised. The research cannot affect or disrupt the process of choice, and, as Payne (1951) said, 'answers to hypothetical questions may not be so valid in predictions of future behaviour as answers in terms of past experiences may be'. The disadvantage of this age group is that there may be an element of justification of their choice in their recall, and their responses may be influenced by their post hoc knowledge of the school, and their drive to reduce dissonance (Eiser and van der Pligt 1988). The researcher is faced with a fait accompli, and has lost the immediacy of the choice process (Dennison 1995).

The advantage of dealing with parents of pupils in year 6 - those 10/11 year olds still at primary school - is that the researcher can be closer to, and more involved in, the actual process of choosing. In this way, the reasons of parents who do not choose a particular school can be more easily identified and such de-selection of schools is at present an under-researched area. In such a group, the element of post hoc justification is missing and, in addition, the researcher can investigate the process of choice as it occurs. This approach has a major practical drawback. A researcher looking at pupils going to a particular type of school cannot prejudge those who are likely to apply, or be selected, since this will bias the sample. Therefore, the sample must be wider and larger than with Year 7 to ensure that it includes a sufficient number eventually going to that type of school.

The decision concerning which year groups to study depends upon whether it is seen as preferable to consult families before or after their choice, but it is also complicated by the lack of consistency in the age ranges of the focus schools. In the event, this work uses respondents from both year groups, and attempts to preserve the advantages of both, while investigating any differences between them, which are anyway likely to be small (West and Varlaam 1991). The survey only involves families with children faced with a choice of school for next year, and those who have just moved schools. Some schools take the majority of their children in Year 1, or in Year 7. Some schools lose the majority of their pupils at the end of Year 6, at the end of Year 8, or at the end of Year 11. As far as possible, in schools with pupils of primary age, Years 1 and 6 were surveyed, although only the parents were involved from Year 1. In schools with pupils of secondary age, Year 7 was surveyed. In preparatory schools taking pupils to age 13, Year 8 was surveyed. In secondary schools with no sixth form, Year 11 was surveyed. In-depth interviews were also conducted with selected volunteer families whose child was in Year 6 in a study school in which there was no Year 7, or in Year 7 in a school in which there was no Year 6, or in Year 8 in a school in which there was no Year 9.

The parents and pupils completed the forms between October 1994 and November 1995. Responses are anonymous, which means that, although it is possible to compare the overall results of parents and pupils, it is not possible to link the responses of individual families. In retrospect, this loss of potential data may have been caused by over-sensitivity on the part of the researcher, and in any further research of this type a code numbering system could be used for the purpose of linking the pairs of forms, although this might involve other ethical compromises. The rubric to the questionnaire and the letter of introduction to parents make it clear that respondents should feel free to leave out questions which they feel uncomfortable with.

In total, 1,267 usable forms were returned from 1,606 distributed in 20 schools, including the pilot study. Most of these were fully completed. Full details of the respondents, and their characteristics, are given in Chapter 9.

The Questionnaires

The questionnaires used in this study are totally original, although based upon the findings of previous research as described in Chapter 4, and designed upon established principles (Payne 1951). Technical and unusual words and homographs were avoided where possible. Questions of similar design were put together, and distinctive typefaces were used for questions, answers and directions (Oppenheim 1992). The personal questions about respondents were placed either at the beginning, or the end, in two different versions of the instrument. Since face to face, telephone, and mail surveys produce similar responses to the same questions (Payne 1951), the choice of a hand delivery method was made for practical reasons, such as cost. The forms were only printed in English, which was the medium of instruction in all of the schools. It must therefore be considered possible that the response rate from parents whose first language was not English was lower than average. All pupils completed at least some of the questions. Two children in different schools completed the questions verbally and had their answers recorded by a support teacher, because of problems with their reading. Two Japanese children in different schools spoke so little English that, although they were given every assistance, their responses were mostly incomplete.

Several different, but broadly similar, versions of the questionnaire forms were printed and duplicated. Examples of the forms, for both parents and children, are in Appendix B. The questionnaires used for the pilot study asked fewer questions than in the main study, and some questions were re-worded after the pilot. Therefore not all of the results from each study are directly comparable, but the results for those questions that were identical are used in some of the analysis. Of particular interest is the similarity of responses to such questions in both parts of the study, which serves as an indication of their reliability.

The questionnaires were given to pupils during normal school time, and were completed immediately after a short explanation by the researcher. Whenever a pupil asked for clarification of a question, a pre-prepared explanation was read out at a volume that all could hear. The pupils were given a re-sealable envelope containing a questionnaire form for their parents, a letter from the researcher and, in some cases, a letter from the individual school supporting the research. The completed forms from the parents were collected from the schools in envelopes, although in some cases parents mailed them to the University. In an unusual case of twins at the same school, the mother and father completed two forms separately for their own amusement, and sent both in with a covering letter. Both were coded, and the decision to do this was partly influenced by a problem which arose when parents had children in the study years at more than one school. Such a problem is not mentioned in the literature and, although obvious in retrospect, had not been catered for in the design; it was only noticed when a parent explained why they were returning an empty form. Since not all cases could be identified, and the numbers were likely to be few anyway, the problem was not regarded as important. All completed forms were processed, but future designs should allow for this overlap.

The forms for children used the same questions about the reasons for choice as the form for the parents. Otherwise, they were shorter and simpler. Pupils were not asked much about their family background, as they have not been shown to be reliable on such matters (Pifer and Miller 1995). Each form contains 606 words, with an average of six words per sentence, and takes between 10 and 30 minutes to complete. Its Flesch Reading Ease is assessed as 63.8, and its Grade Level as 8.6, using Word 5.1 for Macintosh, defined as 'standard', or not difficult, writing (Microsoft 1991). Since the researcher was present throughout the administration of the pupil questionnaire, and an entire pastoral/registration period was devoted to it, it was also possible for standard explanations to be given to pupils who had difficulty in reading the form. Many pupils made relevant, coherent comments on the form, or orally during its completion. In several schools, once the forms had been collected, a general discussion of some of the issues raised, such as bullying, ensued naturally. There is, as a result, no reason to doubt the competence of the majority of children to complete the questionnaire, or at least, no more so than for the adults.

The questions are highly structured in order to make completion and comparative analysis easier. Because open questions are more likely to cause problems of categorisation and so lead to false conclusions, or to over-represent the more convinced and more articulate (Payne 1951), the great majority of responses involve ticking an appropriate box. In order that the questions were not too restrictive, some are also of open design, eliciting responses not foreseen by the researcher. There are spaces for comments and many respondents felt free to give explanations and comments throughout, even where there was no obvious space for them on the form. Some parents attached letters to the form discussing points of interest and explaining their answers to some items, and a few included other documentary material justifying, or explaining, their decision. Parents were asked to write their occupations, and their Post Code districts, but otherwise all respondents only wrote anything if they wished to make a comment, or if there was a category not foreseen in the design.

There are some indications that the reasons for choice coming towards the end of the pilot questionnaire were rated more highly. This could have been due to practice or fatigue effects in the respondents, although it could, of course, have been a genuine result. To test whether position affects the responses, and to see whether siting the characteristics questions at the beginning, or at the end as suggested by Sudman and Bradburn (1982), affected the response rate, two versions of each form were used in the main study. The second version is the same as the first except that the order of the questions is reversed. The responses to both versions were found to be broadly similar in distribution.

The questions concerning respondents' characteristics and the process of choice were rephrased slightly for the different year groups. The phrasing is made prospective for those making a choice, and is in the past tense for those who have made a choice. For example, the question 'Which of the following are you intending to try for?' appears on some forms as 'Which of the following did you try for?'. For Year 11, the question concerning future school plans includes a box for the option of leaving education.

Instead of using standard methods of checking for 'ignorant responders' (Payne 1951), the decision was made to treat all respondents as experts in this field. Unlike many surveys, the questions do not simply measure attitudes, since a decision is actually made by the respondents and no-one can know better than they how and why they make it. They may make a decision based on imperfect information or even on faulty reasoning, but they do make the decision, and the survey attempts to find out how and why. Where possible, the competence displayed by respondents is assessed by comparing their responses with the chosen school characteristics, but even this is difficult. For example, if they rate examination results as very important and then choose a school with poor results, it might mean that they are not very good at choosing a school with the desired characteristics. However, it could also mean that other equally important reasons over-ride exam results or that respondents are not being truthful in completing the survey. It is not possible to tell. This problem provides further justification for the interviews conducted as part of the research programme.

The questions, and associated variables, may be notionally divided into three types: those concerned with the characteristics of the respondent; those concerned with the process of choice, and those to do with the reasons for choice. Parents answered eight questions about their characteristics, including their own education, the education of their other children, their religion, occupation, and postal code district. Two of these questions may be especially criticised in retrospect. One asks 'Which of the following attended a private school at any age?', and then displays a box for each parent to tick. It is therefore not possible to distinguish clearly between those who had not attended a private school, and those who did not answer the question. Other similar questions use a Yes and a No box for each parent, and it is not clear at this stage why this was not done throughout. The answer boxes for this question should also be closer together on the form, rather than being left and right justified. The question concerning the occupation of the parents may also be criticised for not producing sufficiently detailed answers. Responses such as 'Doctor' or 'Company secretary', for example, are not easy to code and it would have been easier, although not necessarily more accurate, to allow the respondents to code this item themselves from a prepared list, such as the eight point scale employed by Halsey et al. (1980). In addition, the question asking for the residential post code is not well thought out, since there is an obvious relationship to the area of the school in a study with such a high proportion of day schools. This information is therefore of little use in the analysis but could have been more useful if the number of boarders had been higher.

Parents answered nine questions about how they choose a school, including who is responsible for choosing, which type(s) of school they are considering, how many they are considering, and which information sources they use. Most of these questions worked well, although the positioning of the response boxes for the question 'Who has the biggest say in the choice of school?' may have led to some ambiguity. The boxes should be in one straight line. The question 'Please list up to three schools that you strongly considered' caused problems with coding, and should have been more specific. The responses do not make it easy to compute the number of schools considered by parents. There is no reason to limit their responses to three, and some Year 7 parents listed three in addition to the school that their child is attending, effectively mentioning four.

All respondents were asked to rate the importance of 73 possible reasons for choosing a school. The reasons are in no particular order, except that those apparently related in the sense that the same term appears in both, such as 'Wide range of sports' and 'Good reputation for sports', are kept separate. This is done to encourage the concentration of the respondent on each issue. It is common practice for questionnaires to use several questions to ask what is essentially the same question, and then to average the results per case to form an index, but this survey asked each question only once. There are two main reasons for this. Firstly, in order to avoid the problem of omitted variable bias (Maddala 1992), or variable selection bias (Kim and Mueller 1978a), a question is included for every reason found by the previous research cited in Chapter 5. This means that 73 questions need to be asked, and it is likely that the fatigue induced by asking each question more than once would outweigh the spurious advantage of creating an index. It is also likely that the completion and response rate of a longer form would be lower. Secondly, this technique of indexing is normally used because having only one measure is often unreliable, but the presumption that several sources are more reliable is a statistical argument based upon sampling theory (Anderson and Zelditch 1968). The method assumes that each measure used in an index has the same variability, and that this is due solely to random error. If either of these assumptions is false, and it is unlikely that either would hold in much published work, using only one question can, in fact, be more accurate (Anderson and Zelditch 1968). Any decision here is in the nature of a compromise, but although it is recognised that there will be a substantial error component in the responses, there seems little point in repeating the substance of the questions.

Each response is in the form of a tick in one of three boxes, rated '0 : not important' to '2 : very important'. Above the boxes, a continuous arrow is drawn from 0 to 2. Three boxes are used rather than the five (or even seven) more usual in attitude research, since this makes completion of the form easier for the respondents, particularly the children, who struggle with seven point scales. Any loss in sensitivity is likely to be spurious anyway. The three point scale, recommended for verbal scales by Sudman and Bradburn (1982) and by Hammond and Dennison (1995), also increases the number of cases in each category and produces a more normal distribution overall. In addition, use of the more sensitive and powerful parametric statistics proposed requires the data to be in equal interval, as opposed to ordinal, form. This means that the difference in importance between 0 and 1 must be the same as the difference between 1 and 2. Such a position is easier to maintain for the two differences on a three point scale, than for the six differences on a seven point scale, for example. During analysis, some analysts collapse the 5 or 7 points on their data collection scale to 3 anyway, so that for example, strongly agree and slightly agree are both coded as agree. This may lead to some distortion, and it is better, where possible, to collect the data in the form in which it is to be analysed. On the other hand, although the use of a three point scale may itself lead to distortion through truncation in some cases, Factor Analysis can be carried out with scales of only two values (Kim and Mueller 1978b).

A simple data capture form was also designed to make it easy to collect comparable data on all of the schools involved in the study. Although most of the information concerning schools comes from observation, interview, and literature in the public domain, representatives were also asked 30 questions about the structure, composition, curriculum, and history of each school. An example is given in Appendix C.

The Analysis Of Results

Coding

To a large extent, the coding system is implicit in the schedule. Missing or illegible answers are all coded as '9', or as many 9s as there are digits in the answer. The responses, with the school, year group, and generation of respondent, were entered as they were received into a spreadsheet for each school. The entries were verified by the position of the cursor at the end of each page of the form, and by spot-checks. Mistakes were also spotted by range checks on the values. The individual spreadsheets were converted into one system file for SPSS for Macintosh, which was used for further analysis.
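
As an illustration of this coding scheme, the sketch below shows how the '9' convention for missing answers, and the range checks, might be expressed today in Python with pandas, rather than via the spreadsheet and SPSS route actually used; the file and variable names are hypothetical.

    # A sketch, not the original procedure: apply the '9' missing-value
    # convention and a range check after reading in the responses.
    import pandas as pd

    responses = pd.read_csv('school_responses.csv')  # hypothetical file name

    # Single-digit items use 9 for missing; two-digit items use 99, and so on.
    missing_codes = {'religion': 9, 'occupation_class': 9, 'schools_considered': 99}
    for column, code in missing_codes.items():
        responses[column] = responses[column].replace(code, float('nan'))

    # Range check on the values, as used to spot entry mistakes: ratings are 0-2.
    assert responses['rating_sports'].dropna().between(0, 2).all()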

The most open-ended question, concerning occupation, was the most difficult to code. The intention was to use this information to judge the occupational, and therefore the social, class of the respondents. This study is not alone in facing the problem of defining the class of the emerging non-nuclear families, and the increasing proportion of dual-income families. David et al. (1994) felt that it was not clear that current definitions of class accorded with traditional notions. Existing scales appear to have been designed to take account of only one (male) occupation per family. This causes several problems for the present study. Firstly, 'women's occupations are usually defined in ways that make them appear more middle class' (David et al. 1994 p.143), and secondly it is not clear how to code a family with parents in two differing occupational classes. The decision was made to use the eight point scale of Halsey et al. (1980), using the classification of occupations defined by the Registrar General (OPCS 1970), and to categorise the family on the basis of the full-time occupation of whichever partner appeared in the most prestigious category on the scale. A new category was created for those who were unpaid, including unemployed, students, retired, and house workers. Even so, it is not easy to classify some families. Particular problems were caused by the distinction between higher and lower grade professionals, between directors of large and small businesses, and between marketing managers and sales staff. Respondents did not always provide sufficient information to make these distinctions, and may anyway have been tempted to use more grandiose titles than was absolutely necessary.

The coding system for some nominal variables was altered after the data collection. The data on occupational class, although recorded on a nine-point scale, were combined by recoding onto a smaller scale of four points for analysis, since some of the cells created by the larger scale were too sparsely populated. The new collapsed scale is still that of the Oxford Mobility project, with divisions of groups 1 and 2 into 'Service', 3, 4 and 5 into 'Intermediate', and 6, 7 and 8 into 'Working' class. The scales for family religion, and area of residence, are recoded for similar reasons. Children were originally asked whether their parent(s) lived in Britain, but the responses showed so little variation that this variable was omitted from further analysis.
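
The class coding described above can be summarised in a short sketch; it assumes codes 1 to 8 on the Halsey et al. scale (1 being the most prestigious) plus 9 for the new 'unpaid' category, and the function names are hypothetical.

    # Sketch of the class coding rules: the family takes the most prestigious
    # (lowest-numbered) category of either partner, and the nine-point scale
    # is then collapsed to Service / Intermediate / Working / Unpaid.
    def collapse_class(code):
        if code in (1, 2):
            return 'Service'
        if code in (3, 4, 5):
            return 'Intermediate'
        if code in (6, 7, 8):
            return 'Working'
        return 'Unpaid'                      # code 9: unemployed, retired, etc.

    def family_class(mother_code, father_code):
        return collapse_class(min(mother_code, father_code))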

The 73 ratings of the possible reasons for choosing a school are coded as real numbers, with an arbitrary zero point and undefined unit of measurement, which are the defining characteristics of an interval, rather than ratio, scale of measurement (Siegel 1956). Previous research may have treated these results as being on an ordinal scale, but to do so is to ignore a vital characteristic of attitudes which is that they 'probably do not exist in a simple pro-con dichotomy but rather in gradations...' (Reynolds 1977 p.8), and to reject the use of the more powerful parametric techniques of analysis. The responses are anyway used to create a matrix of correlation coefficients, and even if they are thought of as being only on an ordinal scale, assigning numbers to them and using parametric tests would be unlikely to distort the resulting correlation values (Labovitz 1970). Much research literature uses parametric statistics anyway, without any apparent qualms about the metric used. Comrey (1973) is particularly reassuring on this point - 'if the distributions for the data variables are reasonably normal in form, the investigator need have no concern about applying a 'parametric' factor analytic model without demonstrating that his scores are measured on an equal-unit scale' (Comrey 1973 p.198).

Univariate Analysis

Missing values are excluded from analysis. The response rate was calculated, the results were processed, cleaned, and then processed again. The frequencies and modal values of those variables measured on a nominal scale (e.g. 'occupation of mother') were calculated (Frude 1993). The distribution of those variables measured on an interval scale (e.g. 'the number of schools considered') was assessed, and it was shown that they followed an approximately normal distribution overall, and were not overly skewed. Their means and standard deviations were calculated, and the 73 ratings of the reasons for school choice were sorted into descending order on the basis of their means.
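
Continuing the earlier sketch, these univariate summaries reduce to a few lines; the column names remain hypothetical.

    # Nominal variables: frequencies and modal values.
    print(responses['occupation_class'].value_counts())
    print(responses['occupation_class'].mode())

    # Interval variables: means and standard deviations, with the 73 ratings
    # sorted into descending order of mean importance.
    rating_columns = [c for c in responses.columns if c.startswith('rating_')]
    summary = responses[rating_columns].agg(['mean', 'std']).T
    print(summary.sort_values('mean', ascending=False))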

Bivariate Analysis

With 97 variables in the study, there are 4,656 possible pairwise comparisons between them, making a 5%, or even a 1%, level of significance of little value if all comparisons are made. This is a major argument for reducing the size of the data set, by combining variables that show large common variance into a smaller number of underlying components using Factor Analysis. To this end, a correlation matrix (Pearson's r) was produced for all of the interval variables, including an entry for each of the 73 choice variables in the questionnaires. This provides a measure of common variance between the ratings, so that any with little common variance can be excluded from multivariate analysis. Correlation is used as the first step in analysis to standardise the scores, and eliminate the problem of different means and standard deviations for each (Gorsuch 1972). As the data are non-continuous, many cases can score the same on one specific variable, which reduces the variance and the corresponding correlation coefficients, so any results are likely to underestimate the common variance. This is another reason for treating the data as being in interval form: although the use of non-parametric rank correlation coefficients would reduce the effect of skewness and kurtosis, as there are only three values in the metric, there would be too many ties for rank ordering to be effective (Gorsuch 1972).

The nominal variables were converted into percentages for standardisation (Reynolds 1977), and cross-tabulated with each other (Gilbert 1993). When the observed frequencies were markedly different from those expected, given the size of the sample, or where previous results suggested a difference, a chi-squared test of significance was carried out using the original frequencies, and the null hypothesis that the differences were due to chance was rejected, where appropriate, at the 5% level. Even so, the number of tests carried out was large and, as looking for patterns in tables has been compared to looking for patterns of stars in the sky (Gilbert 1993), some triangulation of these results, in addition to prior theoretical knowledge, was required for verification. Chi-square was used as a measure, rather than the easier to comprehend Goodman and Kruskal's tau, since it is more usual, and it is not necessary in this study to predict row cell values from column values, or vice versa (Reynolds 1977). Where the cell sizes did not appear large enough for suitable analysis, categories, such as those on the scale for occupational class, were grouped together (Lee et al. 1989).
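
A single such test might look as follows in the running sketch, using scipy's chi-squared test on a cross-tabulation of the original frequencies; the variable names are again hypothetical.

    from scipy.stats import chi2_contingency

    # Cross-tabulate two nominal variables and test the observed frequencies
    # against those expected, rejecting the null hypothesis at the 5% level.
    table = pd.crosstab(responses['sector'], responses['occupation_class'])
    chi2, p, dof, expected = chi2_contingency(table)
    if p < 0.05:
        print('Difference unlikely to be due to chance (p = %.3f)' % p)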

The mean of each of the interval variables was calculated for all sub-groups of respondents, as defined by their characteristics. For example, differences (and similarities) were sought between the responses from different schools, gender of pupils, year groups, fee categories, religions, parents and pupils, and between those who are making, and those who have already made, a choice. When the means showed a marked difference, or when a difference was expected on the basis of previous research, a One-way Analysis of Variance, or two-tailed t-test, was carried out with the characteristic as the independent variable, using a 5% significance level. If the independent variable was dyadic, such that there were only two sub-groups, the direction of the difference was apparent from the two means. When the independent variable could take several values (e.g. occupational class), a TUKEY ranges test was applied to decide which groups were driving the difference found by the Analysis of Variance (Levine 1991). This method computes a single value against which all possible differences between pairs of means can be compared.
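
A sketch of this two-stage procedure, using scipy and statsmodels in place of the SPSS routines actually used, is given below; the rating and grouping variables are hypothetical.

    from scipy.stats import f_oneway
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    # One-way Analysis of Variance of one rating across occupational classes.
    complete = responses.dropna(subset=['rating_exam_results', 'occupation_class'])
    groups = [g['rating_exam_results'].values
              for _, g in complete.groupby('occupation_class')]
    f_statistic, p_value = f_oneway(*groups)

    # If significant at the 5% level, a Tukey ranges test locates the groups
    # driving the difference.
    if p_value < 0.05:
        print(pairwise_tukeyhsd(complete['rating_exam_results'],
                                complete['occupation_class']))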

Multivariate Analysis

The bulk of the data are the ratings of 73 choice variables on an interval scale, and the major task of analysis was to reduce the size of this set to a smaller but more useful set of measures, referred to as 'factors'. These factors are intended to have an explanatory, and possibly a predictive, value. As well as making clearer why families choose particular types of school, it might be possible to use the factor scores to decide in advance which type of school would be selected. Such an analysis also requires the reduction of schools to a smaller number of types, or clusters.

Assumptions

The assumption that the data used for the analysis were measured on an interval scale was discussed above, and, in her textbook, Norusis (1985), for example, presented an illustration of using Factor Analysis on consumer ratings of products, as though such a procedure were an everyday occurrence with rating data. Similarly, Kim and Mueller (1978b) stated that Factor Analysis can be used even when the metric base of the variables is not clearly defined, as with attitude measurements. Apart from this assumption, the ideal basis for factor analysis is to have variables that are approximately normally distributed, as these are, even though the factor analytic model does not require it (Kim and Mueller 1978b), with linear regression between all pairs of variables, although this ideal is also not always necessary (Comrey 1973). Perhaps the most commonly flouted assumption concerns the sample size. Comrey (1973) recommended at least 300 cases, and Child (1970) suggested the need for at least five times as many cases as variables. Another problem is that even a well loaded factor can be biased if the variables fail to represent some important aspect of it. In using 1,267 cases to analyse 73 variables which themselves cover all of the reasons for school choice, this study avoids both problems.

Principal Components Analysis

Factor Analysis, in general, provides the economy of description necessary in such a complex process as school choice, while retaining the majority of the variance in the responses (Child 1970). It is used to give 'a better understanding of the complex and poorly defined inter-relationships among large numbers of imprecisely measured variables' (Comrey 1973 p.1), to explore the data more fully, and to set up new measures. These new measures minimise the number of variables for future analysis, while maximising the amount of information retained (Gorsuch 1972). Some information is lost in this procedure, but most of what is lost are the idiosyncratic, erratic, or irrelevant components of the variance in responses (Marradi 1981), and since the use of multiple tests tends to lead to spurious results, Factor Analysis can combat this by reducing the number of potential tests (Stevens 1992).

There are many different types of Factor Analysis, although they all have common steps. This section describes those common steps, and explains the decisions taken by the researcher at each one. The first step was to decide which variables to include in the analysis. A 73 by 73 matrix of Pearson Correlation Coefficients was produced, and examined to gain an intuitive feel for the data. As discussed above, other types of correlation or covariance matrices can be calculated (Siegel 1956), but the standard Pearson r is the most common, and the most appropriate to these data (Comrey 1973, Kim and Mueller 1978a). One purpose of the analysis was to find a smaller number of satisfactory substitute variables, so substantial correlations between variables are required (Comrey 1973). All of the variables correlate significantly with several others, but there are marked differences between the number and size of these associations. A criterion was needed to determine which correlations are large enough to be worth further investigation, and which to exclude from the reduced correlation matrix (Marradi 1981). To exclude the clearly unrelated variables from Principal Components Analysis can lead to a neater solution, but it can also damage the result. On the other hand, to include them would be to court unnecessary complication, and is contra-indicated by the principle of 'garbage-in garbage-out' (Kim and Mueller 1978a). With a large sample, such as in this study, several minor factors will appear as statistically significant, without contributing greatly to the overall covariance structure (Kim and Mueller 1978a). All researchers have to make a judgement at this point. One way of increasing the proportion of 'healthy indicators' in the analysis is to exclude all variables not correlated significantly with any other, at the 5% level. The advantage of this method is that it is sensitive to the number of cases, but since the number of cases is 1,267, correlations as low as |0.06| are flagged as significant, even though the proportion of common variance is negligible. Instead, the method chosen here was to select a figure for the size of common variance that could be useful for explanatory purposes, such as 10%, calculate the size of the correlation that represents it, such as |0.3162|, and exclude all variables that have no correlation of at least that size. The selection and discarding of measures at this stage is too important to be left to a computer, and is of necessity an iterative process (Marradi 1981), so a final decision cannot be made until all data are collected, and it is discovered which of the 73 choice variables are the more useful 'pure factor' measures, and how many of them may be dropped as too 'complex' for their explanatory power (Comrey 1973). Another use of the correlation matrix was to confirm that it was not an identity matrix, which would make factor analysis impossible (Norusis 1985). A third reason was to calculate and check Kaiser's measure of sampling adequacy.
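
The selection rule described above, retaining only variables with at least one correlation of |0.3162| or more (i.e. at least 10% common variance) with another, reduces to a few lines in the running sketch.

    import numpy as np

    # 73 x 73 matrix of Pearson correlation coefficients between the ratings.
    r = responses[rating_columns].corr()

    # Ignore the diagonal of self-correlations, then keep any variable with
    # at least one correlation of |r| >= sqrt(0.10), i.e. |0.3162|.
    off_diagonal = r.mask(np.eye(len(r), dtype=bool))
    retained = r.columns[(off_diagonal.abs() >= np.sqrt(0.10)).any()]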

The second step is to extract the unrotated factors underlying the variables. Two main decisions were made at this point - which method of factor extraction to use, and how many factors to extract. A common method of extraction for Factor Analysis is the centroid one, where the first factor is assumed to be in the middle of the two most closely related variables. This is a good method for retaining and exploring all of the variance (Maxwell 1977), but the Principal Components method is also good for extracting the maximum variance (Gorsuch 1972), and is chosen here for several reasons. It is more appropriate for reducing the number of variables, especially where the first few factors account for a large part of the variance, more useful as a prelude to further analysis, and better used when all variables are measured on the same metric with relatively low measurement error and variances of similar magnitude, as they are in this instance. Maxwell (1977) described Principal Components Analysis as ideal in these circumstances, and further stated that, used for this purpose, 'a principal component analysis is straightforward in the sense that no distributional assumptions need to be made about the observed variates' (Maxwell 1977 p.42). The Principal Components solution is also often used as the starting point from which other solutions are rotated (Gorsuch 1972), and Comrey (1973) pointed out that whichever method is used initially, the subsequent rotation of the factors leads to similar results anyway. Missing values for specific variables were dealt with by imputing a suitable estimated replacement value (Lee et al. 1989), based on the mean value for that variable.
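
In the running sketch, this extraction step, with mean imputation of missing ratings, might read as follows; scikit-learn here stands in for the SPSS procedure actually used.

    from sklearn.impute import SimpleImputer
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Replace missing ratings with the variable mean, standardise so that the
    # analysis works from the correlation matrix, then extract the components.
    X = SimpleImputer(strategy='mean').fit_transform(responses[retained])
    X = StandardScaler().fit_transform(X)
    pca = PCA().fit(X)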

The sums of squares of loadings for successive factors decrease (Comrey 1973), so they become less and less useful in explaining the common variance. Factors can be extracted until the residual correlations are close to zero; in the limit, as many factors are extracted as there are initial variables, in order to explain all of the variance. However, unless the measurement of the variables is wholly reliable, which it is not in this case, at least some of the variance is due to error. The total variance for each variable can be seen as the common variance, as assessed by the correlation, which itself contains an error component, plus its unique variance, and the variable-specific error component. To attempt to explain all of the variance, including that due to error, is unparsimonious, and the number of factors should be kept to a minimum (Cureton and D'Agostino 1983). On the other hand, several factors need to be extracted to create a good enough structure for rotation in step 3 (Comrey 1973), and forcing a solution into too few factors (e.g. fewer than 6) distorts the solution. It is possible to test each factor for significance (Gorsuch 1972), but since the test is sensitive to the number of cases, with 1,267 cases even trivial factors would be statistically significant. Kaiser's criterion and Cattell's Scree Test both define a cut-off point for the number of factors extracted, and both give similar results when the number of cases and variables are large, although they also tend to be less accurate, and to overestimate the number of factors, in these circumstances (Stevens 1992). The first of the two methods was used here, so that factors were extracted until the sum of the squares of the correlations of each variable with the factor (the eigenvalue) was less than 1.0. A factor with an eigenvalue less than 1.0 would explain less than 3% of the total variance in the responses to the 73 choice questions, which does not justify the additional complication of its inclusion in the solution. The scree test would be a more useful method of determining the number of factors in a study where a large number of 'minor factors' are expected, and interest is focused on the major ones (Kim and Mueller 1978b). It is not so appropriate in this exploratory study since the variables with low inter-correlations, which are liable to cause the minor factors, have been omitted from the analysis.
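
Kaiser's criterion, as applied above, amounts to counting the eigenvalues of the correlation matrix that exceed 1.0; continuing the sketch:

    # Retain only factors whose eigenvalue exceeds 1.0 (Kaiser's criterion).
    eigenvalues = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))
    n_factors = int((eigenvalues > 1.0).sum())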

The next step is to rotate the factors to a solution which, although mathematically equivalent to the original, is more useful for explanatory purposes. The main choice is between an oblique rotation, allowing the factors to overlap, and an orthogonal rotation, forcing the factors in the solution to be unrelated (Child 1970). An oblique rotation would be necessary if higher order factors were to be derived, but the gain in generalisation from second order Factor Analysis is at the cost of less accurate results (Gorsuch 1972). As with many decisions in the analysis, and perfectly properly, a mathematical decision is taken for non-mathematical reasons (Comrey 1973). Several useful criteria for deciding on a method of rotation are: the ease of interpreting the results; the number of zero loadings; the speed of convergence; the restriction of each variable to only one factor, and the replication of factors across split halves of the cases (Gorsuch 1972). It was discussed in Chapter 2, with relevance to Network Analysis, that one of the problems in school choice research is that no-one knows which reasons for choice are related to each other, and for this reason alone, VARIMAX orthogonal rotations were used here (Stevens 1986). The intention was to produce a small number of clearly unrelated factors underlying the reasons for choice, with clearly differentiated loadings. This does not mean that an oblique rotation would not produce meaningful results, merely that they may not be so clear cut. A set of uncorrelated factors can also be seen as more parsimonious (Cureton and D'Agostino 1983).
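
For completeness, a compact implementation of a VARIMAX rotation (after Kaiser's algorithm) is sketched below; it is an illustration rather than the SPSS routine used in the study, and rotates the unrotated loadings matrix from the extraction step.

    def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
        # 'loadings' is the variables-by-factors matrix from the extraction.
        p, k = loadings.shape
        rotation = np.eye(k)
        criterion = 0.0
        for _ in range(max_iter):
            L = loadings @ rotation
            # Singular value decomposition of the gradient of the varimax criterion.
            u, s, vt = np.linalg.svd(
                loadings.T @ (L**3 - (gamma / p) * L @ np.diag((L**2).sum(axis=0))))
            rotation = u @ vt
            if s.sum() < criterion * (1 + tol):
                break                        # no further improvement
            criterion = s.sum()
        return loadings @ rotation

    # Scale the retained components to loadings, then rotate them.
    unrotated = pca.components_[:n_factors].T * np.sqrt(pca.explained_variance_[:n_factors])
    rotated = varimax(unrotated)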

The final step is to interpret the factors produced by the rotation. This involves both naming and explaining the factors, each of which can then be treated as a type of hypothesis which should be further tested. It is important to decide, for each factor, whether it is valuable or merely an artefact. One of the reasons for the use of the extended interviews was to see how they and the choice factors shed light on each other, because one way of testing the factors is to see if they agree with the reality of people's experience. Another is to see if they are relevant to previous findings. Either way, the factors must be confirmed by an alternative analysis to have any genuine theoretical meaning, and so to have any usefulness (Gorsuch 1972).

Each variable has a final measure of communality, which is an estimate of the sum of the squares of all common factor variances of a variable. It gives the proportion of variance in the variable that can be accounted for by scores in the factors. So a variable with a communality near 1.0 is almost wholly explained by its common variance with one or more factors, but a variable with a communality near zero is almost wholly explained by its specific variance. The latter should be minimised by the exclusion of non-correlating variables, prior to analysis. Their exclusion does not mean that such variables are unimportant in school choice, although many of them actually are, but that they are irreducible, or elementary, and need to be discussed in isolation.

Each variable also has a final loading on each factor. The loadings are the correlations between variables and the factors, so the square of a loading can be seen as the amount of variance common to both. One very conservative estimate of the significance of these loadings is to use twice the critical value of alpha for a two-tailed t-test as a cut-off point (Stevens 1992). With over 1,000 cases, a loading as low as 0.162 would emerge as significant, but it would not necessarily be useful. Comrey (1973) suggested that loadings of 0.55 are good, those of 0.63 very good, and those of 0.71 excellent. He also agreed with Child (1970) that a reasonable cut-off point would be 0.3, with loadings below that figure being ignored in the explanation of a factor. This study uses the more precise, and more stringent, figure of |0.3162|, which is the square root of 0.1, so that variables used in the final model have at least 10% common variance with the factors to which they contribute. However, the threshold for a loading cannot be defined rigidly in advance, as it really depends on where a sharp drop in values is noticed (Marradi 1981). As discussed above, since the variables are unlikely to be wholly reliable, they cannot correlate perfectly with any factor, but as a rule, if the loading of a variable is equal to the square root of its reliability, it is a 'pure factor' measure, i.e. the variable and the factor are identical. For example, if a variable is reliable at a level of 0.81, and has a loading of 0.9, it is indistinguishable from the factor, and it is variables of this type that will be most useful in shaping the factor 'space' (Comrey 1973). For a study with a relatively small number of cases, the quality of a factor must be assessed in terms of both the number and the size of its loadings, so that, for example, Stevens (1992) suggests that a reliable factor must have four or more loadings of at least 0.6 when the number of cases is below 150. However, with cases in excess of 300, any number of significant loadings can define a reliable factor (Stevens 1992).

Further Analysis

Once a satisfactory set of factors had been derived from the data, the factors were subjected to further analysis. The factor scores per case were calculated and saved. There is only one common method of calculating factor scores after PCA (Norusis 1985). These scores are defined as the sum of the case score on each variable, multiplied by the loading for that variable on the factor, for all relevant variables (Jackson and Borgatta 1981). Unfortunately, this procedure reduces the number of cases, since only cases with valid responses to all of the relevant variables are given a score for the factor. One-way Analysis of Variance, and t-tests, were used to test for relationships between the factor scores and the nominal variables, as above (Levine 1991), and correlation coefficients were calculated between each factor score and other interval variables, such as those ratings not included in the Principal Components Analysis.
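
In the running sketch, the factor scores defined above might be computed as follows; for simplicity, only cases complete on all retained ratings are scored here.

    # A variable is 'relevant' to a factor if its loading is at least |0.3162|.
    relevant = np.abs(rotated) >= np.sqrt(0.10)
    complete_cases = responses[retained].dropna()   # reduces the number of cases

    scores = {}
    for j in range(rotated.shape[1]):
        variables = retained[relevant[:, j]]
        # Sum of (case score on variable) x (loading of variable on factor).
        scores['factor_%d' % (j + 1)] = (
            complete_cases[variables] * rotated[relevant[:, j], j]).sum(axis=1)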

School Types

Where the absolute sample size is small, as it is with only 20 schools in the study, grouping them into a small number of clusters on the basis of theoretical knowledge is an appropriate method before analysis (Stevens 1992). The qualitative data described in Chapter 8 led to the division of the schools into seven different types, based upon their common characteristics (Everitt 1980). As a follow-up, a Principal Components Analysis was carried out on interval variables describing the characteristics of schools, such as size, age range, and gender mix. The choice of variables for this analysis is partly based upon theory, and partly upon what information is available, so that the conclusions of this analysis must be seen as indicative, rather than definitive, until confirmed by external comparison (the results appear in Appendix G). Once the school clusters have been produced, they are tested for validity by comparison with variables not used in the analysis, and it is actually the successful triangulation of the results that justifies their further use. An interesting question is whether the factor scores and the characteristics of the respondents can be used to predict the type of school chosen. In essence - is there a function that can be used to calculate the type of school chosen, of the form:
Preferred School Type = Function of (Factor Scores, Family Characteristics)
which could then be tested with new respondents and schools? Since the factor scores are uncorrelated (orthogonally rotated) and use the same metric, they can all be used in such a multiple regression function (Stevens 1992). Any family characteristics which correlate with the factor scores, or which have markedly different variances, must be excluded, to avoid problems of multi-collinearity and heteroskedasticity. Unfortunately, as school type is a nominal variable, the results of this regression would not be reliable. It could be converted to a dummy variable, or a set of dummy variables, but this is not good practice. School type could instead be converted to an ordinal measure, by placing the schools in rank order of size, level of fees, or examination performance. Alternatively, a regression function could be calculated to predict the number of schools considered by each respondent from their factor scores.
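As an illustration of this last alternative, the sketch below fits such a function by ordinary least squares, predicting the number of schools considered from two orthogonal factor scores; all of the data and coefficients are hypothetical, invented for the example.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100

    # Orthogonal factor scores on the same metric (hypothetical).
    factor_scores = rng.standard_normal((n, 2))

    # Hypothetical outcome: number of schools considered per respondent.
    schools_considered = 2.0 + 0.8 * factor_scores[:, 0] + rng.normal(0.0, 0.5, n)

    # Ordinary least squares with an intercept term.
    X = np.column_stack([np.ones(n), factor_scores])
    coef, *_ = np.linalg.lstsq(X, schools_considered, rcond=None)
    print(coef)  # roughly [2.0, 0.8, 0.0]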

"Thicker" Data

Qualitative methods should not be distanced from other work in the field, as they can lead to a better understanding of the data, while quantitative methods allow the researcher to generalise from the results. The "quantities" used in social research are often "amounts" of qualities anyway. Thus, the two supposed paradigms should not be seen as a dichotomy, but as two strands of a more general investigative method. In this method, all relevant techniques of data collection and analysis cooperate, leading to confirmation (or disconfirmation) through triangulation, richer detail in analysis, and new lines of thought springing from the surprises and contradictions encountered (Miles and Huberman 1994).

Interviews are often difficult to arrange, owing to the scarcity of time of both "actors" - McCracken (1988) gives a time budget of 738 hours to deal with 8 interviews - and to the respondents' concern for privacy, but they were used here to balance the simplicity of the survey, since "the disadvantage of numerical scores is the risk of reducing something that may be rich and complex to a single index that then assumes an importance out of all proportion to its meaning" (Eiser and van der Plight 1988 p.4). The lucid and forthright comments of some respondents on the questionnaire show the complexity of the views of some families, and the serious problems many of them face in choosing a school. They highlight the need for a more detailed account of some cases, as "every scientific study is improved by a clearer understanding of the beliefs and experience of the actors in question" (McCracken 1988 p.9). The interviews employed in this study are of two main types. The first are informal, opportunistic discussions with Heads, teachers, parents, and pupils, recorded in field notes during school visits. The second are pre-arranged, semi-structured, taped interviews with volunteer parents. In total, 47 interviews were held with a variety of "players" in the choice process, not including the prolonged discussions held with some groups of pupils immediately after completing the questionnaire.

Opportunity Interviews

Methods, purposes, and even results were frequently suggested to the researcher by participants during the field work. Everyone who had worked in, or even attended, a school had a view on how, and why, other people chose schools. This, the friendliness of all involved, and their willingness to give up their time, were very seductive. The researcher was a 36-year-old ex-deputy headteacher. He wore a simple suit during all field work, as this was his "uniform" as a teacher, with which he felt comfortable. This appearance made it easy to gain access and trust in most schools and classrooms, but may have made it harder to remain distanced, and so reflective. He settled, in the end, on being treated as an uninvited consultant expert on school development, and concentrated upon making the schools "anthropologically strange" for himself, as far as possible (Measor and Woods 1991).

A major problem with these interviews was the lack of consistent length and pattern, with short interviews, such as at a school gate or while waiting in a foyer, generating data of possibly doubtful validity (Burgess 1985). The problem of pupils being coached by teachers, as noted by Walford (1991d), was encountered in the pilot study, and minimised in the main study by the greater rigour of the researcher. However, it is undoubtedly difficult to interview people in a half-hour slot, explain the purpose of the research, and get them to trust the interviewer. The trust achieved was as good as could have been expected, and some individuals told personal stories of abuse, racism, and unfair treatment in an obviously unrehearsed way. Where possible, these interviews were noted on paper in longhand at the time; otherwise they were noted in the researcher's car at the earliest opportunity, and then typed in a fuller form within 48 hours. The unprompted comments in such interviews and class discussions helped to validate the questions in the survey, and suggested new ones for the pre-arranged interviews. The recorded comments and behaviours of Heads and their representatives were used to enhance the knowledge of each school and its mission (Chapter 8).

Pre-Arranged Interviews

These were arranged with the cooperation of six schools: three catering only for pupils of primary age, one preparatory, one secondary, and one all-age. Two were state-funded schools, two were proprietary, two were traditional fee-paying, one was rural, one was single-sex, and one took boarders. The families interviewed were all faced with, or had just made, the choice of a new school for their 10/11 (or 12/13) year-old, who was in many cases their eldest child, since "parental choices in education are sometimes made only for the first child in the family" (Johnson 1987 p.121). The families were selected from a larger set of names and addresses provided by the schools, partly to protect the identities of the interviewees, and partly to concentrate on those with a relevant or particularly interesting story to tell. The sample includes families considering: a move from fee-paying education to state-funded, and the reverse, including those who had made the change and those who had decided against it; a boarding school in England; a boarding school in Wales; a music scholarship; and an Assisted Place. They include a wide range of schools under consideration, with pupils expected to gain a scholarship, and those expected to have trouble with an entrance examination. In this way, although the interviewees cannot be seen as proportionately representative of a larger population, they do include many varied backgrounds and stories. The sampling strategy can be seen as a mixture of the methods of maximum variation, critical cases, theory-based, and opportunistic sampling, but with multiple cases where possible (Miles and Huberman 1994). As data from the survey came in and were analysed, it was felt important to try to contact more of the families for whom English might be a recent or second language, and an attempt was made to do this by contacting families on the basis of their surnames. Although a reasonable number of such interviews were arranged, some were very brief, several were cancelled, and on two occasions, both with Indian families, no one was in the house at the agreed time of the appointment.

The interviews were held in the homes of the interviewees, except for one, which was held in the researcher's office at the University, and two, which were held in the interviewees' workplaces: a burger bar and a hardware store respectively. The recordings of these last two interviews are markedly inferior. The total elapsed time per case is between 80 and 200 minutes. The interviews took place between January and July 1995, while applications, entrance examinations, ISIS exhibitions, and school Open Days for entry in September 1995 were continuing, and again in October and November 1995, after the new pupils had arrived at school. They were recorded, with the permission of the interviewees, on a portable cassette tape recorder, and transcribed by the researcher. The comments of the interviewees were transcribed verbatim wherever possible, but the comments of the interviewer were shortened in some cases, to reduce the time taken for the decidedly amateur transcription.

The use of a questionnaire is vital for a long interview, to cover all of the terrain in the same order, and to help manufacture distance (McCracken 1988). Use of the structure allows flexibility, but does not lead to the collection of superfluous data, helps to reduce bias, and the similarity of the schedule to the survey makes comparison of the various sources of data easier (Miles and Huberman 1994). The standard questions are listed as a schedule in Appendix E. Some items are similar to questions in the survey, such as who made the final choice, while some are prompted by the early results of the survey, such as a discussion of the meaning and value of the choice factors, and others are completely new items, such as the detailed educational history of siblings. All interviewees were asked these questions, as far as possible in the same format, although the previous conversation made some of them superfluous, or insultingly repetitive, in which case they were dropped. Additional questions and comments were put to individuals, depending upon their narrative. The interviewer tried to remain non-directive and non-committal, but was on occasion open to the seduction mentioned above, as most of the interviewees were both lucid and emotional. In the longer interviews, both parties visibly relaxed, with the interviewer becoming involved in household tasks, such as watering plants, during the conversation, and in one case being asked to drive the daughter of the house to a music lesson a few miles away.

Data on the Schools Themselves

Apart from the school questionnaire form and the interviews with 'actors', the main sources of data concerning schools were Welsh Office publications, league tables of School Performance, the schools' own publications, Local Education Authority documents, and the publications of fee-paying school associations, such as ISIS. The school literature was analysed following the suggestions of Headington and Howson (1995). Some data were also available to the researcher personally, who had worked in one of the study schools in a senior management position, and in another as a relief teacher. This final group of data is not presented in this thesis, but it can be said to be 'present' throughout, providing much of the initial motivation for the work, and presumably influencing the perceptions of the researcher.

Analysis

There are no clear guidelines for the 'qualitative' analysis of data, and 'seen in traditional terms, the reliability and validity of qualitatively derived findings can be seriously in doubt' (Miles and Huberman 1994 p.2). There is a very real danger of finding meaning or patterns in data, like the patterns appearing in random numbers, by relying on plausibility alone. Plausibility is not enough. In addition, it is difficult to assess the validity of many ethnographic conclusions or generalisations, as replication is generally not possible (Hammersley 1990). The value of interview data is sometimes discounted because of poor sampling, and because there is no recognised way of checking their reliability or completeness (Weller and Romney 1988). However, it is often apparent when quality data and conclusions are obtained, since quality data are unambiguous, economical in their assumptions, internally and externally consistent, powerful in explanation, and fertile for new ideas (McCracken 1988). No claim is made that the interview sample is complete and representative of a larger population in the same way as that of the survey, from which it is taken.

The data from the interviews, and the comments from the survey, were coded manually in two ways: as complete narratives, by the type of respondent; and as chunks of text, by the content and tenor of each section. Approximately twelve codes, such as 'gender issues', were used initially, as a manageable number, and these were checked by, and discussed with, two other researchers, so that they are to some extent consensual (Miles and Huberman 1994). None of the categories were determined in advance, and although some were suggested by the quantitative results, others, such as the concept of three steps in choice, were plainly grounded in the observations (Measor and Woods 1991). These results were used to create substantive theories (Glaser and Strauss 1970), which led to further analysis of the interview data, and to hypotheses to be tested by the further application of bivariate or multivariate statistics to the survey data.
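As a minimal illustration of the bookkeeping involved in such coding, the sketch below (with hypothetical codes and chunks, not the study data) tallies manually assigned codes by respondent type.

    from collections import Counter

    # Each coded chunk is a (respondent type, code) pair - all hypothetical.
    coded_chunks = [
        ("parent", "gender issues"),
        ("parent", "three steps in choice"),
        ("pupil", "gender issues"),
    ]

    code_counts = Counter(code for _, code in coded_chunks)
    respondent_counts = Counter(resp for resp, _ in coded_chunks)
    print(code_counts.most_common())
    print(respondent_counts.most_common())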

The data from schools are similarly coded, using codes such as the method of funding the school, but the analysis is less detailed, or perhaps less visible to the researcher himself, which is in line with the approach recommended by some previous investigators (LeCompte and Preissle 1993).

Conclusion

The regional nature of the sample, the size of the survey, and the use of pupils as respondents are probably sufficiently novel in their own right to justify this research on methodological grounds. However, the use of different year groups, the mixture of interview and survey design, the inclusion of fee-paying and state-funded schools, and, above all, the use of multivariate statistics to describe relationships in the data that have until now been only theoretical and metaphorical, make this a completely new venture in the field of school choice research.

Bibliography