Copyright Sociological Research Online, 2000

 

Jon Mulberg (2000) 'Cash for Answers: The Association between School Performance and Local Government Finance'
Sociological Research Online, vol. 5, no. 3, <http://www.socresonline.org.uk/5/3/mulberg.html>

To cite articles published in Sociological Research Online, please reference the above information and include paragraph numbers if necessary

Received: 11/5/2000      Accepted: 17/11/2000      Published: 31/11/2000

Abstract

One of the reasons that the publication and content of secondary school performance tables in England is such a controversial political issue is the introduction of quasi-market models in public services in the 1980s and 1990s. These models assume that the outcome of the educational process in schools can be separated from the inputs - the background of the pupils - and that schools are able to affect poor performance. Any research showing that examination results are associated with parental background attacks the concept of choice that is a major rationale for these models, and confronts the quasi-market approach, since it suggests that the outcomes are exogenous to the educational process. The paper suggests that the present approach to performance indicators is contradictory and confused. The paper offers a comprehensive examination of the association between socio-economic background and school examination results at the local authority level. It uses three measures of socio-economic status derived from local government finance, and shows a strong association between these and the five published indicators of educational performance, in an analysis covering the whole of England for the last three years. The evidence strongly suggests that the tables reflect the background of pupils rather than the effects of educational professionals and local education authorities. The paper also offers critiques of the alternative indicators of improvement and 'value-addition', which are currently being developed. Since these performance tables are an element in the new performance-related pay of teachers, the study implies a critique of both UK educational policy and policy on pay. It also suggests the current trend to expand performance indicators to other public sectors is misdirected.

Keywords:
Examinations; GCSE; Grant; LEA; Performance; Performance-related; Quasi-market; Schools; SSA; Tables

Introduction

1.1
At the time of writing the UK government has just reinstated what it terms 'performance related pay' after a legal challenge and in the face of bitter opposition from unions, teachers and other educational professionals. One element of the assessment for performance related pay is the examination success rate of teachers. This paper will suggest that this element is flawed. The paper will consider the debate on secondary school educational performance measures. It will argue that the use of examination results as a measure of school performance is both empirically flawed and part of a misguided approach to educational management. Using local authority finance as a measure of social need, the research will show that the claim of many commentators that the examination results only reflect the socio-economic background of pupils is indeed borne out, and that the level of examination results is to an extent exogenous to education authorities, schools and classroom teaching. The article will also demonstrate that examination results continue to play a large part in the education appraisal process, in spite of both considerable empirical evidence and conceptual argument as to their flaws. In addition, the paper will argue that many of the suggestions as to how to move away from 'raw' examination performance data to more sophisticated measures are problematic, and will question the validity and usefulness of some of the proposed alternative measures.

1.2
In spite of all these misgivings, UK government policy at present is clearly moving towards an extension of the use of performance measures. In the education field their use has included not only the publication of public examination results, but also the introduction of a wide range of national tests. The latest proposals include extending the performance 'league table' approach to policing and health. Yet, as I shall argue below, not only is the use of performance indicators in public services problematic, but the rationale which links their use with an improvement in performance is flawed.

Quasi-Markets and Performance Indicators

2.1
The publication of secondary school results was proposed in the UK during the mid-1980s, and was included in the 1988 Education Act. The tables were first published in 1992. Levacic and Hardman (1988 p.304) suggest that this was part of a 'quasi-market' approach to education, which attempts to introduce some of the elements of market mechanisms into public services while retaining government regulation (q.v. LeGrand and Bartlett 1993). A quasi-market system could be viewed as 'a sort of half-way house' between full market systems and the previous, state-run systems. Although some of the free-market critiques of state-run schooling are accepted - namely that a lack of consumer choice, competition, incentives and performance-related pay leads to inefficiencies - in the UK system the state remains the sole purchaser of educational services (state schools in the UK do not charge fees), and schools remain fixed in size (Adnett and Davies 1999 p.244). According to Levacic and Hardman, the aim of the quasi-market was to provide financial incentives to schools to respond to parental pressures and wishes, by combining the publication of results with increased freedom of choice for parents and budgets delegated directly to schools (Levacic and Hardman 1988 p.304, also see Bradley and Taylor 1998 p.291, Bartlett 1993). The organisational model then becomes similar to an open-system, evolutionary approach. Levacic and Hardman outline two such evolutionary approaches, broadly corresponding to adaptation and filter models.[1] In the first model, the organisations - schools - adapt and change to fall in line with consumer choice. In the second version schools are selected by the workings of the quasi-market for survival or decline, and those unfitted for the new environment do not survive. The quasi-market approach therefore entails a programme of financing education whereby a school's financial resources are mainly or exclusively linked to recruitment, which is in turn believed to be dependent on educational performance (Levacic and Hardman 1988).

2.2
In practice a modified version of the latter, 'survivalist' model appears to have emerged. The schools which were regarded by the government education inspectorate as 'not fitted' for survival, so to speak, are liable in the quasi-market to a sort of 'quasi-takeover' by government regulators, which would then modify the school management systems so as to increase educational performance.

2.3
The utilisation of performance indicators is often included as a vital element in quasi-markets[2]. One of the main justifications for indicators is that they cut down on transaction costs. All transactions within a market involve costs due to bargaining, information and enforcement. Markets cannot function when transaction costs are too high.[3] Educational performance indicators are intended to allow parents to cut down on transaction costs by assisting them to obtain information about schools readily and to monitor enforcement, which would make market-type choices feasible.

2.4
While performance indicators can be used for purposes other than quasi-markets and, as we shall see later, indicators do not have to be set up as league tables, the initial employment of league table rankings of schools as a performance indicator falls squarely within this quasi-market, competitive structure. The aim was to force both schools and the local education authorities responsible for the schools to compete against each other in an attempt to climb up the league table of examination performance.

2.5
Both the use of the league tables and the employment of a quasi-market can be criticised. We could argue against a quasi-market on political grounds. It could be contended that a market model, even a quasi-market, which will necessarily involve a reallocation of resources to 'successful' organisations, is unsuitable for the provision of public services such as health or education, and that direct public accountability would be better for these sectors.

2.6
However, while such a debate is outside of the scope of this paper[4], quasi-market models also make assumptions that will be criticised in this paper. The models assume what Cuttance (1981) refers to as a 'neo-classical' approach to schooling, analogous with the neo-classical economics paradigm. This approach assumes a 'market' for an homogeneous educational 'product', which 'converts occupational origins into occupational destinations in the process of social stratification' (Cuttance 1981 p.65).

2.7
There are several problems with this neo-classical approach. Firstly, market-based models tend to assume that parents are guided by the labour-market outcomes of educational choices. However there is little evidence that parents' decisions are made in this manner, not least because of the complexity of the data and the fact that outcomes are long-term (Adnett and Davies, 1999; p. 225). Secondly, like many economics models it assumes mobility of factors, particularly that pupils will be mobile between schools, and that parents will respond to changes in performance information by changing their choice of school for their children; that parental (consumer) choice will be guided by relative performance. There is evidence to show that this is not the case and that other factors, such as ease of access to the school, are more important in parental judgements concerning education (Buck, 1993; p.89, Gorard, 1997). In addition, switching schools has high personal costs, and few parents exercise their choice in this matter (Adnett and Davies, 1999; p. 225).

2.8
Furthermore the neo-classical approach assumes an homogeneous educational 'product' - qualifications - and that a simple indicator can be used to provide suitable information to guide parental choice. However there is no reason to believe this assumption holds true. Some parents may well regard examination passes geared towards university entrance as irrelevant to their children, and it would be comparatively more difficult for these parents to exercise choice in the quasi-market. Indeed, Adnett and Davies (1999 p. 229) go so far as to suggest that 'schooling decisions are economically irrelevant for the less academically able'. Other factors, such as social and citizenship skills would then be more important. These skills are ignored in the current performance indicators (ibid.).

2.9
The neo-classical approach also leads to a major dilemma concerning the quasi-market policy it attempts to promote. The driving concern of proponents of the quasi-market approach is the perceived tendency of the teaching profession to blame all inefficiencies on problems of social order - class, inequality and so on. The whole basis of the quasi-market is that decisions and endeavours made in schools are the major element in performance, and that schools and teachers have responsibility for educational failure.[5] However research using the neo-classical model has tended towards the opposite view, suggesting that schools actually made little difference to educational achievement (q.v. Coleman 1966, Rutter et al. 1979). In fact this conclusion was reached partly as a result of the very standardisation of measures that the quasi-market requires (see Coe and Fitz-Gibbon, 1998; p.422).

2.10
Unlike in a 'full' market system, where price is a major indicator for 'consumer' choice, in a quasi-market there are no prices to compare, so performance indicators - in this case school league tables - are supposed to become the central tool for parental decision-making (Adnett and Davies, 1999; p. 227). The indicators are therefore central to the operation of the quasi-market. However the problem with the quasi-market performance indicators is that they are unable to ensure that the outcomes of the model are endogenous; that is, that the actions of the educational institutions form a major determinant of the results. Not only is this problematic in itself but if, as will be shown below, the educational outcomes are linked to the demographic inputs, the entire quasi-market model implodes. If the outcomes - the examination results - are themselves directly linked to the demographic characteristics of the parents who are choosing between schools, clearly increasing parental choice will not effect an increase in educational performance.

2.11
This is the reason that the association between social status - class, race, lone parenthood and so on - and examination performance was, and is, so politically sensitive. Not only does it undermine the entire concept of league tables, but it also challenges the quasi-market model which was their raison d'être, and from which the current regulatory system has emerged. It is this question that the present study investigates.

Economic Status and Educational Performance

3.1
While critics of the performance-indicator regime claim examination results simply reveal '[the] secret that everyone knows' - namely that the socio-economic background of pupils is reflected in the examination results and that therefore the school examination outcome is largely dependent on the input of the pupils' background (Davies, 1999) - this does not mean that schooling has no effect at all. A widespread view is that 'schools can and do make a difference', but only a partial and limited difference, because they are part of a wider society (Gewirtz, 1998; p.439). As Levin puts it:

Socio-economic status continues to be the single best predictor of educational and other life outcomes for young people. Those born to better-educated, higher-income parents get more years of education, better grades, better employment and income results... this is probably the single most robust finding in all of educational research (Levin, 1999; p.313)

3.2
Attempts to separate out school effects on educational outcomes have given results of less than 15%.[6] Contrary to the presumptions of the neo-classical approach, it has been suggested that the structural social elements that form the parameters of the influence of the educational institutions are too strong for schools to overturn in the long run. While exceptional effort and skill on the part of schools can occasionally buck the trend, it seems that the elements of structural inequality eventually assert themselves. As Mortimore and Whitty point out, bucking the trend involves extra effort by schools, both initially and over a sustained period of time. This is a dubious basis for a national strategy (Mortimore and Whitty 2000 p. 15).[7] It could be suggested that the aim of educational policy should be to change the trend rather than buck it.

3.3
Mortimore and Whitty also suggest that previous policies for improving the educational performance of disadvantaged children have not been successful (Mortimore and Whitty 2000; p. 10). The present study considers the relationship between socio-economic status and educational performance. As we saw earlier, this research has a long tradition, going back to the Coleman report in the US and the analysis of Rutter et al.[8] More recently the proportion of pupils entitled to free school meals has been used as a measure of low income, since this is the most readily available measure for individual schools. The UK government Audit Commission (1992) found a 70% correlation between the number of pupils requiring free school meals and the number having special educational needs.[9] In fact, the issue of the examination results reflecting the background of pupils was one of the earliest complaints about the publication of school league tables (Jesson et al 1993, cf. Weinstock 1997 p.13). Jesson et al's study of pupils in Nottinghamshire found a high association between a GCSE[10] 'point scores' measure and the gender, ethnic background, parental occupation, family size and family income of the pupils (McCullum and Tuxford, 1993; p.400). McCullum and Tuxford used the 1991 UK census to factor out a wide range of demographic contextual variables, and achieved correlation coefficients with GCSE results ranging from -86%[11] for unemployment to -65% for overcrowded homes (the second lowest correlation was home ownership at 77%) (McCullum and Tuxford 1993; p.400). Similar associations have been made in a UK Treasury research review, which notes how a quarter of the children from 'problem' housing estates will gain no GCSE passes, compared to the national average of 5%, and truancy rates are four times the average (Davies, 1999).

3.4
The present study uses measures derived not at the level of the individual school, but at the level of local government. The study uses aspects of local government finance as a crude measure of need, and will show a strong association between measures of need and all the indicators of examination performance published by the DfEE.

3.5
Publicly financed education in England is largely organised through 150 Local Education Authorities (LEA). These broadly correspond with, and are financed by, elected Local Authorities, which are themselves mainly financed through local taxation (principally property-based council tax and business tax), and a grant from central (national) government, known as the Revenue Support Grant (RSG). In order to avoid confusion the elected Local Authorities will be referred to as local councils. Up until recently all publicly funded schools in England were financed through the local councils.

3.6
However the question of finance is not the main thrust of the present study. What is being claimed here is that the level of Revenue Support Grant (RSG) paid to local councils gives a reasonable indication of the socio-economic makeup of the locality. This can then be used to test the main objection to the use of examination results as indicators of school and teacher performance mentioned above, namely that they reflect the background of pupils, not the effect of the education provided to them.

3.7
In order to understand why the RSG can be used as a measure of the socio-economic makeup of the locality, a brief explanation of the method of calculation of the RSG may be useful. The RSG is calculated on the basis of estimates of unit costs for the services for which local councils have responsibility: fire and policing; road maintenance; social services and so on. Education is, of course, one of these 'service blocks'. The unit costing is called a standard spending assessment (SSA), since it presumes a uniform costing technique applied across the country, based on the characteristics of the area.

3.8
Each major service block is subdivided into a range of services. In the case of education, this consists of primary, secondary, post-16 and 'other' education. Each of these education sub-sectors has its own SSA calculation, and these are combined to arrive at a total SSA for the Education service block. The SSA is therefore an estimated unit cost for each service. This is multiplied by the number of people using the service, and then multiplied by an area cost adjustment coefficient designed to equalise costs throughout the country. The result of these calculations is further adjusted to match a 'control total', which is a total national spending limit on each service block announced in the government's annual Budget. The results from each of the major service blocks are then aggregated to form a total overall SSA for each local council. This total, then, is an estimate of what each local council will need to spend to deliver its services, based on an estimate of unit costs for the area. This total Standard Spending Assessment is then matched against an estimate of the potential income from local taxation, again estimated from a standardised valuation. The RSG is the difference between these assessments of income and expenditure; it is meant to offset the shortfall in income for local councils (Department for Environment, Transport and the Regions 1997).
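
As a very rough illustration of the arithmetic just described, the sketch below (in Python) works through a single hypothetical council. The helper block_ssa, all the figures, the simple 0.95 control-total scaling and the assumed totals for other service blocks and local tax income are invented for the example; the actual DETR methodology uses far more indicators and adjustments (see the SSA methodology guide linked in the notes).

    # Toy walk-through of the SSA/RSG arithmetic described above.
    # Every figure here is invented; the real DETR formulae are far more detailed.

    def block_ssa(unit_cost, clients, area_cost_adjustment):
        """Estimated spending need for one service sub-block."""
        return unit_cost * clients * area_cost_adjustment

    # Hypothetical education sub-blocks for one council: unit cost (pounds),
    # number of pupils, and an area cost adjustment coefficient.
    education_ssa = (block_ssa(2200, 11000, 1.05) +   # primary
                     block_ssa(2900, 8000, 1.05) +    # secondary
                     block_ssa(3100, 2500, 1.05) +    # post-16
                     block_ssa(400, 21500, 1.05))     # 'other' education

    education_ssa *= 0.95          # notional scaling to the national control total
    other_blocks = 55_000_000      # fire, police, roads, social services, etc. (assumed)

    total_ssa = education_ssa + other_blocks           # assessed spending need
    assumed_tax_income = 60_000_000                    # standardised local tax estimate
    rsg = total_ssa - assumed_tax_income               # grant offsets the shortfall

    print(f"Education SSA: {education_ssa:,.0f}")
    print(f"Total SSA:     {total_ssa:,.0f}")
    print(f"RSG:           {rsg:,.0f}")

On this reading, a council whose assessed spending need greatly exceeds its standardised tax yield receives a large RSG, which is why the grant can serve as a crude indicator of local need.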

3.9
The RSG is therefore an indication of the shortage of income compared to necessary expenditure of each local council. This can be used to give some indication of the socio-economic makeup of the residents of the locality; it can serve as a crude measure of need which is related to socio-economic characteristics, such as poverty, ethnicity and so on. In this regard, it should be pointed out that the education SSA includes elements directly related to social deprivation. It includes free school meals, which as we mentioned above can be used as a measure of poverty, since they are only granted below a given level of household income. The education SSA also includes an estimate of additional expenditure associated with Additional Educational Needs (AEN). These include three factors: the proportion of children having lone parents, the proportion of parents on Income Support (granted to low-income families) and a measure of ethnicity.[12] We will investigate below the extent to which both free meals and the AEN estimate are associated with examination results.

3.10
It is worthwhile stressing that it is not the Education SSAs or the total SSAs that are the relevant measures. These are simply budget calculations based on unit costs. The relevant question is the extent to which the local residents can meet these costs. A high RSG is therefore an indication of generally low economic status within the area.

3.11
There are therefore three measures of socio-economic status that we can obtain from the data for local council finance: the RSG, the allocation for AEN and the allocation for free school meals. These will be compared with the variety of measures of secondary school performance (age 11-16) which are published by the DfEE. The 'headline' measure is the proportion of pupils gaining 5 or more passes at grade C or above in the GCSE. This (combined with Advanced level passes later on) would be a prerequisite for university entrance. The second performance measure published is the proportion of pupils passing 5 GCSEs at any grade, giving an indicator for lower ability pupils. In the last two years the DfEE has also published an 'Average Points' calculation, giving a combined scale for GCSE and vocational qualifications, again to facilitate inclusion of a wider range of pupils. Measures aimed at appraising unsatisfactory aspects of pupils' results and behaviour are the no-pass rate (pupils who pass no examinations whatsoever) and the truancy rate, measured as a percentage of half-days.


Table 1: Measures of Socio-economic Background and Educational Performance

Council Finance Measures
  Revenue Support Grant (RSG)
  Additional Educational Needs (AEN)
  Free School Meals

Educational Performance Measures
  5 or more GCSE grade C passes (high passes)
  5 or more GCSE passes any grade (pass 5)
  Exam points*
  No GCSE passes
  Truancy rate

*(n.b. only available for last 2 years)

3.12
In the next section we shall see the extent to which the measures of socio-economic status - the RSG, AENs and free meals - are correlated with the measures of educational performance. A couple of caveats should be borne in mind, though. Firstly, the time periods over which the two sets of measures are calculated are different, in that local council finance is taken for the tax year to April, whereas the school year runs from September to July. Although it is doubtful that this affects the correlations to a great extent, given that the RSG estimates are not that exact anyway, it may be a source of slight inaccuracy. Secondly, the period of analysis, 1996-1999, was also a period of local government reorganisation, so that for some of the period a few LEAs did not have corresponding local councils and vice-versa. This did not affect most of the councils however, and the sample size remained high. The results were statistically generalisable.

Analysis

4.1
The statistical calculation chosen for investigation of the correlation between the measures of socio-economic background and examination results was Spearman's rank correlation (rs) rather than the more usual Pearson's product moment correlation (r). This calculation, which compares pairs of rankings, was chosen for two main reasons. Firstly, the focus of this study is on the concept of education 'league tables' and the rationale behind them. Our interest here is therefore in the relative effects of the variables at ordinal level, that is, in the rankings, rather than the relationship between the level of grant or assessment of need and the actual proportion of examination passes. The point is to compare the rankings highest through to lowest, not the actual magnitudes. Secondly, these magnitudes are themselves subject to variation. The difficulty of the examinations may vary (see Turner, 1999), but more to the point the levels of RSG are subject to considerable variation due to a range of exogenous variables, in particular political influence, budget decisions and the reorganisation in local government. The associations between rankings will therefore offer a better statistic than the association between magnitudes, although for the sake of completeness both will be reported.[13]
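
As a minimal sketch of the distinction, the fragment below (Python, using SciPy) computes the two statistics side by side on invented figures; the arrays are stand-ins for what, in the study, would be each LEA's RSG per capita and its GCSE pass rate (the actual analysis was run in SPSS, see note 13).

    # Illustrative comparison of Spearman's rank correlation with Pearson's r.
    # The arrays are invented; in the study each pair would be one LEA's RSG per
    # capita and its GCSE pass rate.
    from scipy.stats import pearsonr, spearmanr

    rsg_per_capita = [120.0, 310.5, 95.2, 410.8, 220.1, 180.7]   # hypothetical pounds per head
    high_pass_rate = [52.3, 38.1, 55.0, 29.4, 44.2, 43.0]        # hypothetical % with 5+ passes at C or above

    rho, p_rank = spearmanr(rsg_per_capita, high_pass_rate)      # compares rankings only
    r, p_linear = pearsonr(rsg_per_capita, high_pass_rate)       # compares magnitudes

    print(f"Spearman rs = {rho:.3f} (p = {p_rank:.3f})")
    print(f"Pearson r   = {r:.3f} (p = {p_linear:.3f})")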


Table 2: Five 'high' GCSE passes by RSG/Capita, England 1996-99 (%)

                         1996-97    1997-98    1998-99    Average passes by average RSG 1996-99
Rank correlation (rs)    -70.9%     -71.9%     -64.8%     -69.3%
N                         107        128        147        148

All results significant at the 1% level
SOURCE: DFEE/DETR

4.2
The correlations of Revenue Support Grant with all the measures of secondary school (age 11-16) examination performance were high. The correlation of the headline high passes measure (5 or more passes at grade C or above) with RSG per capita varied between -72% and -65%.

4.3
The negative correlations may seem somewhat surprising at first, since they show that the more revenue the local council obtains in RSG, the lower the examination performance. This may seem paradoxical, but recall that the RSG was designed to counteract a shortfall in the council's taxbase. We would therefore expect that, if the pupils' background does indeed affect their examination performance, the higher this shortfall is (and therefore the higher the RSG), the lower the examination pass rates would be. This would suggest a negative correlation.

Figure 1: High GCSE passes by RSG/Capita, England 1999

[Figure 1: scattergraph of high GCSE pass rates against RSG per capita, with trendline]

SOURCE: DFEE/DETR

4.4
Most revealing in table 2 is the correlation between the average proportion of passes for each LEA over the three years and the average amount of RSG for each council over the same period: the right hand column in table 2. This shows just over a 69% correlation; that is to say, the ranking of LEAs by examination passes corresponds closely to their ranking by RSG. Given that the RSG can be used as an indicator of socio-economic status, this result implies a very strong association between the socio-economic background of pupils and their subsequent examination performance. We can see this graphically in figure 1. The data in the scattergraph in Figure 1 lie quite close to the trendline, indicating a high correlation.

4.5
A similar association is found with the other measures of examination performance. Table 3 shows the correlation of RSG/capita with the 'pass 5' measure: the proportion of pupils with 5 GCSE passes at any grade:


Table 3: Five GCSE passes by RSG/Capita, England 1996-99 (%)

                         1996-97    1997-98    1998-99    Average passes by average RSG 1996-99
Rank correlation (rs)    -75.0%     -71.9%     -57.0%     -65.4%
N                         119        128        146        149

All results significant at the 1% level
SOURCE: DFEE/DETR

4.6
Again, the overall pattern is towards a high correlation, although with more variation over the three year period. The correlation between the average RSG/capita and the exam pass rate averaged over the three years is, at 65%, only slightly lower than the previous correlation for high-grade passes.

4.7
The scattergraph in figure 2 still shows a high association with only slightly more outliers.

Figure 2: Pass 5 GCSE by RSG/Capita, England 1999

[Figure 2: scattergraph of pass 5 GCSE rates against RSG per capita]

SOURCE: DFEE/DETR

The level of correlation remains consistently high even if we use the new 'points' measure published in the last two years. These correlation results are tabulated in table 4.


Table 4: Average Points by RSG/Capita, England (%)

                         1997-98    1998-99
Rank correlation (rs)    -71.6%     -62.0%
N                         128        147

All results significant at the 1% level
SOURCE: DFEE/DETR

4.8
I would caution against placing too much emphasis on a points score as a measure of educational performance however, since it involves an artificial scaling scheme for grade comparison (Glogg and Fidler, 1990; p.42). If the scaling scheme were changed, so that a different number of 'points' were used for each grade, the results would obviously be affected. Whether the particular scaling system used reflects the difficulty of obtaining the respective grades across different examinations will always be a matter for conjecture. It is only when we consider the lowest measure of examination performance, the proportion of pupils with no exam passes, that there is any notable change in the level of correlation.


Table 5: No GCSE passes by RSG/Capita, England 1996-99 (%)

                         1996-97    1997-98    1998-99    Average no-pass rate by average RSG 1996-99
Rank correlation (rs)    -56.7%     -57.1%     -40.3%     -51.8%
N                         107        128        147        148

All results significant at the 1% level
SOURCE: DFEE/DETR

4.9
Table 5 shows the correlation of the proportion of pupils with no exam passes with the RSG ranking. These correlations are some 14 to 18 percentage points lower than the corresponding figures for 5 GCSE passes. Nonetheless the correlation between the average no-pass rate and the average RSG is still over 50%; the ranking of LEAs by no-pass rate remains substantially associated with their ranking by RSG.

4.10
Given the high association between the proportion of pupils gaining no exam passes and the truancy rate (typically around 65-70%), it should not be too surprising to discover that the relationship between truancy and the RSG moves in tandem with the no-pass rate, and is about the same level:


Table 6: Truancy by RSG/Capita, England 1996-99 (%)

                         1996-97    1997-98    1998-99    Average truancy by average RSG 1996-99
Rank correlation (rs)    65.1%      62.8%      53.9%      48.9%
N                         118        128        147        149

All results significant at the 1% level
SOURCE: DFEE/DETR

4.11
As can be seen in table 6, the relative levels of truancy averaged over the period in each LEA have just under a 50% correlation with the average levels of RSG; the ranking of LEAs by truancy rate is closely associated with their ranking by RSG. However, while the investigation of truancy rates is valuable in its own right and truancy is an important aspect of behaviour in schools, it should be borne in mind that it is not in itself a measure of examination performance. Furthermore, as Weinstock points out, the truancy data are the least dependable of all the measures, since they are recorded with the least reliability. Weinstock suggests that there are likely to be variations in both the definitions of 'unauthorised absence' and the practices of recording and reporting (Weinstock, 1997; p.15).

4.12
As mentioned earlier, the education SSA, on which the RSG is based, includes assessments for Additional Educational Needs (AEN) and free school meals. The former, it will be recalled, includes allowances for ethnicity and lone parents. The correlations between these two measures of socio-economic background and the various measures of educational performance are given in table 7. These are broadly in line with the RSG correlations. The rank correlation of the 3-year average AEN is 71.5% for the highest measure - 5 high passes - and 65.8% for 5 GCSE passes. The figures are 73.5% and 64.5% respectively for free school meals. These are very close to the RSG correlations, and are also, it will be recalled, similar to the results from the studies conducted at individual school level.


Table 7: Educational measures by Additional Educational Needs and Free Meals, England 1996-99

                                   AEN (rs)    Free Meals (rs)    N
5 High Passes (grade C or over)
  1996-97                          -72.1%      -74.3%             118
  1997-98                          -72.2%      -74.7%             128
  1998-99                          -71.4%      -72.9%             146
  Avg 1996-99                      -71.5%      -73.5%             146
5 Passes (any grade)
  1996-97                          -74.5%      73.7%              118
  1997-98                          -69.0%      67.9%              128
  1998-99                          -60.6%      59.4%              146
  Avg 1996-99                      -65.8%      64.5%              149
No passes
  1996-97                          64.1%       66.0%              107
  1997-98                          56.6%       63.0%              128
  1998-99                          49.6%       50.4%              146
  Avg 1996-99                      56.9%       57.6%              146
Exam Points
  1997-98                          -73.0%      75.1%              128
  1998-99                          -69.1%      70.8%              147
Unauthorised Absence
  1996-97                          63.7%       62.6%              118
  1997-98                          63.7%       62.7%              130
  1998-99                          53.5%       51.1%              146
  Avg 1996-99                      50.4%       46.7%              148

All results significant at the 1% level
SOURCE: DFEE/DETR

4.13
It would appear then that there is very strong evidence to suggest that the examination result output is indeed closely associated with the socio-economic background of the pupils. The output is dependent to a large degree on the inputs, not the process. This makes the measures largely unsuitable both for quasi-market usage and for the purpose of the public accountability of schools, since they give little indication of the effect of schooling on the examination outcomes.

Alternative Measures of Performance

5.1
As we saw earlier, there was a somewhat predictable criticism of the school performance tables from the outset, some of it pointing out precisely the sorts of association that the present study has confirmed. The attitude of successive governments to the tables has been ambiguous, however, and suggests a confusion of purpose. The official response has been to propose alternative complementary measures. Unfortunately, as we shall see, these measures are of doubtful validity and reliability, and they do not serve the same purpose of informing the quasi-market.

5.2
It should be stressed that the different performance measures are not necessarily interchangeable. Each measure often has a unique validity, and serves a particular purpose. Glogg and Fidler identify four rationales for performance measures:
  1. External Client Choice
  2. External Public Accountability
  3. External Professional Accountability
  4. Internal Management use
(Glogg and Fidler, 1990; p.39)

5.3
The first rationale, external client choice, can be viewed as corresponding with a quasi-market system, and was the rationale for the 'league table' performance measure. If alternative measures corresponding to one of the other rationales are proposed, this in itself suggests a problem with the operation of a quasi-market policy, since this is dependent upon these very performance measures as part of its operation. It is the reluctance to accept that such a problem exists that has led to the ambiguity of UK policy.

5.4
In fact the inclinations of some of the protagonists in educational policy-making in the UK have tended towards modification of the tables. Morrison and Cowan point out the difference between a performance table and a league table that ranks schools in order of merit. They point out that the UK Department of Education (now the Department for Education and Employment [DfEE]) does not provide actual 'league table' rankings of either schools or Local Education Authorities.[14] They suggest the rankings were largely the work of the press (Morrison and Cowan 1996 pp.242-43). However examination pass rates are only really useful as comparative measures anyway; they obtain meaning only in comparison with other schools (Glogg and Fidler, 1990; p.39). In this sense 'league' tables remain intrinsic to the whole question of performance indicators in the quasi-market.

5.5
It has been recognised for some time that the use of 'raw' examination pass rates is problematic, and both 'value-added' and socio-economically weighted measures were being discussed as far back as 1994 (Marston, 1994); they formed part of the UK government White Paper (consultation paper) on education (Department for Education and Employment 1997). We will consider these below, but the different measures are associated with different rationales. The concept of 'value added', for example, might be viewed as relevant for investment decisions or perhaps for external audit (Glogg and Fidler 1990 p.39), such as inspections by the Office for Standards in Education (OFSTED), which is the official education inspectorate. From a recent OFSTED press release it appears that they are indeed used in this manner:

The selection [of local education authorities to be inspected] has been made on the same basis as the 12 LEA inspections announced by OFSTED last year, namely on the performance of schools at Key Stage 2 tests and GCSE examinations. Additionally, this selection has focused on LEAs where school performance is below the national average - although this in itself does not suggest that the LEA itself is performing badly overall. [OFSTED 1998]

5.6
However this marks a change in rationale for the data, and the changed purpose of the measures does not seem to have been recognised. While value added or socio-economic status weighted indicators might be useful for external audit, such as that done by OFSTED, or internal management decisions by the LEA or even individual head teachers, they are of little use to parents choosing between schools. By the same token, there seems little point in publishing sophisticated measures in the national press if they are to be used only for internal management purposes by the LEAs.

5.7
Nonetheless a variety of measures are now duly recorded and published, all of doubtful validity, all contradictory and all having different rationales and purposes. No guidance is given as to the relative importance of each of the measures, or how educational professionals are to react to the indicators. This has led to confusion and exasperation. As a headmaster commented in a recent newspaper interview, it seemed his school was being penalised for having able pupils (Lightfoot, 1998). This is a legitimate criticism; it is harder to 'add value' to pupils of an already high ability.

5.8
The confusion in the signals sent out by education policy-makers may well have its roots in government ambiguity towards both indicators and quasi-markets. At the same time as publishing alternative indicators, which itself suggests problems with the original performance tables, the Secretary of State for Education, David Blunkett, claimed that there was no precise match between poverty and school underachievement (Carvel, 2000; p.11). He even repeated the slogan coined by his department some years before ('poverty is no excuse') to counter the claims that the school examination outcome was influenced by the background of pupils. Yet it is surely this link that is behind the need for the alternative measures to the crude 'league table' of examination pass rates.

5.9
There have been two main measures promoted by the DfEE recently. A new 'value added' measure, based on a comparison of tests of pupils at both school entry and exit, has recently come on-stream. The previous alternative emphasised by the DfEE was the rate of 'improvement', which looks at the year-on-year changes in the GCSE pass rates. The changes published in the DfEE performance tables are for the higher '5 high grade GCSE' level, and for the no-pass rate.

Table 8: Frequencies of significant changes in LEA pass rates 1998-1999, England

Size of change 1998-1999    5 High-grade passes (A-C)    No passes
0-2%                        21                           56
2-4%                        16                           14
5-7%                        6                            2
Total                       43                           72
No of LEAs                  120                          147

SOURCE: ESTIMATES FROM DFEE

5.10
While this calculation seems an obvious one to make, it is doubtful whether it yields a great deal of useful data. In the first place we could expect a 'natural' level of year-on-year variation anyway, depending on the number of pupils and the initial pass rates. For example, the Camden LEA, which has a large number of pupils taking examinations, had a pass rate for 5 'high grade' GCSEs of 45.8% in 1997-8, and 47.1% in 1998-9. The rates for no passes were 6.4% and 6.9% respectively. The change in the 'high grade' pass rate is barely significant at the 1% level, and may even be due to rounding errors. The no-pass change is not significant; we cannot be 99% certain that the change is not due to natural variation. Even applying the same high number of pupils as in Camden to the rest of the country in 1997-8 and 1998-9, relatively few of the year-on-year changes were outside of the 1% confidence levels. Table 8 gives the breakdown of estimates of the changes in pass rates between 1998 and 1999 that we could say with 99% certainty are not due to chance.
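
The paper does not spell out the exact test used; one standard way of making this kind of check is a two-proportion z-test, sketched below. The cohort sizes are assumptions, since the published tables give only percentages, and whether a given change clears the 1% threshold depends heavily on that assumed figure.

    # One common way to test a year-on-year change in a pass rate: a two-sided
    # two-proportion z-test at the 1% level. Cohort sizes below are assumed,
    # since the DfEE tables publish percentages rather than pupil counts.
    from math import sqrt

    def change_significant(p1, p2, n1, n2, z_crit=2.576):
        """Return the z statistic and whether |z| exceeds the 1% critical value."""
        pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
        se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p2 - p1) / se
        return z, abs(z) > z_crit

    # Camden-style rates from the text: 45.8% -> 47.1% high-grade, 6.4% -> 6.9% no passes.
    # The figure of 1,500 examinees per year is purely illustrative.
    print(change_significant(0.458, 0.471, 1500, 1500))
    print(change_significant(0.064, 0.069, 1500, 1500))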

5.11
We can be 99% certain that around one third of the changes in 'high grade' pass rates and around half the changes in the no-pass rate are not due to chance. In the other instances we cannot give any weight to the changes in the pass rates; they do not measure improvements or anything else. Of the significant changes, table 8 shows that very few of the LEAs have variations worth paying much mind to. Even in these cases it may be impossible to allocate a cause for the changes.

5.12
In addition, interpreting these year-on-year changes will involve a consideration of the baseline pass rate from which the change occurred (Gorard, 1999 passim). A change of, say, 4% would have different implications for Camden, with a pass rate of 45% in 1998, than for the Isles of Scilly, with a pass rate of 66.7%. In any event, nothing on earth can be improved indefinitely, and eventually the pass rate must be as high as it can get. All improvements eventually plateau.
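
One way of making the point concrete is sketched below, using the baselines quoted above and the hypothetical 4-point change; the 'headroom' calculation is simply one possible way of expressing how close an LEA already is to the ceiling, not a measure taken from the literature.

    # Illustrative only: the same 4-percentage-point rise reads differently against
    # different baselines (baselines are from the text, the change is hypothetical).
    change = 4.0                                  # percentage points
    baselines = {"Camden": 45.0, "Isles of Scilly": 66.7}

    for name, base in baselines.items():
        relative = change / base * 100            # proportionate improvement
        headroom = change / (100 - base) * 100    # share of the remaining headroom used
        print(f"{name}: +{change} points = {relative:.1f}% relative rise, "
              f"{headroom:.1f}% of the remaining headroom")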

Conclusion

6.1
This study has confirmed what many commentators have long suspected, and what the Secretary of State for Education has been denying: that the school performance tables reflect the background of pupils rather than the ability of teachers and schools. The study investigated over 120 local councils over a three year period, using three measures of socio-economic background, and correlated them with five measures of educational performance. In all 15 cases the correlation was high. Particularly telling were the high correlations for the three year averages, which smoothed out the yearly fluctuations. These ranged from around 70% for the highest ability measure to around 50% for the lowest.

6.2
The publication of examination performance tables was part of the move to a quasi-market model of educational provision, which emphasised financial independence linked to competition and parental (customer) choice. The study clearly shows that the main determinant of the examination results is the status of the parents themselves. Increasing the level of choice will therefore have little effect on the level of educational provision in the country. Furthermore, variations between professionals working in education are not the main cause of variations in examination pass rates. The main determinants of examination success occur outside of schools. Policy aimed solely at teachers and other educational professionals is likely to prove ineffective. In particular, introducing performance related pay on the basis of examination passes is clearly flawed. Indeed the differences between the system for assessing eligibility for performance related pay and that used for assessing school performance are notable, and show up well the ambiguity of government attitudes to examination data. The latter is based on examinations and tests, whereas in the former this data is to be augmented by personal report and review. It seems that the belief is that the performance of teachers cannot be assessed by examination results but the performance of schools can.

6.3
While there has been an acknowledgement of the deficiencies of the crude use of examination performance tables, the suggested alternatives all have considerable shortcomings, and serve different purposes. The objectives in publishing such data are not clear, since they can no longer be used to guide parental choice. The criticisms of individual local councils and Local Education Authorities are also not borne out by this data. In addition, the rationale for extending the publication of indicators to other public sector services is hazy and requires urgent clarification. In the light of the experience of the use of indicators in the education sector, the proposed extension of the publication of indicators to other sectors might justifiably be regarded with suspicion.


Appendix 1: Product Moment Correlations

                                   RSG/capita (r)    AEN (r)    Free Meals (r)    n
5 High Passes (grade C or over)
  1996-97                          -57.9%            -67.2%     -69.8%            118
  1997-98                          -57.4%            -67.0%     -69.7%            128
  1998-99                          -58.1%            -67.2%     -68.8%            146
  Avg 1996-99                      -58.8%            -67.2%     -69.3%            146
5 Passes (any grade)
  1996-97                          -61.3%            -67.1%     -67.7%            118
  1997-98                          -56.1%            -60.8%     -60.1%            128
  1998-99                          -50.1%            -53.4%     -52.4%            146
  Avg 1996-99                      -55.8%            -59.2%     -58.6%            149
No passes
  1996-97                          36.1%             50.1%      55.4%             107
  1997-98                          32.8%             42.8%      56.7%             128
  1998-99                          25.7%             39.1%      41.3%             146
  Avg 1996-99                      33.4%             44.5%      47.2%             146
Exam Points
  1997-98                          -55.2%            -65.5%     -68.0%            128
  1998-99                          -52.5%            -63.1%     -64.9%            147
Unauthorised Absence
  1996-97                          66.3%             66.4%      65.1%             118
  1997-98                          63.3%             65.1%      63.0%             130
  1998-99                          54.4%             53.7%      49.5%             146
  Avg 1996-99                      36.0%             37.9%      34.1%             148

Notes

1Darwin, incidentally, also used both these approaches at various times.

2See for example LeGrand and Bartlett 1993 passim.

3For a discussion of the concept of transaction costs see Mulberg (1995; pp. 133 ff). The concept was largely developed by Ronald Coase (1988).

4But see S. Gorard, Markets in Education and the UK Experiment for a discussion. < http://www.socresonline.org.uk/2/3/annexes/markets.html>

5 Barber 1996, cited in Gewirtz 1998.

6Thomas and Mortimore arrived at a result of 10%, whereas earlier Reynolds and Packer estimated school effects to be between 8% and 15% (Thomas and Mortimore 1996, Reynolds and Packer 1992, cited in Gewirtz 1998 pp.439-40).

7It should also be pointed out that a high association between socio-economic status and exam passes does not determine the outcome of individual pupil attainment (the so-called ecological fallacy). It is, of course, still possible for individuals with ability and application to buck the dominant trend. The point is that, as with schools, this requires extra effort and ability. Such cases therefore are not routine, and the dominant trend remains against such occurrences. This caveat is especially relevant to the present study, which uses regional data.

8Coleman (1966), Rutter et al. (1979)

9Audit Commission/HMI (1992), cited Buck (1993 p. 89)

10The General Certificate of Secondary Education (GCSE) is the main school examination in England, and is sat by most school leavers at age 16. They will typically take several examinations in a range of subjects. The GCSE replaced the General Certificate of Education Ordinary Level (GCE 'O' level); a grade of 'C' or higher in the new GCSE is the equivalent of an old 'O' level pass, and is usually a prerequisite for further study. In addition there are more vocationally oriented examinations (GNVQ).

11To aid ease of understanding I use percentages for correlation coefficients instead of the more usual proportions. See my forthcoming textbook Figuring Figures (Pearson).

12The official calculation of the financial effects of these factors weights lone parents and Income Support 2.5:1 relative to the effect of ethnicity.

13All results were obtained using SPSS under Windows 95.

14Indeed it was quite noticeable how even the computerised data provided by the DfEE is organised in such a manner as to make the compilation of such tables difficult. This author had to spend several days editing the data for the present study.

Links

School performance tables: <http://www.dfee.gov.uk/perform.shtml>
OFSTED: <http://www.ofsted.gov.uk/press/index.htm>
Standard Spending Assessments: <http://www.local.detr.gov.uk/finance/ssa/ssa0001.htm>
SSA methodology guide: <http://www.local.detr.gov.uk/finance/ssa/methg978.htm>
Excellence in Schools DfEE white paper: <http://www.dfee.gov.uk/wpaper/mindex.htm>
Times Newspaper: <http://www.newsinternational.co.uk>
Guardian Newspaper: <http://www.newsunlimited.co.uk>
Electronic Telegraph: <http://www.telegraph.co.uk>

Acknowledgements

The author would like to thank Larry Ray and an anonymous referee for their comments.

References

ADNETT and Davies (1999) Schooling quasi-markets: Reconciling economic and sociological analysis, British Journal of Educational Studies vol.47 (iii), Sept. pp. 221-34.

AUDIT Commission/HMI (1992) Getting in on the Act, London: HMSO.

BARTLETT W. (1993) 'Quasi Markets and Educational Reforms' in Legrand and Bartlett Quasi Markets and Social Policy, London: Macmillan.

BRADLEY S. and Taylor J. (1998) The Effect of School size on Exam performance in Secondary Schools, Oxford Bulletin of Economics and Statistics, Vol. 60 no.3 pp.291-324.

BUCK D. (1993) Value Addition: A reappraisal of School Exam League Tables, Educational Psychology in Practice, Vol. 9 no.2 (July) pp.89- 93.

CARVEL J. (2000) 'Poverty no excuse for failure, says Blunkett' Guardian 2 March, p.11.

COASE R. (1988) The Firm, the Market and the Law, Chicago: University Chicago Press.

COE R. and FITZ-GIBBON, C.T. (1998) School Effectiveness Research: criticisms and recommendations, Oxford Review of Education vol. 24 (iv) pp.421-38.

COLEMAN J.S. et.al (1966) Equality of Educational Opportunity, Washington DC: US GPO.

CUTTANCE P. (1981) School Effects Research: A Synoptic Review of Past Efforts and Some Suggestions for the Future, Australian and New Zealand Journal of Sociology, vol. 17 (iii), 1981 pp.65-69.

DAVIES N. (1999) 'Poverty is the key: not just an excuse', Guardian 2 March 2000.

DEPARTMENT for Education and Employment (1997) Excellence in Schools (white paper), London: HMSO.

DEPARTMENT for Environment, Transport and the Regions (1997) SSA Methodology Guide, London: HMSO.

GEWIRTZ S. (1998) Can All Schools be Successful? An exploration of the determinants of school 'success', Oxford Review of Education vol. 24 (iv) pp.439-57.

GLOGG M. and FIDLER, B. (1990) Using Examination Results as Performance Indicators in Secondary Schools, Educational Management and Administration, Vol.18 no.4 pp.38-55.

GORARD S. (1999) Keeping a Sense of Proportion: The 'Politicians Error' in Analysing School Outcomes, British Journal of Education Studies, Vol. 47 no. 3 (September) pp.235-46.

GORARD, S. (1997) 'Market Forces, Choice and Diversity in Education: The Early Impact' Sociological Research Online, vol. 2, no. 3, <http://www.socresonline.org.uk/2/3/8.html>.

JESSON D. et al (1993) 'What the league tables don't reveal', Education, 9 April p.274.

LEGRAND J. and Bartlett W. eds. (1993) Quasi-Markets and Social Policy, London: Macmillan.

LEVACIC R. and Hardman J. (1988) Competing for Resources: the impact of social disadvantage and other factors on English secondary schools' financial performance, Oxford Review of Education, Vol. 24 no. 3 pp. 303-328.

LEVIN B. (1999) Editorial: Class and equity in a new era of social policy, British Journal of Educational Studies, vol. 47 iv. (December), pp.313-16.

LIGHTFOOT L. (1998) 'Heads demand school grades are scrapped', Electronic Telegraph 14 November.

MARSTON P. (1994) 'Exam table changes will account for pupil intake', Electronic Telegraph 23 November.

MCCULLUM, I. and TUXFORD, G. (1993) Counting the Context, Education 26 November p.400.

MORRISON H.G. and COWAN, P.C. (1996) The State School Book: a critique of a league table, British Educational Research Journal Vol. 22 no. 2 (April) pp. 241-49.

MORTIMORE, P. and WHITTY, G. (2000) Can school improvement overcome the effects of disadvantage? London: Institute of Education.

MULBERG J. (1995) Social Limits to Economic Theory, London: Routledge.

OFSTED (1998) 'Three more local education authorities to be inspected by OFSTED' Press release PN98-12, 8 April, Office For Standards In Education.

REYNOLDS, D. and CUTTANCE, P. (eds.) (1992) School Effectiveness: research, policy and practice, London: Cassell.

REYNOLDS D. and PACKER, A. (1992) 'School effectiveness and school improvement in the 1990s', in Reynolds and Cuttance (1992).

RUTTER M. et.al (1979) Fifteen Thousand Hours: Secondary Schools and their effects on Children, London: Open Books.

THOMAS, S. and MORTIMORE, P. (1996) Comparison of value added models for secondary school effectiveness, Research Papers in Education, 11 (1) pp. 5-33.

TURNER (1999) 'GCSE is failing the test', Electronic Telegraph 27 August.

WEINSTOCK A. (1997) League Tables: for better, for worse?, British Journal of Curriculum and Assessment Vol. 7 no. 3 (summer) pp.13-15.

