Keyword Search

2 articles matched your search for the keywords:

Significance Testing, Inverse Probabilities, Statistical Errors, Ethics of Social Research

Damaging Real Lives Through Obstinacy: Re-Emphasising Why Significance Testing is Wrong

Stephen Gorard
Sociological Research Online 21 (1) 2

Keywords: Significance Testing, Inverse Probabilities, Statistical Errors, Ethics of Social Research
Abstract: This paper reminds readers of the absurdity of statistical significance testing, despite its continued widespread use as a supposed method for analysing numeric data. There have been complaints about the poor quality of research employing significance tests for a hundred years, and repeated calls for researchers to stop using and reporting them. There have even been attempted bans. Many thousands of papers have now been written, in all areas of research, explaining why significance tests do not work; there are too many for all to be cited here. This paper summarises the logical problems as described in over 100 of these prior pieces. It then presents a series of demonstrations showing that significance tests do not work in practice. In fact, they are more likely to produce a wrong answer than a right one. The confused use of significance testing has practical and damaging consequences for people’s lives. Ending the use of significance tests is therefore a pressing ethical issue for research. Anyone who knows these problems, as described over the past hundred years, and who continues to teach, use or publish significance tests is acting unethically and knowingly risking the damage that ensues.

Damaging the Case for Improving Social Science Methodology Through Misrepresentation: Re-Asserting Confidence in Hypothesis Testing as a Valid Scientific Process

James Nicholson and Sean McCusker
Sociological Research Online 21 (2) 11

Keywords: Significance Testing, Inverse Probabilities, Statistical Errors, Ethics of Social Research
Abstract: This paper is a response to Gorard’s article, “Damaging real lives through obstinacy: re-emphasising why significance testing is wrong” in Sociological Research Online 21(1). For many years Gorard has criticised the way hypothesis tests are used in social science, but recently he has gone much further, arguing that the logical basis for hypothesis testing is flawed: that hypothesis testing does not work, even when used properly. We have sympathy with the view that hypothesis testing is often carried out in social science contexts when it should not be, and that outcomes are often described in inappropriate terms, but this does not mean that the theory of hypothesis testing, or its use, is flawed per se. There needs to be evidence to support such a contention. Gorard claims that those who continue to teach, use or publish significance tests are “acting unethically, and knowingly risking the damage that ensues.” This is a very strong statement, which impugns the integrity, not just the competence, of a large number of highly respected academics. We believe the evidence he puts forward in this paper does not stand up to scrutiny: Gorard misrepresents what hypothesis tests claim to do, and in the simulation he constructs he uses a sample size he should know is far too small to discriminate a 10% difference in means reliably. He then presents that simulation as a reasonable model of emotive contexts in which a 10% difference would be important to detect, misrepresenting what the simulation actually shows.
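The sample-size point in the abstract above can be illustrated with a standard power calculation. The sketch below is not taken from either paper: it assumes, purely for illustration, a two-sided two-sample z-test and a standardised effect size of d = 0.1 (a 10% difference in means when the standard deviation equals the mean). Under those assumptions, a test with 100 cases per group has very low power, while roughly 1,571 cases per group are needed for the conventional 80% power.

```python
from math import sqrt, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function via erf."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def power_two_sample(d, n_per_group, z_alpha=1.959964):
    """Approximate power of a two-sided two-sample z-test for a
    standardised mean difference d with n_per_group cases per group.
    z_alpha is the two-sided 5% critical value by default."""
    ncp = d * sqrt(n_per_group / 2)  # non-centrality of the test statistic
    # Probability of rejecting in either tail under the alternative:
    return norm_cdf(ncp - z_alpha) + norm_cdf(-ncp - z_alpha)

# Illustrative effect size d = 0.1 (e.g. a 10% difference in means
# when the SD equals the mean) -- an assumption, not a figure from
# either article.
print(power_two_sample(0.1, 100))   # small sample: power near 0.1
print(power_two_sample(0.1, 1571))  # ~1571 per group gives ~80% power
```

With only 100 cases per group, such a test would detect a true 10% difference barely one time in ten, so a failure to reach significance in a simulation of that size says little about the difference itself.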