Australian news, and some related international items

Dr Ian Fairlie on Epidemiological Evidence of Cancer Risks

The Hazards of Tritium, Dr Ian Fairlie, March 13, 2020:   “… Epidemiological Evidence of Risks. Because of methodological limitations, epidemiology studies are a blunt tool for discovering whether adverse effects result from radiation exposures. These limitations include:

  • under-ascertainment …
  • strict data requirements …
  • confounding factors: the true causes of morbidity or mortality can be uncertain due to confounding factors such as socio-economic status and competing causes of death
  • bias …
  • poor signal to noise …
  • uncertain doses …
  • wide confidence intervals …
Many epidemiology studies are ecologic studies, that is, quick, inexpensive studies which look at health statistics in tables and not at individual data. Their findings are usually regarded as indicative, but not conclusive. If their findings suggest an adverse effect, then these should be investigated further by more detailed cohort or case-control studies. The latter match “cases” (i.e. those with an adverse health effect) with randomly-selected similar individuals without an adverse effect, in order to minimise under-ascertainment. However, few of these are actually carried out because of their expense and long time-spans. Sometimes they are not carried out for political reasons, because findings of increased cancers are not welcome.
A disconcerting finding is that a substantial number of epidemiology studies near nuclear power plants (NPPs) conclude there are no findings of ill health even though positive increases were in fact observed. That is, the researchers were unable to accept the evidence of their own work. It is difficult to comment on this cognitive dissonance (few studies seem to exist on this phenomenon), but it is apparently often due to unacknowledged biases, or to group-think regarding the supposed impossibility of ill-health effects near nuclear facilities. In their conclusions, such authors have discounted their findings using a variety of reasons …
However there is a serious problem here. If similarly increased health effects had been observed near, say, a lead smelting factory or an asbestos mine, would they be dismissed by referring to these rationales? I rather doubt it. In other words, what is occurring here is that hidden biases in favour of nuclear power are in play. In my view, such conflicts of bias should be declared at the outset just as conflicts of interest are nowadays.

The Abuse of Statistical Significance Tests

Many epidemiology studies of cancer near NPPs have found increased risks but dismissed them as not “statistically significant”. This wording often misleads lay readers into thinking that a reported increase is unimportant or irrelevant. But, in statistics, the adjective “significant” is a specialist word used to convey a narrow meaning, ie that the likelihood of an observation being a fluke is less than 5% (assuming a 5% test level is used). It does not mean important or relevant.
Also, this phrase is usually employed without explaining that the chosen significance level is quite arbitrary. There is no scientific justification for using a 5% level or any other test level: it is merely a matter of convenience. In other words, it is quite possible for results which are “not significant” when a 5% test is applied to become “significant” when a 10% or other test level is used.
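The arbitrariness of the test level can be made concrete with a short sketch. The numbers below are purely hypothetical (117 cancers observed where 100 were expected — not data from any study cited here), and the function is an illustrative normal-approximation calculation, not a method from the article:

```python
import math

def poisson_z_pvalue(observed, expected):
    """Two-sided p-value for an observed case count against an expected
    count, using the normal approximation to the Poisson distribution:
    z = (O - E) / sqrt(E).  Illustrative sketch only."""
    z = (observed - expected) / math.sqrt(expected)
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))  # standard normal CDF
    return 2.0 * (1.0 - phi)

# Hypothetical excess: 117 cancers observed, 100 expected.
p = poisson_z_pvalue(117, 100)
print(f"p = {p:.3f}")                    # ~0.089
print("significant at 5%? ", p < 0.05)   # False
print("significant at 10%?", p < 0.10)   # True
```

The same 17% excess is declared “not significant” at the 5% level yet “significant” at the 10% level, although nothing about the underlying data has changed.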
The existence of this practice has historical parallels. In the 1950s, dozens of health studies financed by tobacco companies acted to sow seeds of doubt about the health effects of cigarette smoking for many years. The use of statistical significance was a common stratagem in these studies, as described in US books (see here and here). Similarly, pharmaceutical companies have been shown to run trials on their own drugs designed to minimise their side effects (see here). Again, the lack of statistical significance was used as a ploy in these trials.

However, these bad practices may soon have to stop. In March 2019, the journal Nature published an important editorial, “It’s time to talk about ditching statistical significance”, which argued against the use of statistical tests in health studies. The same edition contained a commentary, “Scientists rise up against statistical significance”, signed by 853 scientists worldwide, about 80 of them in the UK. It called for an end to, inter alia, “the dismissal of possibly crucial effects” in health studies through the inappropriate use of statistical testing. It also noted that, in the US, the American Statistical Association (ASA) had published a scientific article with the same aim.

The Nature editorial stated that statistical tests will continue to be needed in some industrial applications where a yes/no decision is required, but crucially not in health research, ie epidemiology studies and clinical trials. Why? Because their use in health studies can be biased due to ulterior motives or be insufficiently nuanced.

The Nature article explained that many health researchers in the past had “chucked their results into a bin marked ‘not significant’ without further thought”. Instead, researchers should have considered matters such as “background evidence, study design, data quality and an understanding of underlying mechanisms, as these are often more important than p values or confidence intervals”. In particular, they should have discussed the health implications of their non-statistically significant findings.
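One alternative to the binary significant/not-significant verdict is simply to report the effect estimate with its confidence interval and let readers weigh it. The sketch below, using the same hypothetical counts as above (117 observed vs 100 expected) and a standard log-normal approximation of my own choosing, shows what that report might look like:

```python
import math

def rate_ratio_ci(observed, expected, z_crit=1.96):
    """Rate ratio O/E with an approximate 95% confidence interval,
    using the log-normal approximation SE[ln(O/E)] ~ 1/sqrt(O).
    Illustrative sketch only, not a method from the article."""
    rr = observed / expected
    se = 1.0 / math.sqrt(observed)
    lower = math.exp(math.log(rr) - z_crit * se)
    upper = math.exp(math.log(rr) + z_crit * se)
    return rr, lower, upper

# Hypothetical counts: 117 cancers observed, 100 expected.
rr, lower, upper = rate_ratio_ci(117, 100)
print(f"rate ratio = {rr:.2f}, 95% CI ({lower:.2f}, {upper:.2f})")
# rate ratio = 1.17, 95% CI (0.98, 1.40)
```

Reported this way, the interval only just crosses 1.0 and sits mostly above it — a picture of a possible real excess worth investigating, which the bare phrase “not statistically significant” conceals entirely.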

The misuse of statistical significance is an important issue for four reasons. First, because the use of statistical significance tests has often led to the wrong result, especially in clinical trials, and the same is true in epidemiology studies in my experience. Several authors have reported that the rejection of findings for significance reasons can often hide real risks (Axelson, 2004; Whitley and Ball, 2002).

Second, as Nature states, “the rigid focus on statistical significance encourages researchers to choose data and methods that … yield statistical non-significance for an undesired result, such as potential side effects of drugs — thereby invalidating conclusions.” This damning verdict applies with equal force to the undesired result of observed increases in health effects in an epidemiology study. For decades, some scientists, sadly including those employed at UK government agencies, have dismissed risk findings in epidemiology studies near nuclear facilities by concluding they showed no “significant” raised risks, or that excess risks were “not significant”, or similar phrases.

A third reason, also mentioned in the Nature article, is that we must re-examine past studies which used lack of statistical significance to dismiss observed increases, as these conclusions are now unreliable. This verdict applies, for example, to past studies by the UK Government’s Committee on the Medical Aspects of Radiation in the Environment (COMARE) which observed leukemia increases near UK nuclear facilities but dismissed them because they were not statistically significant. These include, for example,

COMARE (2011) Committee on Medical Aspects of Radiation in the Environment Fourteenth Report. Further Consideration of the Incidence of Childhood Leukaemia Around Nuclear Power Plants in Great Britain. HMSO: London.

COMARE (2016) Committee on Medical Aspects of Radiation in the Environment (COMARE) Seventeenth Report. Further Consideration of the Incidence of Cancers Around the Nuclear Installations at Sellafield and Dounreay. HMSO: London.

The fourth reason is the vital factor of size in epidemiological studies, ie the numbers of observed cases of ill effects in a population. This is because the probability (ie the p-value) that an observed effect may be a fluke is affected by both the magnitude of the effect and the size of the study (Whitley and Ball, 2002; Sterne and Smith, 2001). If the study size is small, its findings often will not be statistically significant regardless of the presence of the adverse effect (Everett et al, 1998) …
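This size effect can be seen directly by running the same relative excess through small and large hypothetical studies. The counts below are invented for illustration (roughly a 17% excess in both cases), and the p-value function is the same illustrative normal approximation as before, not a calculation from the article:

```python
import math

def poisson_z_pvalue(observed, expected):
    """Two-sided normal-approximation p-value for O observed cases
    against E expected cases.  Illustrative sketch only."""
    z = (observed - expected) / math.sqrt(expected)
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))  # standard normal CDF
    return 2.0 * (1.0 - phi)

# The same ~17% excess, at two hypothetical study sizes:
p_small = poisson_z_pvalue(35, 30)      # small study: 35 cases vs 30 expected
p_large = poisson_z_pvalue(3500, 3000)  # large study: 3500 vs 3000
print(f"small study p = {p_small:.2f}")   # ~0.36: "not significant"
print(f"large study p = {p_large:.2g}")   # far below 0.001
```

An identical relative risk is invisible to the significance test in the small study and overwhelming in the large one — which is precisely why “not statistically significant” in a small study near a nuclear facility cannot be read as evidence of no effect.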

March 19, 2020 - Posted by | General News
