ORIMA Research submission re nuclear waste dump siting – all about their survey methods
ORIMA Research Submission to Senate Inquiry into the selection process for a national radioactive waste management facility in South Australia (Submission No 108)
We are aware that a number of submissions made to the Inquiry into the selection process for a national radioactive waste management facility in South Australia question or are critical of the surveys ORIMA Research conducted for the Department of Industry, Innovation and Science.
We find this quite understandable given the sensitivity of the subject matter to many people in the communities involved, and the quite technical nature of the work we conducted. However, as specialist providers of these services, we remain very confident that the methodologies and processes we employed delivered the most reliable indicators of sentiment possible in each community within the constraints of the situation, which was our brief.
In considering the comments made to the Inquiry about the surveys, it is important to note that they relate primarily to just one of the five communities covered by the sentiment survey: Barndioota in SA. The five communities had quite different characteristics in terms of population, geography and potentially relevant boundaries. The Barndioota site was located in the smallest of the five communities, a fact relevant to some of the points raised about the survey in the submissions and responded to below.
ORIMA worked to the requirements of the Department (and the Independent Advisory Panel, IAP) to design and deliver a methodology that could be consistently and reasonably applied to all five communities simultaneously (and, indeed, with consideration of the potential need to replicate the methodology to produce comparable results in other communities in the future if necessary). In practice, this meant that the standardised methodology applied is not necessarily the same one that would have been used if a single community had been the sole focus. The use of different methods across communities would have failed to deliver on the Department’s key requirement of reliable, comparable data. This is important, because it means that any methodological suggestion that could have been considered for Barndioota would also have had to be practical in the four other communities to be viable for the task – and this is the primary answer to some of the questions raised about the method specifically for Barndioota (such as those around sampling and data collection methods).
While we understand that in each of the communities surveyed there will inevitably be individuals displeased by the results (whichever way they fell), the processes put in place were intended to provide the fairest possible indicator of the distribution of sentiment across each of the five communities. Given the diverse results across the different communities, we feel professionally comfortable and confident that the survey delivered effectively on this intention.
General comments about the survey process
The survey process was guided by three perspectives:
1. The Department’s requirements – including for information that enabled it to progress the specific stage the overall project had reached, timeframes for providing that data to the Minister, and its available budgets.
2. ORIMA’s professional expertise as a specialist provider of community research, including our obligations under our industry Code of Professional Conduct, the Privacy Act 1988, and our ISO 20252 accreditation.
3. The project Independent Advisory Panel (IAP), which reviewed and commented on the proposed method.
We felt that the combination of these perspectives resulted in a strong and pragmatic fit-for-purpose methodology.
As noted previously, a key consideration was that the survey process needed to be implemented in a consistent manner across five communities in SA, NSW, Qld and the NT. Some of the final processes implemented and their rationale will be commented on in more detail in the next section, though all are also documented in the report.
The Department ultimately requested that the final report not place any commentary or interpretation around the data, but rather allow it to stand very clearly on its own in a comparable and objective form. This is why the final report does not contain some of the narrative content that some submissions have commented on.
The Department also requested that the results be accompanied by a transparent and specific explanation of the processes, which can be seen in the very detailed Methodology section of the report (starting on page 87). While this level of detail may be overly technical for some lay readers (and inevitably cannot include every single specific detail a technical reader may think of), it is intended to ensure that comprehensive information on both the strengths and the limitations of the method can be seen. Answers to many of the questions raised in the submissions can be found in these details (for example, the survey was not conducted entirely using landline phone numbers, but also utilised the small number of mobile numbers that could be geolocated to the very small areas in question).
The survey methodology was also piloted prior to its full implementation, and the results of this were considered and implemented in the final survey. There were two key learnings:
1. The pilot indicated the original survey was too long, but even with that longer version more than three quarters of pilot respondents gave either no comment or positive comments about the survey (79%, p92), and indicated that none of the questions concerned or confused them (78%, p93). The survey was cut back to the final version documented in the report, in collaboration with the Department, to ensure it was as short as possible while still providing the primary information outcomes, in order to minimise respondent burden and maximise response rates.
2. Feedback was also received in the pilot that, due to the importance of the subject to some people in the communities, a standard practice of survey research should be adapted in this case. In keeping with common survey practice, quotas on age and gender were intended to be applied – a process that ensures the final sample is optimally representative of the wider population, allowing the projection of the survey results to the wider population. In this case, people felt that being ‘screened out’ because a quota was full prevented them from having their say on the subject. The decision was taken by the Department to allow every person in a contacted household who wanted to participate to do so – to ensure no individual could feel they were unreasonably excluded. The inevitable disparity this process introduced into the final samples was corrected by a statistical weighting process (see pages 99-100 of the report for details; a generic illustration of how such weighting works is sketched below). This is also standard practice in surveys, with weighting an important part of maximising the representativeness of many survey samples collected by organisations such as professional research providers, academics and the ABS.
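For readers unfamiliar with statistical weighting, the short sketch below illustrates the general principle of post-stratification weighting by age and gender. It is illustrative only: the population benchmarks, age categories and sample counts shown are hypothetical, and it does not reproduce the actual weighting procedure documented on pages 99-100 of the report.

```python
# Illustrative post-stratification weighting by age group and gender.
# All figures are hypothetical and do not reproduce the actual weighting
# cells or population benchmarks documented in the report (pp. 99-100).
from collections import Counter

# Hypothetical population benchmarks: the share of the survey area's adult
# population falling into each (age group, gender) cell.
population_props = {
    ("18-39", "F"): 0.18, ("18-39", "M"): 0.17,
    ("40-59", "F"): 0.17, ("40-59", "M"): 0.16,
    ("60+",   "F"): 0.17, ("60+",   "M"): 0.15,
}

# Hypothetical achieved sample: one (age group, gender) tuple per respondent.
# Older respondents are deliberately over-represented here to show the effect.
sample = (
    [("60+", "F")] * 40 + [("60+", "M")] * 30 +
    [("40-59", "F")] * 25 + [("40-59", "M")] * 20 +
    [("18-39", "F")] * 18 + [("18-39", "M")] * 13
)

n = len(sample)
sample_props = {cell: count / n for cell, count in Counter(sample).items()}

# Each respondent's weight is the ratio of their cell's population share to
# its sample share: over-represented groups are weighted down and
# under-represented groups are weighted up.
weights = [population_props[cell] / sample_props[cell] for cell in sample]

for cell in sorted(population_props):
    print(cell, round(population_props[cell] / sample_props[cell], 2))
```

The weighted results then reflect the population profile rather than the achieved sample profile, which is the purpose of the correction described above.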
The final survey results, as reported, showed substantial variations in all indicators and measures across the five communities – including in the headline sentiment indicator. This clearly shows that the processes implemented were not in any way predisposed to eliciting a particular result – and indeed the greatest care of all throughout the survey was to ensure this was the case.
Responses to specific questions or critiques of the survey
The comments and criticisms from people who were displeased with the results of the survey in Barndioota identify a number of topics, and we address some of these specifically below.
1. Sample size and response rates: In general terms, it is true that in research larger sample sizes are usually desirable – though only so long as they are not obtained in a manner that could introduce an invisible bias into the sample, and hence into the results obtained. Much of the process we apply in a survey, and especially this survey, involves trying to maximise the sample size without compromising the representativeness of the survey sample. For example, ‘opt-in’ survey methodologies can sometimes generate a larger sample size, but because the sample’s representativeness cannot be assessed and is often questionable, the results are less projectable even with the larger sample. This consideration has a number of flow-on effects in the survey process, especially where we need a method that works as consistently as possible across five disparate communities:
a. Sample was drawn from a single common source across all communities. Principles for defining survey areas were developed and then applied equally to all five sites (see page 89 of the report). All of the sample available within the defined areas was obtained from Australia’s leading provider of survey sample (the same source used for countless Government and commercial surveys every year). This sample included all available landlines and mobile numbers, with mobile numbers making up 11% of the total sample available across the five communities. The small proportion of mobiles reflects the fact that most mobile numbers in Australia (and even more so in 2016 than in 2018) cannot be accurately geolocated for the purposes of a survey.
b. The opening preamble to the survey is deliberately general, to avoid eliciting disproportionately higher levels of response from some groups of the community.
c. The questions are deliberately as objective and neutral as possible, allowing respondents a fair chance to respond regardless of the views they express.
d. The survey was deliberately conducted in an anonymous manner, so that respondents could express their genuine opinion with no social consequences.
This standardised process resulted in different sample sizes in different communities, largely reflecting the different population sizes. In general, the three smaller communities (including Barndioota) had higher response rates to the survey, and in those sites the available sample was fully ‘exhausted’, placing a limit on the final sample sizes possible (see pages 95-98).
At the Barndioota site, the final sample size was 146 interviews from 113 households. The final effective sample size after weighting was calculated to be Neff=104, and the estimated margin of error ±9% at the 95% confidence level. This means that 19 times out of 20 (95% of the time) we would expect the real population result to be within no more than ±9% of the survey result (see page 97). This was the highest margin of error for the five sites, with all of them between ±5% and ±9%.
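For readers interested in how the ±9% figure relates to the effective sample size, the sketch below applies the standard margin-of-error formula for a proportion to the reported Neff of 104; the effective sample size is lower than the raw 146 interviews because of the variability of the weights described earlier. This is a generic back-of-envelope check only, not the exact calculation used in the report, which may include further adjustments; it simply yields a figure of broadly the same order as the reported ±9%.

```python
import math

# Generic back-of-envelope check of the reported margin of error, using the
# effective sample size from page 97 of the report. The exact calculation in
# the report may include further adjustments (for example a finite population
# correction in the smaller communities), which are not reproduced here.
n_eff = 104   # effective sample size after weighting (Barndioota, p. 97)
p = 0.5       # worst-case proportion, which gives the widest interval
z = 1.96      # z-score for a 95% confidence level

margin_of_error = z * math.sqrt(p * (1 - p) / n_eff)
print(f"Approximate margin of error: +/-{margin_of_error:.1%}")  # ~ +/-9.6%
```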
The wish expressed in some submissions that the sample size had been higher is a legitimate one, but in practice the sample was adequate to provide an indicative measure of sentiment. Moreover, the only ways to make it higher (in the context of the full five-community survey and the timeframe available) would have been either to risk introducing sampling confounds detrimental to the interpretability of the results, or to modify the method for the one location and so introduce risks to the comparability of results.