In the previous post CanCERN’s questions were set out, followed by CERA’s responses. The choice of the word “responses” is deliberate, as the material supplied by CERA fell well short of being answers.
Let’s look at the questions and responses, and add in some Avonside Blog commentary (in blue):
1. Why were the On-line survey results not part of this release? What was the reasoning behind this and can you give a clearer timeframe for when they will be released?
The Wellbeing Survey garnered responses from 2381 residents selected randomly from the electoral roll in Christchurch city, the Waimakariri and Selwyn districts. Once that had been completed the survey was then extended to all members of the public in greater Christchurch.
Because it was collected separately and at a later date, the data from this group is still being analysed but once this is completed we will release the findings. We will let you know when this is coming up.
Seems appropriate, although I wonder if CERA says analysed but means interpreted. Will the data be analysed in exactly the same way as the first survey? Logically it should be; however, it doesn’t pay to make assumptions.
2. Will these results be analysed separately or will they inform the overall results of the survey?
The results are deliberately being analysed separately as the opt-in results are not representative of the population as a whole. People self-selected, which creates a natural bias in the data. CERA undertook the opt-in survey to better understand the views of those who have taken the time to put them forward, and the results will help inform CERA’s recovery planning and decision-making.
Yes and no.
Yes, the opt-in survey is biased, but only towards those who have something to say. Should an election result be disqualified because only those who had an interest voted? Why should a survey of this nature be treated any differently? In the absence of information on geographical distribution, it is not accurate to say this bias differs in extent from any bias that may exist in the data from the first survey.
No, the initial survey is not yet demonstrably superior.
To begin with, it was based upon electoral rolls. The introduction to the report talks about “The Electoral Roll”. How is that defined? Were they paper rolls from the last election, or updated electronic versions from the Chief Electoral Officer? Local body or central government rolls? What was the state of the rolls, and how accurate were they? This has not been disclosed, which casts doubt on the validity of the sample.
Which electorates were used, how many were chosen from each electorate, and how many responded from each electorate? As you can imagine, a low response (or selection) rate from Christchurch East and Christchurch Central, along with a high response rate from Wigram and Ilam, would give a fundamentally different picture from one where each electorate was reported on separately. From a social science perspective this disqualifies a lot of the data as it is currently presented.
How many letters were returned as undeliverable, and what was the geographic distribution of those returns?
There was a low uptake (52%) and no satisfactory evidence that those who did respond are representative, either geographically or of the substantive social issues (see policy below).
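To put the numbers above in context, here is a minimal sketch of what the 2,381 respondents and the 52% uptake imply, assuming simple random sampling (the only figures taken from the post are those two; everything else is standard arithmetic):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion from a simple random sample.

    Note this captures sampling error only; nonresponse bias, the
    central concern raised above, is NOT reflected in this figure.
    """
    return z * math.sqrt(p * (1 - p) / n)

respondents = 2381        # figure quoted in CERA's response
response_rate = 0.52      # the 52% uptake mentioned above

invited = round(respondents / response_rate)  # implied number of invitations

print(f"Approx. invitations sent: {invited}")
print(f"Sampling margin of error: +/-{margin_of_error(respondents):.1%}")
```

The point of the sketch is that a headline margin of error of roughly ±2% can coexist with a 48% nonresponse rate, and the latter is where the representativeness question lives.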
How will the data from the two surveys be weighted when CERA undertakes its recovery planning and decision-making? Will interpretations be adjusted for what the marketers might consider to be the bias of self-selected samples?
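One standard way to adjust for the over- and under-representation raised in that question is post-stratification weighting. A minimal sketch follows; the strata names and all counts are invented for illustration, since CERA has released no such breakdown:

```python
# Post-stratification: weight each respondent so the sample's strata
# proportions match known population proportions. All figures below
# are hypothetical.

population_share = {"east": 0.40, "west": 0.60}   # known (e.g. census) shares
sample_counts    = {"east": 200,  "west": 800}    # respondents per stratum

n = sum(sample_counts.values())

weights = {
    stratum: population_share[stratum] / (count / n)
    for stratum, count in sample_counts.items()
}

# An under-represented stratum gets a weight above 1, an
# over-represented one a weight below 1.
print(weights)
```

The catch, of course, is that weighting requires exactly the geographic identifiers whose absence the post complains about.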
3. How will the results of this survey be incorporated into other wellbeing measures and what can people expect to see as a result of the survey results?
The Survey results will specifically inform operational and strategic decision making and the results are being used in multiple arenas i.e. informing Psycho-Social Action planning, informing the monitoring of SCIRT activities, informing policy thinking around housing etc.
This doesn’t make much sense. In part that is because it is jargon – the question is whether it is the jargon of PR spin, ignorance, or deceit. It also lacks sense because it is unclear what data will be used for what purpose.
How will the results specifically inform operational and strategic decision making? Who will use it? How? How important will it be to those activities? Can this be demonstrated?
What are the multiple arenas? (etc. is a lazy and at times deceitful word). The CDHB had a representative on the survey group as a client so there must be health planning issues involved. What are they? Is the information gathered by this survey sufficiently robust to be used for health planning?
How can psycho-social action planning and policy thinking around housing etc. be informed by location-less data (see Q4 below)? These things are based upon who needs what, where, when and how much.
It would be interesting to see the data objectives and specifications prepared for the establishment of this project.
4. Will the results all be broken down so that TC3 and an East/West analysis can be made?
Some initial analysis of the data by land zone and TC3 has occurred. There is no accepted boundary designation for ‘east’ and ‘west’ that could be used to undertake an analysis, and CERA’s analysis looks across the greater Christchurch region, including the south and north of Christchurch.
A politically motivated response? Probably. A working boundary could easily be developed if there was a will. East of Linwood Ave through to Stanmore, North Parade and Marshlands Road would be a good place to start. More than once, court judges have referred to problems being caused in the “east” by miscreants and criminals. Are the judges using the term “east” in an unaccepted way? Chances are the decision not to break the information down is a premeditated one; the release of area-specific analysis would not serve any useful political purpose.
It appears that responses are being closely tied to specific areas, and possibly neighbourhoods too. CERA says that it has carried out some initial analysis by “land zone and TC3” (isn’t TC3 a land zone?). For this to have happened the survey data must contain location identifiers for each respondent. As far as I can tell from the survey form no one was asked what area they lived in, so there must have been tagging in the background to enable the connection of responses with land zones (maybe from a coding of addresses? Every street is coded for land zoning in CERA’s GIS system so it wouldn’t have been difficult).
On the basis of the previous paragraph it should be possible to find out how many respondents there were per land zone (Red, TC3, TC2 and TC1) and their generalised distribution throughout the greater Christchurch area. Releasing these numbers would allow the integrity of the survey, and its interpretation, to be verified.
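If those per-zone counts were released, the verification could be as simple as a goodness-of-fit test against the zones’ known shares of the population. A sketch with invented numbers (the real counts and population shares would have to come from the release the post is asking for):

```python
# Chi-square goodness-of-fit: does the mix of respondents per land zone
# match the zones' shares of the population? All figures below are
# hypothetical, for illustration only.

observed  = {"Red": 150, "TC3": 300, "TC2": 900, "TC1": 1031}  # respondents
pop_share = {"Red": 0.05, "TC3": 0.15, "TC2": 0.40, "TC1": 0.40}

n = sum(observed.values())
chi_sq = sum(
    (observed[zone] - pop_share[zone] * n) ** 2 / (pop_share[zone] * n)
    for zone in observed
)

# The 5% critical value for 3 degrees of freedom is 7.815; a larger
# statistic suggests the sample's zone mix differs from the population's.
print(f"chi-square = {chi_sq:.1f} (critical value 7.815, df = 3)")
```

With these made-up counts the statistic comfortably exceeds the critical value, which is the kind of check that cannot be run while the counts stay unpublished.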
Following on from all this is the opportunity to ask really useful questions, such as those below, to check how vulnerable or potentially vulnerable groups are getting on, and what their future prospects might be.
How many selected from the Waimakariri area lived in Kaiapoi?
How many from Kaiapoi responded?
How many of the Kaiapoi residents who responded were Red Zoned?
How many in TC3 were selected to take part in the survey?
How were they distributed?
How many TC3 forms were returned undelivered?
How many in TC3 completed the survey?
How many in TC3 responded Good or Extremely Good to Q15?
How many in TC3 are aged over 65?
How many of those in TC3 have an income less than $30,000?
It is interesting to note that no one from Statistics New Zealand was involved in the client/design area. The bottom line at the moment remains that this was a marketing exercise rather than a social science survey. As such it should not be given credence.
An earlier blog entry on the content and quality of the wellbeing survey is here.