Each address in each sector (sampling point) was allocated to one of the portions of the sample: A, B or C. As mentioned earlier, a different version of the questionnaire was used with each of the three sample portions. If one serial number was version A, the next was version B and the third version C. Thus, each interviewer was allocated ten cases from each of versions A, B and C. There were 2,262 issued addresses for each of the three versions of the sample.
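Assuming serial numbers simply cycle through the three versions in order, the allocation can be sketched as follows (an illustrative helper, not the actual sampling code):

```python
def questionnaire_version(serial_number):
    """Allocate a serial number to questionnaire version A, B or C.

    Hypothetical sketch: consecutive serial numbers receive versions
    A, B and C in rotation, so each third of the sample gets one version.
    """
    return "ABC"[(serial_number - 1) % 3]

# Consecutive serials 1..6 cycle through the three versions:
print([questionnaire_version(n) for n in range(1, 7)])
# ['A', 'B', 'C', 'A', 'B', 'C']
```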
Interviewing was mainly carried out between June and September 2013, with a small number of interviews taking place in October and November.
Fieldwork was carried out by interviewers drawn from NatCen Social Research’s regular panel, using face-to-face computer-assisted interviewing. Interviewers attended a one-day briefing conference to familiarise them with the selection procedures and questionnaires, with the exception of very experienced interviewers, who completed a self-briefing covering updates to the questionnaire and procedures.
The mean interview length was 62 minutes for version A of the questionnaire, 67 minutes for version B and 61 minutes for version C. Interviewers achieved an overall response rate of between 53.6 and 53.8 per cent. Details are shown in Table A.1.
As in earlier rounds of the series, the respondent was asked to fill in a self-completion questionnaire which, whenever possible, was collected by the interviewer. Otherwise, the respondent was asked to post it to NatCen Social Research. If necessary, up to three postal reminders were sent to obtain the self-completion supplement.
A total of 412 respondents (13 per cent of those interviewed) did not return their self-completion questionnaire. Version A of the self-completion questionnaire was returned by 85 per cent of respondents to the face-to-face interview, version B of the questionnaire was returned by 89 per cent and version C by 88 per cent. As in previous rounds, we judged that it was not necessary to apply additional weights to correct for non-response to the self-completion questionnaire.
Interviewers were supplied with letters describing the purpose of the survey and the coverage of the questionnaire, which they posted to sampled addresses before making any calls.
A number of standard analyses have been used in the tables that appear in this report. The analysis groups requiring further definition are set out below. For further details see Stafford and Thomson (2006). Where relevant, the name of the analysis variable is shown in square brackets – for example [REarn].
Region
The dataset is classified by the 12 Government Office Regions.
Standard Occupational Classification
Respondents are classified according to their own occupation, not that of the ‘head of household’. Each respondent was asked about their current or last job, so that all respondents except those who had never worked were coded. Job details were also collected for all spouses and partners in work.
Since the 2011 survey, we have coded occupation to the Standard Occupational Classification 2010 (SOC 2010) instead of the Standard Occupational Classification 2000 (SOC 2000). The main socio-economic grouping based on SOC 2010 is the National Statistics Socio-Economic Classification (NS-SEC). However, to maintain time-series, some analysis has continued to use the older schemes based on SOC 90 – Registrar General’s Social Class and Socio-Economic Group – though these are now derived from SOC 2000 (which is derived from SOC 2010).
National Statistics Socio-Economic Classification (NS-SEC)
The combination of SOC 2010 and employment status for current or last job generates the following NS-SEC analytic classes:
- Employers in large organisations, higher managerial and professional
- Lower professional and managerial; higher technical and supervisory
- Intermediate occupations
- Small employers and own account workers
- Lower supervisory and technical occupations
- Semi-routine occupations
- Routine occupations
The remaining respondents are grouped as “never had a job” or “not classifiable”. For some analyses, it may be more appropriate to classify respondents according to their current socio-economic status, which takes into account only their present economic position. In this case, in addition to the seven classes listed above, the remaining respondents not currently in paid work fall into one of the following categories: “not classifiable”, “retired”, “looking after the home”, “unemployed” or “others not in paid occupations”.
Registrar General’s Social Class
As with NS-SEC, each respondent’s social class is based on his or her current or last occupation. The combination of SOC 90 with employment status for current or last job generates the following six social classes:
- I Professional occupations
- II Managerial and technical occupations
- III (Non-manual) Skilled occupations
- III (Manual) Skilled occupations
- IV Partly skilled occupations
- V Unskilled occupations
They are usually collapsed into four groups: I & II, III Non-manual, III Manual, and IV & V.
Socio-Economic Group
As with NS-SEC, each respondent’s Socio-Economic Group (SEG) is based on his or her current or last occupation. SEG aims to bring together people with jobs of similar social and economic status, and is derived from a combination of employment status and occupation. The full SEG classification identifies 18 categories, but these are usually condensed into six groups:
- Professionals, employers and managers
- Intermediate non-manual workers
- Junior non-manual workers
- Skilled manual workers
- Semi-skilled manual workers
- Unskilled manual workers
As with NS-SEC, the remaining respondents are grouped as “never had a job” or “not classifiable”.
Industry
All respondents whose occupation could be coded were allocated a Standard Industrial Classification 2007 (SIC 07). Two-digit class codes are used. As with social class, SIC may be generated on the basis of the respondent’s current occupation only, or on his or her most recently classifiable occupation.
Party identification
Respondents can be classified as identifying with a particular political party on one of three counts: if they consider themselves supporters of that party, closer to it than to others, or more likely to support it in the event of a general election. The three groups are generally described respectively as ‘partisans’, ‘sympathisers’ and ‘residual identifiers’. In combination, the three groups are referred to as ‘identifiers’. Responses are derived from the following questions:
Generally speaking, do you think of yourself as a supporter of any one political party? [Yes/No]
[If “No”/“Don’t know”]
Do you think of yourself as a little closer to one political party than to the others? [Yes/No]
[If “Yes” at either question or “No”/“Don’t know” at 2nd question]
Which one?/If there were a general election tomorrow, which political party do you think you would be most likely to support?
[Conservative; Labour; Liberal Democrat; Scottish National Party; Plaid Cymru; Green Party; UK Independence Party (UKIP)/Veritas; British National Party (BNP)/National Front; RESPECT/Scottish Socialist Party (SSP)/Socialist Party; Other party; Other answer; None; Refused to say]
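The derivation from these three questions can be sketched as follows (a hypothetical helper; answer codes and the return labels are simplified illustrations, not the dataset’s actual values):

```python
def party_identification(supporter, closer, would_support):
    """Classify a respondent's party identification (sketch).

    supporter:     answer to "do you think of yourself as a supporter
                   of any one political party?" ("yes"/"no")
    closer:        answer to "a little closer to one political party
                   than to the others?", asked only if not a supporter
    would_support: party named at "Which one?" / the general-election
                   question, or None if no party was named

    Returns "partisan", "sympathiser", "residual identifier" or
    "non-identifier". Labels follow the text above; the input coding
    is an assumption for illustration.
    """
    if would_support in (None, "None", "Refused"):
        return "non-identifier"
    if supporter == "yes":
        return "partisan"
    if closer == "yes":
        return "sympathiser"
    return "residual identifier"

print(party_identification("yes", None, "Labour"))        # partisan
print(party_identification("no", "yes", "Conservative"))  # sympathiser
print(party_identification("no", "no", "Green Party"))    # residual identifier
```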
Income
Two variables classify the respondent’s earnings [REarn] and household income [HHInc]. The bandings used are designed to be representative of those that exist in Britain and are taken from the Family Resources Survey (see http://research.dwp.gov.uk/asd/frs/). Four derived variables give income deciles/quartiles: [RearnD], [REarnQ], [HHIncD] and [HHIncQ]. Deciles and quartiles are calculated based on individual earnings and household incomes in Britain as a whole.
Attitude scales
Since 1986, the British Social Attitudes surveys have included two attitude scales which aim to measure where respondents stand on certain underlying value dimensions – left–right and libertarian–authoritarian. Since 1987 (except in 1990), a similar scale on ‘welfarism’ has also been included. Some of the items in the welfarism scale were changed in 2000–2001. The current version of the scale is shown below.
A useful way of summarising the information from a number of questions of this sort is to construct an additive index (Spector, 1992; DeVellis, 2003). This approach rests on the assumption that there is an underlying – ‘latent’ – attitudinal dimension which characterises the answers to all the questions within each scale. If so, scores on the index are likely to be a more reliable indication of the underlying attitude than the answers to any one question.
Each of these scales consists of a number of statements to which the respondent is invited to “agree strongly”, “agree”, “neither agree nor disagree”, “disagree” or “disagree strongly”.
The items are:
Government should redistribute income from the better off to those who are less well off [Redistrb]
Big business benefits owners at the expense of workers [BigBusnN]
Ordinary working people do not get their fair share of the nation’s wealth [Wealth]
There is one law for the rich and one for the poor [RichLaw]
Management will always try to get the better of employees if it gets the chance [Indust4]
Young people today don’t have enough respect for traditional British values [TradVals]
People who break the law should be given stiffer sentences [StifSent]
For some crimes, the death penalty is the most appropriate sentence [DeathApp]
Schools should teach children to obey authority [Obey]
The law should always be obeyed, even if a particular law is wrong [WrongLaw]
Censorship of films and magazines is necessary to uphold moral standards [Censor]
The welfare state encourages people to stop helping each other [WelfHelp]
The government should spend more money on welfare benefits for the poor, even if it leads to higher taxes [MoreWelf]
Around here, most unemployed people could find a job if they really wanted one [UnempJob]
Many people who get social security don’t really deserve any help [SocHelp]
Most people on the dole are fiddling in one way or another [DoleFidl]
If welfare benefits weren’t so generous, people would learn to stand on their own two feet [WelfFeet]
Cutting welfare benefits would damage too many people’s lives [DamLives]
The creation of the welfare state is one of Britain’s proudest achievements [ProudWlf]
The indices for the three scales are formed by scoring the leftmost, most libertarian or most pro-welfare position as 1, and the rightmost, most authoritarian or most anti-welfare position as 5. The “neither agree nor disagree” option is scored as 3. The scores on all the questions in each scale are summed and then divided by the number of items in the scale, giving indices ranging from 1 (leftmost, most libertarian, most pro-welfare) to 5 (rightmost, most authoritarian, most anti-welfare). The scores on the three indices have been placed on the dataset.
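The scoring just described can be sketched as follows (a minimal illustration: the score mapping, the `reverse_items` mechanism and the item names are assumptions based on the item wording, not the survey’s actual derivation code):

```python
# Sketch of the additive index: each item is scored 1-5, then averaged.
AGREE_SCALE = {
    "agree strongly": 1,
    "agree": 2,
    "neither agree nor disagree": 3,
    "disagree": 4,
    "disagree strongly": 5,
}

def scale_index(responses, reverse_items=()):
    """Average the 1-5 scores across a respondent's items.

    For items where agreement marks the rightmost / most authoritarian /
    most anti-welfare pole, the score is reversed (6 - score) so that 1
    always denotes the leftmost / libertarian / pro-welfare position.
    Which items need reversal is an assumption here.
    """
    scores = []
    for item, answer in responses.items():
        score = AGREE_SCALE[answer]
        if item in reverse_items:
            score = 6 - score
        scores.append(score)
    return sum(scores) / len(scores)

# A respondent answering "agree" to every left-right item scores 2.0:
example = {"Redistrb": "agree", "BigBusnN": "agree", "Wealth": "agree",
           "RichLaw": "agree", "Indust4": "agree"}
print(scale_index(example))  # 2.0
```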
The scales have been tested for reliability (as measured by Cronbach’s alpha). The Cronbach’s alpha coefficients (unstandardised items) for the scales in 2013 are 0.82 for the left–right scale, 0.75 for the welfarism scale and 0.81 for the libertarian–authoritarian scale. This level of reliability can be considered “good” for the left–right and libertarian–authoritarian scales and “respectable” for the welfarism scale (DeVellis, 2003: 95–96).
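Cronbach’s alpha can be computed from item scores as below (a plain-Python sketch of the standard formula, not the analysis code used for the survey):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a set of scale items.

    items: list of equal-length lists, one list of scores per item
           (columns = respondents). Implements the standard formula
           alpha = k/(k-1) * (1 - sum(item variances) / var(totals)).
    """
    def variance(xs):
        # Sample variance (n - 1 denominator).
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Three perfectly correlated items give alpha = 1.0:
print(cronbach_alpha([[1, 2, 3, 4, 5]] * 3))  # 1.0
```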
Other analysis variables
These are taken directly from the questionnaire and to that extent are self-explanatory. The principal ones are:
- Household income
- Economic position
- Highest educational qualification obtained
- Marital status
- Benefits received
Sampling errors
No sample precisely reflects the characteristics of the population it represents, because of both sampling and non-sampling errors. If a sample were designed as a random sample (if every adult had an equal and independent chance of inclusion in the sample), then we could calculate the sampling error of any percentage, p, using the formula:

s.e.(p) = √( p(100 − p) / n )

where n is the number of respondents on which the percentage is based. Once the sampling error had been calculated, it would be a straightforward exercise to calculate a confidence interval for the true population percentage. For example, a 95 per cent confidence interval would be given by the formula:

p ± 1.96 × s.e.(p)
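Under simple random sampling this calculation can be sketched in Python (function names are illustrative; 1.96 is the standard normal critical value for 95 per cent confidence, and p is expressed in per cent):

```python
import math

def srs_sampling_error(p, n):
    """Standard error of a percentage p under simple random sampling:
    s.e.(p) = sqrt(p * (100 - p) / n), with p in per cent."""
    return math.sqrt(p * (100 - p) / n)

def confidence_interval_95(p, n):
    """95 per cent confidence interval: p +/- 1.96 * s.e.(p)."""
    se = srs_sampling_error(p, n)
    return (p - 1.96 * se, p + 1.96 * se)

# e.g. a reported 50 per cent based on 1,000 respondents (hypothetical n):
low, high = confidence_interval_95(50, 1000)
print(round(low, 1), round(high, 1))  # 46.9 53.1
```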
Clearly, for a simple random sample (srs), the sampling error depends only on the values of p and n. However, simple random sampling is almost never used in practice, because of its inefficiency in terms of time and cost.
As noted above, the British Social Attitudes sample, like that drawn for most large-scale surveys, was clustered according to a stratified multi-stage design into 242 postcode sectors (or combinations of sectors). With a complex design like this, the sampling error of a percentage giving a particular response is not simply a function of the number of respondents in the sample and the size of the percentage; it also depends on how that percentage response is spread within and between sample points.
The complex design may be assessed relative to simple random sampling by calculating a range of design factors (DEFTs) associated with it, where:

DEFT = √( complex sampling variance of p / srs sampling variance of p )

and DEFT represents the multiplying factor to be applied to the simple random sampling error to produce its complex equivalent. A design factor of one means that the complex sample has achieved the same precision as a simple random sample of the same size. A design factor greater than one means the complex sample is less precise than its simple random sample equivalent. If the DEFT for a particular characteristic is known, a 95 per cent confidence interval for a percentage may be calculated using the formula:

p ± 1.96 × DEFT × √( p(100 − p) / n )
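A sketch of the DEFT-adjusted interval (hypothetical function name and values; a DEFT of 1.0 reproduces the simple-random-sample interval):

```python
import math

def deft_confidence_interval_95(p, n, deft):
    """95% CI for a complex design: p +/- 1.96 * DEFT * sqrt(p(100-p)/n).

    deft is the design factor: the ratio of the complex-design standard
    error to the simple-random-sample standard error for the estimate.
    """
    margin = 1.96 * deft * math.sqrt(p * (100 - p) / n)
    return (p - margin, p + margin)

# A DEFT of 1.5 widens the interval by half relative to DEFT = 1.0
# (percentage and sample size hypothetical):
print(deft_confidence_interval_95(50, 1000, 1.0))
print(deft_confidence_interval_95(50, 1000, 1.5))
```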
Table A.2 gives examples of the confidence intervals and DEFTs calculated for a range of different questions. Most background questions were asked of the whole sample, whereas many attitudinal questions were asked only of a third or two-thirds of the sample; some were asked on the interview questionnaire and some on the self-completion supplement.
The table shows that most of the questions asked of all sample members have a confidence interval of around plus or minus two to three per cent of the survey percentage. This means that we can be 95 per cent certain that the true population percentage is within two to three per cent (in either direction) of the percentage we report.
Variables with much larger variation are, as might be expected, those closely related to the geographic location of the respondent (for example, whether they live in a big city, a small town or a village). Here, the variation may be as large as six or seven per cent either way around the percentage found on the survey. Consequently, the design effects calculated for these variables in a clustered sample will be greater than the design effects calculated for variables less strongly associated with area. Also, sampling errors for percentages based only on respondents to just one of the versions of the questionnaire, or on subgroups within the sample, are larger than they would have been had the questions been asked of everyone.
- Until 1991 all British Social Attitudes samples were drawn from the Electoral Register (ER). However, following concern that this sampling frame might be deficient in its coverage of certain population subgroups, a ‘splicing’ experiment was conducted in 1991. We are grateful to the Market Research Development Fund for contributing towards the costs of this experiment. Its purpose was to investigate whether a switch to PAF would disrupt the time-series – for instance, by lowering response rates or affecting the distribution of responses to particular questions. In the event, it was concluded that the change from ER to PAF was unlikely to affect time trends in any noticeable way, and that no adjustment factors were necessary. Since significant differences in efficiency exist between PAF and ER, and because we considered it untenable to continue to use a frame that is known to be biased, we decided to adopt PAF as the sampling frame for future British Social Attitudes surveys. For details of the PAF/ER ‘splicing’ experiment, see Lynn and Taylor (1995).
- This includes households not containing any adults aged 18 or over, vacant dwelling units, derelict dwelling units, non-resident addresses and other deadwood.
- In 1993 it was decided to mount a split-sample experiment designed to test the applicability of Computer-Assisted Personal Interviewing (CAPI) to the British Social Attitudes survey series. CAPI has been used increasingly over the past decade as an alternative to traditional interviewing techniques. As the name implies, CAPI involves the use of a laptop computer during the interview, with the interviewer entering responses directly into the computer. One of the advantages of CAPI is that it significantly reduces both the amount of time spent on data processing and the number of coding and editing errors. There was, however, concern that a different interviewing technique might alter the distribution of responses and so affect the year-on-year consistency of British Social Attitudes data.
Following the experiment, it was decided to change over to CAPI completely in 1994 (the self-completion questionnaire still being administered in the conventional way). The results of the experiment are discussed in the British Social Attitudes 11th Report (Lynn and Purdon, 1994).
- Interview times recorded as less than 20 minutes were excluded, as these timings were likely to be errors.
- An experiment was conducted on the 1991 British Social Attitudes survey (Jowell et al., 1992) which showed that sending advance letters to sampled addresses before fieldwork begins has very little impact on response rates. However, interviewers do find that an advance letter helps them to introduce the survey on the doorstep, and a majority of respondents have said that they preferred some advance notice. For these reasons, advance letters have been used on British Social Attitudes surveys since 1991.
- Because of methodological experiments on scale development, the exact items detailed in this section have not been asked on all versions of the questionnaire each year.
- In 1994 only, this item was replaced by: Ordinary people get their fair share of the nation’s wealth. [Wealth1]
- In constructing the scale, a decision had to be taken on how to treat missing values (“Don’t know” and “Not answered”). Respondents who had more than two missing values on the left–right scale and more than three missing values on the libertarian–authoritarian and welfarism scales were excluded from that scale. For respondents with fewer missing values, “Don’t know” was recoded to the mid-point of the scale and “Not answered” was recoded to the scale mean for that respondent on their valid items.
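The missing-value rules described in this note can be sketched as follows (an illustrative helper; the input coding of “Don’t know”/“Not answered” and the return convention are assumptions, not the survey’s actual derivation code):

```python
def score_with_missing(item_scores, max_missing):
    """Apply the missing-value rules to one respondent's scale items.

    item_scores: dict mapping item name -> score 1-5, or "DK"
                 ("Don't know") / "NA" ("Not answered").
    max_missing: maximum number of missing items allowed (2 for the
                 left-right scale, 3 for the other two scales).

    Returns the respondent's index score, or None if they are excluded.
    "DK" is recoded to the scale mid-point (3); "NA" is recoded to the
    respondent's mean on their valid items.
    """
    valid = [v for v in item_scores.values() if isinstance(v, (int, float))]
    missing = len(item_scores) - len(valid)
    if missing > max_missing or not valid:
        return None
    mean_valid = sum(valid) / len(valid)
    filled = [v if isinstance(v, (int, float))
              else (3 if v == "DK" else mean_valid)
              for v in item_scores.values()]
    return sum(filled) / len(filled)

# Two missing items on a five-item scale: still scored (limit 2).
print(score_with_missing(
    {"Redistrb": 1, "BigBusnN": 2, "Wealth": "DK",
     "RichLaw": 2, "Indust4": "NA"}, max_missing=2))
```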