The interviewer effect (also called interviewer variance or interviewer error) is the distortion of responses in an interviewer-administered data collection effort that results from respondents' differential reactions to the social style and personality of interviewers or to their presentation of particular questions. More broadly, it is any effect on data gathered from interviewing people that is caused by the behavior or characteristics (real or perceived) of the interviewer. The use of fixed-wording questions is one method of reducing interviewer bias. Anthropological research and case studies are also affected by the problem, which is exacerbated by the self-fulfilling prophecy when the researcher is also the interviewer.
Interviewer effects can also be associated with characteristics of the interviewer, such as race. Whether black respondents are interviewed by white or black interviewers has a strong impact on their responses to both attitude and behavioral questions. For example, black respondents who are interviewed by black interviewers in pre-election surveys are more likely to actually vote in the upcoming election than those interviewed by white interviewers.[1]
Furthermore, the race of the interviewer can also affect answers to factual questions that might take the form of a test of how informed the respondent is. Black respondents in a survey of political knowledge, for example, get fewer correct answers to factual questions about politics when interviewed by white interviewers than when interviewed by black interviewers. This is consistent with the research literature on stereotype threat, which finds diminished test performance of potentially stigmatised groups when the interviewer or test supervisor is from a perceived higher status group.
Interviewer effects can be mitigated somewhat by randomly assigning subjects to different interviewers, or by using tools such as computer-assisted telephone interviewing (CATI).[2]
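The random-assignment approach can be illustrated with a brief sketch. The following Python example is a hypothetical illustration, not drawn from the cited studies (the respondent and interviewer identifiers and the simulated responses are invented): it randomly assigns respondents to interviewers in roughly equal workloads, then estimates how much of the variance in a numeric response item is attributable to interviewers using a simple one-way intraclass correlation.

```python
import random
from collections import defaultdict
from statistics import mean

def assign_interviewers(respondent_ids, interviewer_ids, seed=0):
    """Randomly assign respondents to interviewers in near-equal workloads
    (an interpenetrated design), so interviewer effects are not confounded
    with systematic differences between the groups each interviewer sees."""
    rng = random.Random(seed)
    shuffled = respondent_ids[:]
    rng.shuffle(shuffled)
    return {resp: interviewer_ids[i % len(interviewer_ids)]
            for i, resp in enumerate(shuffled)}

def interviewer_icc(responses_by_interviewer):
    """One-way ANOVA estimate of the intraclass correlation (rho):
    the share of total response variance attributable to interviewers.
    Input maps interviewer id -> list of numeric responses."""
    groups = [v for v in responses_by_interviewer.values() if len(v) > 1]
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean(x for g in groups for x in g)
    # Between- and within-interviewer sums of squares and mean squares.
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    # Adjusted average group size for unequal workloads.
    n0 = (n - sum(len(g) ** 2 for g in groups) / n) / (k - 1)
    return (ms_between - ms_within) / (ms_between + (n0 - 1) * ms_within)

if __name__ == "__main__":
    respondents = [f"R{i:03d}" for i in range(120)]
    interviewers = ["A", "B", "C", "D"]
    assignment = assign_interviewers(respondents, interviewers, seed=42)

    # Simulated responses with a small interviewer-specific shift,
    # purely to demonstrate the estimator.
    rng = random.Random(1)
    shift = {"A": 0.0, "B": 0.2, "C": -0.1, "D": 0.3}
    responses = defaultdict(list)
    for resp, ivw in assignment.items():
        responses[ivw].append(rng.gauss(0.0, 1.0) + shift[ivw])

    print("Estimated interviewer ICC (rho):", round(interviewer_icc(responses), 3))
```

A noticeably positive intraclass correlation would suggest that answers cluster by interviewer; random assignment makes such clustering attributable to the interviewers themselves rather than to differences in whom they happened to interview.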
References
Notes
- ^ Anderson, Barbara A.; Silver, Brian D.; Abramson, Paul R. (Spring 1988). "The Effects of Race of the Interviewer on Measures of Electoral Participation by Blacks in SRC National Election Studies". Public Opinion Quarterly. 52 (1): 53–83. doi:10.1086/269082.
- ^ Groves, Robert M.; Lou J. Magilavy (1986). "Measuring and Explaining Interviewer Effects in Centralised Telephone Surveys". Public Opinion Quarterly. 50 (2): 251. doi:10.1086/268979. ISSN 0033-362X.
Bibliography
- Anderson, Barbara A., Brian D. Silver, and Paul R. Abramson. "The Effects of Race of the Interviewer on Measures of Electoral Participation by Blacks in SRC National Election Studies," Public Opinion Quarterly 52 (Spring 1988): 53–83.
- Anderson, Barbara A., Brian D. Silver, and Paul R. Abramson. "The Effects of the Race of Interviewer on Race-Related Attitudes of Black Respondents in SRC/CPS National Election Studies," Public Opinion Quarterly 52 (August 1988): 289–324.
- Davis, Darren W., and Brian D. Silver. "Stereotype Threat and Race of Interviewer Effects in a Survey on Political Knowledge," American Journal of Political Science 47 (December 2002): 33–45.
- Davis, R. E.; et al. (Feb 2010). "Interviewer effects in public health surveys". Health Education Research. 25 (1): 14–26. doi:10.1093/her/cyp046. PMC 2805402. PMID 19762354.
- Stokes, Lynn; Yeh, Ming-Yih (October 2001). "Chapter 22: Searching for Causes of Interviewer Effects in Telephone Surveys". In Groves, Robert M.; et al. (eds.). Telephone Survey Methodology. Wiley. pp. 357–110. ISBN 978-0-471-20956-0.
- Sudman, Seymour, and Norman Bradburn. Response Effects in Surveys. National Opinion Research Center: Chicago, 1974.