You’ve put together a questionnaire that is too cluttered and too long. The questions were unclear, did not flow, and did not mean the same thing to everyone. And respondents were asked to remember dates and time frames that would tax the hardiest of memories. The questionnaire was a mess.
Similarly, we have all tried to make sense of responses to a messy questionnaire. But garbage in means garbage out: a poorly designed questionnaire makes it impossible to draw conclusions and make business decisions. We could have saved our time and money; sometimes it is a good idea to call on the experts!
How can we overcome poor survey questionnaire design? It does little good to know that some people loved the food and some hated it without knowing why. Quantitative methods show how many people loved (or hated) the food; the why usually comes from inferential analysis, which is doomed to failure without good, clean data collected by the questionnaire.
Poorly designed questionnaires generally do not take into consideration the needs of people with disabilities or people from diverse backgrounds and cultures. Respondents may, for example, be color blind or have impairments of physical strength, range of motion, mobility, sensory function, cognition, vision, speech, or hearing.
Do you and your team have the expertise to provide the questionnaire in alternative formats to comply with legislative mandates, including the Americans with Disabilities Act, the Telecommunications Act, and the Rehabilitation Act? If the answer is no, consider using a full service firm that is adept at providing the questionnaire in alternative formats (e.g., foreign languages, large print, Braille) for people from other cultures or for people with disabilities.
Item Wording and Formatting
Poorly designed questionnaires generally pay little attention to the wording and formatting of items. Example: Rather than ask for a respondent’s age, it is more accurate to ask “What was your age on your last birthday?” This will generate more consistent responses.
Consider the following open-ended item: How much money do you make in a year? First, not everyone has a job. Second, this is a sensitive question.
Are the response alternatives mutually exclusive and exhaustive? Consider the response alternatives for the following multiple-choice item about salary (i.e., not working; 0-25,000; 25,000-50,000; 50,000-75,000; 75,000-100,000; 100,000 and above). If an individual makes 25,000, will they select the second or third response alternative? Both contain their 25,000 salary! And ‘not working’ is the same as earning ‘0’ in response alternatives one and two!
A full service survey research firm will work with you to make every word count. The firm might re-word and re-format the salary question offering respondents the following salary ranges from which to choose (i.e., not working, 1-25,000; 25,001-50,000; 50,001-75,000; 75,001-100,000; above 100,000). This wording makes the item more likely to be answered accurately.
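As a sketch of why the revised ranges work, the brackets above can be expressed as a small coding function. The helper below is hypothetical Python, written only to show that every salary now maps to exactly one response alternative:

```python
def salary_bracket(salary):
    """Map an annual salary to exactly one response alternative.

    Hypothetical helper illustrating mutually exclusive, exhaustive
    ranges; boundary values such as 25,000 fall in only one bracket.
    """
    if salary == 0:
        return "not working"
    if salary <= 25000:
        return "1-25,000"
    if salary <= 50000:
        return "25,001-50,000"
    if salary <= 75000:
        return "50,001-75,000"
    if salary <= 100000:
        return "75,001-100,000"
    return "above 100,000"

# The troublesome boundary case now has a single, unambiguous answer.
print(salary_bracket(25000))   # 1-25,000
```

With the original overlapping ranges, no such function could be written without arbitrarily breaking the tie at 25,000, which is exactly the ambiguity respondents face.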
Double-barreled items are ambiguous and result in inaccurate answers. Consider the following item, “The exhibit was interesting and promoted interaction.” The only people who could accurately respond to that item are those who truly found the exhibit to be interesting and who actually interacted with it. All others would be telling half-truths, resulting in misleading results.
The survey firm might re-word the double-barreled item into a checklist format: Check all the words you would use to describe the exhibit (interesting, interactive, informative, fun). Separating the double-barreled item into two questions would also work.
Do you and your team have the time to identify words that mean the same thing to everyone and format the questionnaire for easy access and response?
Poorly designed data collection initiatives use less than optimal methods to gather information, resulting in misleading, confusing information.
Online questionnaires are the least expensive way to reach the greatest number of people – globally. Although not everyone has a computer, tablet, or smart phone, computers are available at public libraries and community agencies.
Online questionnaires may look easy to create, but in fact are just as difficult as mailed questionnaires to do well. Email invitations, reminders, and the surveys themselves must be designed to be “responsive” to whatever type of device the respondent uses to take the survey, adjusting text sizes and graphics so that they are easily readable. Once designed, online questionnaires can be easily stored and used from year to year, revising, as necessary.
Obtaining information from in-person interviews may be the most personal approach and most effective way of gaining trust and cooperation from the respondent. It is easy to react to puzzled facial expressions, answer questions, probe for clarification or redirect responses. This type of survey can also be the most expensive to deploy.
Telephone interviews are less expensive than in-person interviews and may be more or less expensive than mailings, depending on numbers involved, yet not everyone has a telephone. And, in this age of identity theft and fraud, many of us are skeptical about sharing information with anyone for any reason over the phone.
A focus group can provide informative feedback as a follow up to other methods or as a method unto itself. Care must be taken, however, to carefully craft questions that are ultimately asked by a trained, unbiased facilitator.
Do you and your team have the expertise to identify and implement the most efficient and effective survey method(s)? Be honest now!
A full service survey research firm works with you to ensure selection of the optimal survey research method to accomplish your goals. Firms often use multiple methods to gather the most reliable, valid information. In an online application, they will also make sure that multiple submissions are not permitted that might skew results.
Poorly designed survey initiatives select a sample that is either too large or too small. They fail to take into account the diversity of potential respondents. If you ask the wrong people the right questions, you get the wrong answers!
A full service survey research firm will work with you to identify the goals and objectives for using the questionnaire. The research firm will work with you to select the most appropriate respondents for your questionnaire and explain why. They will establish criteria for inclusion and identify the proper method(s) of sample selection to ensure statistically significant and meaningful responses to your questionnaire.
A poorly designed questionnaire can be the reason for a large number of partial responses to an item or few responses to a large part of the questionnaire.
The most common reason for refusals in survey research used to be reaching people at an inconvenient time on the telephone, a problem largely negated by online questionnaires, which are available to people 24/7/365. Other common reasons for refusal include lack of time or lack of interest in the subject of the questionnaire.
Do you and your team have the time and expertise to identify non-respondents and identify ways in which they are similar to and different from those who responded to your questionnaire? Careful, there is more to this than meets the eye!
A full service survey research firm will work with you to account for and counter various types of non-response. This ensures that every response is as meaningful and useful as possible in the final data analysis. This includes defining a protocol for handling partially completed items and questionnaires, expanding the sample size, and/or creating an interesting questionnaire. Survey research firms often also obtain information from people who were not asked to complete the questionnaire, to help determine how different the two samples really are. This can give you confidence about how broadly you can generalize your findings. The firm helps you identify what is most relevant to the purpose of your questionnaire.
A poorly designed questionnaire has no context or clearly defined purpose; it is a potpourri of questions thrown together by representatives from every department in the organization.
Do you and your team have the in-house expertise and time to attend to the details of questionnaire design or flow? While not rocket science, it is trickier than it looks!
Working with a full service survey research firm will ensure that your questionnaire has a clear purpose, professional tone, logical order, and uncluttered space.
A poorly designed questionnaire asks questions about sensitive or potentially embarrassing issues while singling out individuals or groups. A poorly designed questionnaire makes people feel that they cannot respond honestly, but need to respond in a way that is ‘socially acceptable’. Therefore, respondents often report smoking less and exercising more, stretching the truth to make themselves look good.
Do you and your team have the expertise to phrase sensitive items so respondents will feel comfortable in responding to them in a straightforward, honest way?
A full service survey research firm works with you to inform respondents about how responses will be used, where and to whom information will be disseminated. Respondents have a right to know whether their answers will be attached to them individually or be reported in group numbers so that people feel comfortable (or not) sharing personal behaviors, opinions, or attitudes.
Item developers at survey firms are expert at crafting items that respondents feel safe answering, rather than sharing what they think people want to hear. When asking about drunken driving, they might rephrase and ask about driving under the influence instead.
A full service firm also has expertise in cross-cultural issues, whether aligning word meanings across cultures or accounting for cultures that are predisposed to revealing more information about themselves than others. A survey research firm works with you to guarantee respondent anonymity or confidentiality, which can help reduce socially desirable responding.
A poorly designed questionnaire asks respondents to remember what they did weeks, months, and years ago in varied detail. Despite the best of intentions, respondents make errors because they displace events in time, associate the behavior with the wrong time period, or forget entirely. They also might not understand the question.
Information gathered on a questionnaire ultimately depends on the ability to recall experiences or feelings. Do you and your team have the expertise to help respondents recall necessary information without bias or being led to the desired response? This is both an art and a science.
A full service survey research firm is adept at creating items that facilitate recall. Asking people, for example, how often they went to the beach between Memorial Day and Labor Day defines a time period and helps them frame the question. Longer questions may give people the necessary time to think and produce better responses. Memory can also be stimulated by asking for similar information in different formats throughout the questionnaire. Asking about reactions, attitudes, and behaviors during an actual event also improves recall accuracy.
Reliability / Validity
In poorly designed questionnaires, little to nothing has been done to ensure the reliability and validity of each item and, hence, the entire questionnaire. The questionnaire is not tested prior to its implementation.
Every word in every item on the questionnaire counts. Do you and your team have the time and expertise required to create questionnaires that are both reliable and valid? It is definitely more difficult than it appears.
A full service survey research firm will usually recommend at least one pilot test of the questionnaire to determine whether results yield the same information over time and whether items measure what they were intended to measure.
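One way to make the "same information over time" check concrete is to administer an item twice to the pilot sample and correlate the two sets of scores (test-retest reliability). The figures below are invented for illustration, and the Pearson correlation is computed by hand in Python as a minimal sketch:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented pilot data: the same five respondents rated an item
# at time 1 and again at time 2 on a 1-5 scale.
time1 = [4, 3, 5, 2, 4]
time2 = [4, 3, 4, 2, 5]

r = pearson_r(time1, time2)   # close to 1.0 indicates stable responses
```

A correlation near 1.0 suggests the item yields consistent answers over time; a low value flags an item the firm would re-word before full deployment.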
If you and your team would like to learn more about how the full service survey research services at NBRI can minimize your time and maximize your results, contact us now at 800-756-6168.
Terrie Nolinske, Ph.D.
National Business Research Institute
Survey research is sometimes regarded as an easy research approach. However, as with any other research approach and method, it is easy to conduct a survey of poor quality rather than one of high quality and real value. This paper provides a checklist of good practice in the conduct and reporting of survey research. Its purpose is to assist the novice researcher to produce survey work to a high standard, meaning a standard at which the results will be regarded as credible. The paper first provides an overview of the approach and then guides the reader step-by-step through the processes of data collection, data analysis, and reporting. It is not intended to provide a manual of how to conduct a survey, but rather to identify common pitfalls and oversights to be avoided by researchers if their work is to be valid and credible.
data reporting, health care surveys, methodology, questionnaires, research design, survey methods, surveys
What is survey research?
Survey research is common in studies of health and health services, although its roots lie in the social surveys conducted in Victorian Britain by social reformers to collect information on poverty and working class life (e.g. Charles Booth and Joseph Rowntree), and indeed survey research remains most used in applied social research. The term ‘survey’ is used in a variety of ways, but generally refers to the selection of a relatively large sample of people from a pre-determined population (the ‘population of interest’; this is the wider group of people in whom the researcher is interested in a particular study), followed by the collection of a relatively small amount of data from those individuals. The researcher therefore uses information from a sample of individuals to make some inference about the wider population.
Data are collected in a standardized form. This is usually, but not necessarily, done by means of a questionnaire or interview. Surveys are designed to provide a ‘snapshot of how things are at a specific time’. There is no attempt to control conditions or manipulate variables; surveys do not allocate participants into groups or vary the treatment they receive. Surveys are well suited to descriptive studies, but can also be used to explore aspects of a situation, or to seek explanation and provide data for testing hypotheses. It is important to recognize that ‘the survey approach is a research strategy, not a research method’. As with any research approach, a choice of methods is available and the one most appropriate to the individual project should be used. This paper will discuss the most popular methods employed in survey research, with an emphasis upon difficulties commonly encountered when using these methods.
Descriptive research is a most basic type of enquiry that aims to observe (gather information on) certain phenomena, typically at a single point in time: the ‘cross-sectional’ survey. The aim is to examine a situation by describing important factors associated with that situation, such as demographic, socio-economic, and health characteristics, events, behaviours, attitudes, experiences, and knowledge. Descriptive studies are used to estimate specific parameters in a population (e.g. the prevalence of infant breast feeding) and to describe associations (e.g. the association between infant breast feeding and maternal age).
Analytical studies go beyond simple description; their intention is to illuminate a specific problem through focused data analysis, typically by looking at the effect of one set of variables upon another set. These are longitudinal studies, in which data are collected at more than one point in time with the aim of illuminating the direction of observed associations. Data may be collected from the same sample on each occasion (cohort or panel studies) or from a different sample at each point in time (trend studies).
A third form, evaluative research, collects data to ascertain the effects of a planned change.
Advantages and disadvantages of survey research
The research produces data based on real-world observations (empirical data).
The breadth of coverage of many people or events means that it is more likely than some other approaches to obtain data based on a representative sample, and can therefore be generalizable to a population.
Surveys can produce a large amount of data in a short time for a fairly low cost. Researchers can therefore set a finite time-span for a project, which can assist in planning and delivering end results.
The significance of the data can become neglected if the researcher focuses too much on the range of coverage to the exclusion of an adequate account of the implications of those data for relevant issues, problems, or theories.
The data that are produced are likely to lack details or depth on the topic being investigated.
Securing a high response rate to a survey can be hard to control, particularly when it is carried out by post, but is also difficult when the survey is carried out face-to-face or over the telephone.
Essential steps in survey research
Good research has the characteristic that its purpose is to address a single clear and explicit research question; conversely, the end product of a study that aims to answer a number of diverse questions is often weak. Weakest of all, however, are those studies that have no research question at all and whose design simply is to collect a wide range of data and then to ‘trawl’ the data looking for ‘interesting’ or ‘significant’ associations. This is a trap that novice researchers in particular fall into. Therefore, in developing a research question, the following aspects should be considered:
Be knowledgeable about the area you wish to research.
Widen the base of your experience, explore related areas, and talk to other researchers and practitioners in the field you are surveying.
Consider using techniques for enhancing creativity, for example brainstorming ideas.
Avoid the pitfalls of: allowing a decision regarding methods to decide the questions to be asked; posing research questions that cannot be answered; asking questions that have already been answered satisfactorily.
The survey approach can employ a range of methods to answer the research question. Common survey methods include postal questionnaires, face-to-face interviews, and telephone interviews.
This method involves sending questionnaires to a large sample of people covering a wide geographical area. Postal questionnaires are usually received ‘cold’, without any previous contact between researcher and respondent. The response rate for this type of method is usually low, ∼20%, depending on the content and length of the questionnaire. As response rates are low, a large sample is required when using postal questionnaires, for two main reasons: first, to ensure that the demographic profile of survey respondents reflects that of the survey population; and secondly, to provide a sufficiently large data set for analysis.
Face-to-face interviews involve the researcher approaching respondents personally, either in the street or by calling at people’s homes. The researcher then asks the respondent a series of questions and notes their responses. The response rate is often higher than that of postal questionnaires as the researcher has the opportunity to sell the research to a potential respondent. Face-to-face interviewing is a more costly and time-consuming method than the postal survey; however, the researcher can select the sample of respondents in order to balance the demographic profile of the sample.
Telephone surveys, like face-to-face interviews, allow a two-way interaction between researcher and respondent. Telephone surveys are quicker and cheaper than face-to-face interviewing. Whilst resulting in a higher response rate than postal surveys, telephone surveys often attract a higher level of refusals than face-to-face interviews as people feel less inhibited about refusing to take part when approached over the telephone.
Designing the research tool
Whether using a postal questionnaire or interview method, the questions asked have to be carefully planned and piloted. The design, wording, form, and order of questions can affect the type of responses obtained, and careful design is needed to minimize bias in results. When designing a questionnaire or question route for interviewing, the following issues should be considered: (1) planning the content of a research tool; (2) questionnaire layout; (3) interview questions; (4) piloting; and (5) covering letter.
Planning the content of a research tool
The topics of interest should be carefully planned and relate clearly to the research question. It is often useful to involve experts in the field, colleagues, and members of the target population in question design in order to ensure the validity of the coverage of questions included in the tool (content validity).
Researchers should conduct a literature search to identify existing, psychometrically tested questionnaires. A well designed research tool is simple, appropriate for the intended use, acceptable to respondents, and should include a clear and interpretable scoring system. A research tool must also demonstrate the psychometric properties of reliability (consistency from one measurement to the next), validity (accurate measurement of the concept), and, if a longitudinal study, responsiveness to change. The development of research tools, such as attitude scales, is a lengthy and costly process. It is important that researchers recognize that the development of the research tool is equal in importance to data collection and deserves equal attention. If a research instrument has not undergone a robust process of development and testing, the credibility of the research findings themselves may legitimately be called into question and may even be completely disregarded. Surveys of patient satisfaction and similar are commonly weak in this respect; one review found that only 6% of patient satisfaction studies used an instrument that had undergone even rudimentary testing. Researchers who are unable or unwilling to undertake this process are strongly advised to consider adopting an existing, robust research tool.
Questionnaires used in survey research should be clear and well presented. The use of capital (upper case) letters only should be avoided, as this format is hard to read. Questions should be numbered and clearly grouped by subject. Clear instructions should be given and headings included to make the questionnaire easier to follow.
The researcher must think about the form of the questions, avoiding ‘double-barrelled’ questions (two or more questions in one, e.g. ‘How satisfied were you with your personal nurse and the nurses in general?’), questions containing double negatives, and leading or ambiguous questions. Questions may be open (where the respondent composes the reply) or closed (where pre-coded response options are available, e.g. multiple-choice questions). Closed questions with pre-coded response options are most suitable for topics where the possible responses are known. Closed questions are quick to administer and can be easily coded and analysed. Open questions should be used where possible replies are unknown or too numerous to pre-code. Open questions are more demanding for respondents but if well answered can provide useful insight into a topic. Open questions, however, can be time consuming to administer and difficult to analyse. Whether using open or closed questions, researchers should plan clearly how answers will be analysed.
Open questions are used more frequently in unstructured interviews, whereas closed questions typically appear in structured interview schedules. A structured interview is like a questionnaire that is administered face to face with the respondent. When designing the questions for a structured interview, the researcher should consider the points highlighted above regarding questionnaires. The interviewer should have a standardized list of questions, each respondent being asked the same questions in the same order. If closed questions are used the interviewer should also have a range of pre-coded responses available.
If carrying out a semi-structured interview, the researcher should have a clear, well thought out set of questions; however, the questions may take an open form and the researcher may vary the order in which topics are considered.
A research tool should be tested on a pilot sample of members of the target population. This process will allow the researcher to identify whether respondents understand the questions and instructions, and whether the meaning of questions is the same for all respondents. Where closed questions are used, piloting will highlight whether sufficient response categories are available, and whether any questions are systematically missed by respondents.
When conducting a pilot, the same procedure as that to be used in the main survey should be followed; this will highlight potential problems such as a poor response rate.
All participants should be given a covering letter that includes the name of the organization behind the study, the contact name and address of the researcher, details of how and why the respondent was selected, the aims of the study, any potential benefits or harm resulting from the study, and what will happen to the information provided. The covering letter should both encourage the respondent to participate in the study and also meet the requirements of informed consent (see below).
Sample and sampling
The concept of sample is intrinsic to survey research. Usually, it is impractical and uneconomical to collect data from every single person in a given population; a sample of the population has to be selected. This is illustrated in the following hypothetical example. A hospital wants to conduct a satisfaction survey of the 1000 patients discharged in the previous month; however, as it is too costly to survey each patient, a sample has to be selected. In this example, the researcher will have a list of the population members to be surveyed (sampling frame). It is important to ensure that this list is both up to date and obtained from a reliable source.
The method by which the sample is selected from a sampling frame is integral to the external validity of a survey: the sample has to be representative of the larger population to obtain a composite profile of that population.
There are methodological factors to consider when deciding who will be in a sample: How will the sample be selected? What is the optimal sample size to minimize sampling error? How can response rates be maximized?
The survey methods discussed below influence how a sample is selected and the size of the sample. There are two categories of sampling: random and non-random sampling, with a number of sampling selection techniques contained within the two categories. The principal techniques are described here.
Generally, random sampling is employed when quantitative methods are used to collect data (e.g. questionnaires). Random sampling allows the results to be generalized to the larger population and statistical analysis performed if appropriate. The most stringent technique is simple random sampling. Using this technique, each individual within the chosen population is selected by chance and is as likely to be picked as anyone else. Referring back to the hypothetical example, each patient is given a serial identifier and then an appropriate number of the 1000 population members are randomly selected. This is best done using a random number table, which can be generated using computer software (a free on-line randomizer can be found at http://www.randomizer.org/index.htm).
Alternative random sampling techniques are briefly described here. In systematic sampling, individuals to be included in the sample are chosen at equal intervals from the population; using the earlier example, every fifth patient discharged from hospital would be included in the survey. In stratified sampling, a specific subgroup (stratum) is identified and a random sample is then drawn from within it; using our example, the hospital may decide only to survey older surgical patients. Bigger surveys may employ cluster sampling, which randomly selects groups from a large population and then surveys everyone within those groups, a technique often used in national-scale studies.
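Under the hypothetical discharge example, simple random and systematic selection can be sketched in a few lines of Python; the seed and the sample size of 100 are illustrative assumptions, not recommendations:

```python
import random

patients = list(range(1, 1001))   # serial identifiers for the 1000 patients

# Simple random sampling: every patient is equally likely to be chosen.
rng = random.Random(42)           # fixed seed only to make the sketch reproducible
simple_sample = rng.sample(patients, 100)

# Systematic sampling: every fifth discharged patient, from a random start.
start = rng.randrange(5)
systematic_sample = patients[start::5]

print(len(simple_sample), len(systematic_sample))   # 100 200
```

In practice the random numbers would come from a dedicated tool or statistical package; the point of the sketch is only that each technique is a mechanical rule applied to the sampling frame, not a judgment call made respondent by respondent.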
Non-random sampling is commonly applied when qualitative methods (e.g. focus groups and interviews) are used to collect data, and is typically used for exploratory work. Non-random sampling deliberately targets individuals within a population. There are three main techniques. (1) Purposive sampling: a specific population is identified and only its members are included in the survey; using our example above, the hospital may decide to survey only patients who had an appendectomy. (2) Convenience sampling: the sample is made up of the individuals who are easiest to recruit. (3) Snowball sampling: the sample is identified as the survey progresses; as each individual is surveyed, he or she is invited to recommend others to be surveyed.
It is important to use the right method of sampling and to be aware of the limitations and statistical implications of each. The need to ensure that the sample is representative of the larger population was highlighted earlier and, alongside the sampling method, the degree of sampling error should be considered. Sampling error is the probability that any one sample is not completely representative of the population from which it has been drawn. Although sampling error cannot be eliminated entirely, the sampling technique chosen will influence the extent of the error. Simple random sampling will give a closer estimate of the population than a convenience sample of individuals who just happened to be in the right place at the right time.
What sample size is required for a survey? There is no definitive answer to this question: large samples with rigorous selection are more powerful as they will yield more accurate results, but data collection and analysis will be proportionately more time consuming and expensive. Essentially, the target sample size for a survey depends on three main factors: the resources available, the aim of the study, and the statistical quality needed for the survey. For ‘qualitative’ surveys using focus groups or interviews, the sample size needed will be smaller than if quantitative data are collected by questionnaire. If statistical analysis is to be performed on the data then sample size calculations should be conducted. This can be done using computer packages such as G*Power; however, those with little statistical knowledge should consult a statistician. For practical recommendations on sample size, the set of survey guidelines developed by the UK Department of Health should be consulted.
Larger samples give a better estimate of the population but it can be difficult to obtain an adequate number of responses. It is rare that everyone asked to participate in the survey will reply. To ensure a sufficient number of responses, include an estimated non-response rate in the sample size calculations.
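A minimal sketch of such a calculation, using the standard formula for estimating a proportion, n0 = z^2 * p * (1 - p) / e^2, with a finite-population correction and an inflation for expected non-response. The default figures (5% margin of error, 95% confidence, 65% expected response) are illustrative assumptions, and any real calculation should be confirmed with a statistician:

```python
import math

def sample_size(population, margin=0.05, confidence_z=1.96, p=0.5,
                expected_response=0.65):
    """Rough sample-size sketch for estimating a proportion.

    Applies n0 = z^2 * p * (1 - p) / e^2, then a finite-population
    correction, then inflates the result for expected non-response.
    """
    n0 = (confidence_z ** 2) * p * (1 - p) / margin ** 2
    n = n0 / (1 + (n0 - 1) / population)      # finite-population correction
    return math.ceil(n / expected_response)   # inflate for non-response

# For the hypothetical 1000 discharged patients:
needed = sample_size(1000)
```

Note how the non-response inflation dominates: with a 65% expected response rate, roughly half again as many questionnaires must be sent out as the statistics alone would require.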
Response rates are a potential source of bias. The results from a survey with a large non-response rate could be misleading and only representative of those who replied. French reported that non-responders to patient satisfaction surveys are less likely to be satisfied than people who reply. It is unwise to define a level above which a response rate is acceptable, as this depends on many local factors; however, an achievable and acceptable rate is ∼75% for interviews and 65% for self-completion postal questionnaires [9,13]. In any study, the final response rate should be reported with the results; potential differences between the respondents and non-respondents should be explicitly explored and their implications discussed.
There are techniques to increase response rates. A questionnaire must be concise and easy to understand, reminders should be sent out, and the method of recruitment should be carefully considered. Sitzia and Wood found that participants recruited by mail, or who had to respond by mail, had a lower mean response rate (67%) than participants who were recruited personally (mean response 76.7%). A most useful review of methods to maximize response rates in postal surveys has recently been published.
Researchers should approach data collection in a rigorous and ethical manner. The following information must be clearly recorded:
How, where, how many times, and by whom potential respondents were contacted.
How many people were approached and how many of those agreed to participate.
How those who agreed to participate differed from those who refused with regard to characteristics of interest in the study (for example, how they were identified, where they were approached, and their gender, age, and features of their illness or health care).
How the survey was administered (e.g. telephone interview).
The response rate (i.e. the number of usable data sets as a proportion of the number of people approached).
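The response rate as defined above is a simple ratio; a minimal sketch, with illustrative figures that are not drawn from the text:

```python
def response_rate(usable_datasets, people_approached):
    """Response rate: usable data sets as a proportion of the
    number of people approached."""
    if people_approached == 0:
        raise ValueError("no one was approached")
    return usable_datasets / people_approached

# e.g. 130 usable questionnaires from 200 people approached
rate = response_rate(130, 200)   # 0.65, i.e. 65%
```

Note that the denominator is everyone approached, not everyone who agreed to take part; counting only those who agreed would overstate the rate.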
The purpose of all analyses is to summarize data so that it is easily understood and provides the answers to our original questions: ‘In order to do this researchers must carefully examine their data; they should become friends with their data’. Researchers must prepare to spend substantial time on the data analysis phase of a survey (and this should be built into the project plan). When analysis is rushed, important aspects of the data are often missed and sometimes the wrong analyses are conducted, leading to both inaccurate results and misleading conclusions. However, and this point cannot be stressed strongly enough, researchers must not engage in data dredging, a practice that can arise especially in studies in which large numbers of dependent variables (outcomes) can be related to large numbers of independent variables. When large numbers of possible associations in a dataset are reviewed at P < 0.05, one in 20 of the associations will by chance appear ‘statistically significant’; in datasets where only a few real associations exist, testing at this significance level will result in the large majority of findings still being false positives.
The method of data analysis will depend on the design of the survey and should have been carefully considered in the planning stages. Data collected by qualitative methods should be analysed using established methods such as content analysis, and where quantitative methods have been used, appropriate statistical tests can be applied. Describing methods of analysis here would be unproductive, as a multitude of introductory textbooks and on-line resources are available to help with simple analyses of data (e.g. [19, 20]). For advanced analysis a statistician should be consulted.
When reporting survey research, it is essential that a number of key points are covered (though the length and depth of reporting will be dependent upon journal style). These key points are presented as a ‘checklist’ below:
Explain the purpose or aim of the research, with the explicit identification of the research question.
Explain why the research was necessary and place the study in context, drawing upon previous work in relevant fields (the literature review).
Describe in (proportionate) detail how the research was done.
State the chosen research method or methods, and justify why this method was chosen.
Describe the research tool. If an existing tool is used, briefly state its psychometric properties and provide references to the original development work. If a new tool is used, include an entire section describing the steps undertaken to develop and test the tool, including the results of psychometric testing.
Describe how the sample was selected and how data were collected, including:
How were potential subjects identified?
How many and what type of attempts were made to contact subjects?
Who approached potential subjects?
Where were potential subjects approached?
How was informed consent obtained?
How many agreed to participate?
How did those who agreed differ from those who did not agree?
What was the response rate?
Describe and justify the methods and tests used for data analysis.
Present the results of the research. The results section should be clear, factual, and concise.
Interpret and discuss the findings. This ‘discussion’ section should not simply reiterate results; it should provide the author’s critical reflection upon both the results and the processes of data collection. The discussion should assess how well the study met the research question, should describe the problems encountered in the research, and should honestly judge the limitations of the work.
Present conclusions and recommendations.
The researcher needs to tailor the research report to meet:
The expectations of the specific audience for whom the work is being written.
The conventions that operate at a general level with respect to the production of reports on research in the social sciences.
Anyone involved in collecting data from patients has an ethical duty to respect each individual participant’s autonomy. Any survey should be conducted in an ethical manner that accords with best research practice. Two important ethical issues to adhere to when conducting a survey are confidentiality and informed consent.
The respondent’s right to confidentiality should always be respected and any legal requirements on data protection adhered to. In the majority of surveys, the patient should be fully informed about the aims of the survey, and the patient’s consent to participate in the survey must be obtained and recorded.
The professional bodies listed below, among many others, provide guidance on the ethical conduct of research and surveys.
Survey research demands the same standards in research practice as any other research approach, and journal editors and the broader research community will judge a report of survey research with the same level of rigour as any other research report. This is not to say that survey research need be particularly difficult or complex; the point to emphasize is that researchers should be aware of the steps required in survey research, and should be systematic and thoughtful in the planning, execution, and reporting of the project. Above all, survey research should not be seen as an easy, ‘quick and dirty’ option; such work may adequately fulfil local needs (e.g. a quick survey of hospital staff satisfaction), but will not stand up to academic scrutiny and will not be regarded as having much value as a contribution to knowledge.
A Quaker Businessman: Biography of Joseph Rowntree (1836–1925). London: Allen & Unwin,
The Good Research Guide: For Small-scale Social Research Projects. Buckingham: Open University Press,
Real World Research: A Resource for Social Scientists and Practitioner-researchers. Oxford: Blackwell Publishers,
Health Measurement Scales: A Practical Guide to their Development and Use. Oxford: Oxford University Press,
Int J Qual Health Care
Research Methods in Health. Investigating Health and Health Services. Buckingham: Open University Press,
Researching Social Life. London: SAGE Publications,
Int J Nurs Stud
Int J Qual Health Care
Br Med J
Br J Educ Psychol
2003; in press.
Health Psychology in Practice. London: SAGE Publications,
2003; in press.
Br Med J
Nursing Research: The Application of Qualitative Approaches. London: Chapman and Hall,
Understanding Statistics: An Introduction for the Social Sciences. London: SAGE Publications,
International Journal for Quality in Health Care 15(3) © International Society for Quality in Health Care and Oxford University Press 2003; all rights reserved
Address reprint requests to John Sitzia, Research Department, Worthing Hospital, Lyndhurst Road, Worthing BN11 2DH, West Sussex, UK. E-mail: firstname.lastname@example.org