Module 2 - Data Collection Methods

2.1 - Qualitative Research & Content Analysis

Qualitative Versus Quantitative Research

Qualitative research tends to focus on the collection of detailed primary data from relatively small samples of subjects by asking questions and observing behavior (Hair, Bush and Ortinau).

- "Qualitative research is the collection, analysis and interpretation of data that cannot be meaningfully quantified, that is, summarized in the form of numbers." (Parasuraman, Grewal, Krishnan)
- Qualitative data are often referred to as "soft data," while quantitative data are often called "hard data."
- Any nonstructured or observational approach that has not been quantified can be called qualitative (Parasuraman, Grewal, Krishnan).
- Quantitative research is the collection of data that typically involves larger, more representative respondent samples and the numerical calculation of results (Parasuraman, Grewal, Krishnan).
- Qualitative research accounts for a significant portion of research expenditures (about 32 percent) (Parasuraman, Grewal, Krishnan).
- Many researchers have taken great pains to integrate the two approaches rather than keep them separate.
- Many researchers take great pains to "quantify" qualitative or observational data.

Focus Group Interviews

Focus groups are the most popular qualitative method (Hair, Bush and Ortinau).

- A focus group is a research technique that relies on an objective discussion leader or moderator who introduces a topic to a group of respondents and directs their discussion of that topic in a nonstructured and natural fashion. (Parasuraman, Grewal, Krishnan)
- The sections that follow cover:
  - Focus Group Characteristics
  - Group Composition
  - Focus Group Dynamics
  - Moderators
  - Conducting Focus Groups
  - Advantages of Focus Groups
  - Disadvantages of Focus Groups
  - Applications of Focus Groups
  - Technology and Focus Groups

Focus Group Characteristics

Focus groups offer more stimulation to participants than individual interviews (Aaker and Day).

- Groups of 6 to 12 participants
- Informal and relaxed atmosphere
- Conversational
- Session leader/moderator
- Written outline that moves from general to specific, although the conversation often progresses out of order

Focus Group Composition

As a rule, it is undesirable to combine participants from different social classes or stages in the life cycle (Aaker and Day).

- Groups should be homogeneous with respect to demographic and socioeconomic characteristics.
- Homogeneity increases the relaxed and natural feel.
- Heterogeneous groups introduce noise into the data.
- It is best to use several homogeneous groups with different characteristics to gain a complete picture.
- Making generalizations from focus groups is statistically invalid. It is an empirical misuse of the data to draw generalizations from focus groups, even if a large number of groups are conducted with a large number of respondents.

Focus Group Dynamics

Poorly conducted focus groups yield misleading results (Aaker and Day).

- Untrustworthy Answers - respondents often modify their answers in a group setting to impress the other members of the session or to avoid embarrassment in front of others.
- Focus group interviews can be unduly influenced by the interviewer, effectively resulting in Interviewer-Related Error.
- If the group of participants varies too greatly, the resulting data are potentially corrupted by a Lack of Homogeneity.
- Finally, because of the difficulty of securing sufficient respondents, researchers may resort to Poor Screening of Participants, which negatively impacts the results.

Focus Group Moderators

Effective moderation encourages all participants to discuss their feelings (Aaker and Day).

- Focus group moderators are also sometimes called session leaders or discussion leaders.
- The effectiveness of the moderator is critical to the overall success of the session.
- Moderators require a set of characteristics much like the skills needed to conduct good observational research:
  - Good observation skills
  - Good interpersonal skills
  - Good communication skills
  - Good interpretative skills
  - Good note-keeping skills
- Skills needed to moderate a focus group session (Parasuraman, Grewal, Krishnan):
  - Kind but firm, yet permissive - encouraging and directing the conversation
  - Encouraging involvement - immersed in the conversation without directing the responses, using noncommittal remarks and probing responses (i.e., direct the topic, not the answers)
  - Flexible and sensitive - understands the flow of conversation, keeping it appropriate and undistracted while not discouraging input
  - Must be able to communicate what was meant as well as what was said
  - Needs to be able to fit in, but not necessarily be a part of the group

Conducting Focus Groups

There is no one particular approach acceptable to all researchers (Hair, Bush and Ortinau).

- Length
  - A session typically lasts 1 1/2 to 2 hours.
  - Time must be used efficiently.
- Atmosphere
  - The setting should be relaxed, with an "at home" feel.
  - Light refreshments
- Recording the session - important but risky
  - Improves data collection
  - Increases cost
  - May distract or influence the conversation
  - Needs to be inconspicuous, but open and honest

Advantages of Focus Groups

Focus groups provoke more spontaneity and candor (Aaker and Day).

- Richness of Data
  - The group setting provides different results from one-on-one interviews.
  - Synergy of group interaction
- Versatility
  - Almost any topic involving people can be studied in this way.
  - Introduction of new ideas
- Ability to Study Special Respondents
  - Respondents who are difficult to measure individually, such as children
  - Others who refuse to be individually interviewed but are willing to compare notes with colleagues
- Impact on Managers
  - Managers feel more involved in focus group research.
  - Managers understand focus group research.

Disadvantages of Focus Groups

Poorly conducted focus groups waste a great deal of money (Aaker and Day).

- Lack of Generalizability
  - The typical focus group respondent is different from the population as a whole.
  - The group setting may not be the normal decision-making setting.
- Opportunity for Misuse
  - The temptation to generalize
  - Moderator issues in conducting and interpreting the sessions
- Cost
  - Professionally conducted focus groups are expensive.
  - Costs are compounded if several groups are needed.

Applications of Focus Groups

Focus groups can be used in almost any setting (Parasuraman, Grewal, Krishnan).

- Understanding Audiences
  - Gain insight into audience perceptions
  - Understand more about decision making in a group setting
- Advertising
  - Pretests of creative concepts
  - Pretests of copy
- Creative Planning
  - Introduction of new ideas
  - Generation of new ideas

Technology in Focus Groups

Technological advances have made focus group research easier and more productive (Parasuraman, Grewal, Krishnan).

- Electronic Group Interviewing (EGI) uses keypads or other electronic devices to reduce unproductive discussion time. Each participant is provided with a keypad, all of which are connected to a common video display screen visible to the entire group. This method can be used for polling opinion within the group and can be used anonymously. (Parasuraman, Grewal, Krishnan)
- Videoconference focus groups allow clients at multiple sites to view focus groups from remote locations. (Parasuraman, Grewal, Krishnan)
- Online focus groups allow researchers to conduct sessions using live, modified chat rooms.
- Some researchers do not consider technologically implemented focus groups to be true focus groups (Wimmer and Dominick).

Qualitative Interviews and Case Studies

Compared with focus groups, these methods involve a longer, more flexible relationship with the respondent (Aaker and Day).

- Interviews
  - In-Depth or Intensive Interviews
    - An in-depth interview is a one-to-one interview with customers that explores issues in depth. (Parasuraman, Grewal, Krishnan)
    - Nondirective interviews give respondents maximum freedom to respond within the bounds of the topic. (Aaker and Day)
    - Semistructured or focused in-depth interviews consist of semistructured, probing questions. (Aaker and Day; Hair, Bush and Ortinau)
    - This type of interview is extremely demanding and depends on the skill of the interviewer. (Aaker and Day)
    - In-depth interviews allow researchers to collect both attitudinal and behavioral data. (Hair, Bush and Ortinau)
    - Effective in-depth interviewing requires trained, skilled interviewers who avoid framing questions that invite a "no" response. (Hair, Bush and Ortinau)
  - Crowded One-on-One Interviews
    - In a crowded one-on-one interview, up to three client personnel are present in the room and observe a depth interview as it is conducted by a professional interviewer in a conventional fashion. (Parasuraman, Grewal, Krishnan)
    - After the interview, the client personnel are allowed to ask additional, clarifying questions. (Parasuraman, Grewal, Krishnan)
    - This technique is useful for new concepts and designs in the formulation stage. (Parasuraman, Grewal, Krishnan)
  - Experience Surveys
    - Sometimes called expert interviews or executive interviews.
    - Experience surveys refer to the informal gathering of information from individuals thought to be knowledgeable on the issues relevant to the information research problem. (Hair, Bush and Ortinau)
    - Experience surveys do not require, nor do they imply, that the respondents are representative of an overall group of subjects. (Hair, Bush and Ortinau)
- Case Studies
  - A case study uses as many data sources as possible to systematically investigate individuals, groups, organizations or events. (Wimmer and Dominick)
  - A case study is a comprehensive description and analysis of a single situation. (Aaker and Day)
  - The data are typically collected using a series of long, unstructured interviews combined with secondary and internal data. (Aaker and Day)
  - There are some circumstances in which case studies are the only way to understand a complex situation. (Aaker and Day)

Projective Techniques

The underlying objective of projective techniques is to learn more about subjects in situations where they might not reveal their true thoughts under direct questioning (Hair, Bush and Ortinau).

- With projective techniques, a fairly ambiguous stimulus is presented to respondents, who, by reacting to or describing the stimulus, indirectly reveal their own inner feelings. (Parasuraman, Grewal, Krishnan)
- Word Association Tests
  - In a word association test, words are read aloud to each respondent one at a time. The respondent is asked to say the first word that comes to mind as soon as each stimulus word is presented. These responses are then interpreted.
  - Researchers look for hidden meanings and associations between responses and the words on the list. (Hair, Bush and Ortinau)
  - This technique has been particularly useful in obtaining reactions to potential brand names and advertising slogans. (Aaker and Day)
- Sentence Completion Tests
  - In a sentence completion test, respondents are asked to finish a set of incomplete sentences. (Parasuraman, Grewal, Krishnan)
  - Sentences are usually worded in the third person. (Aaker and Day)
  - This technique can be expanded to the completion of an incomplete narrative. (Aaker and Day)
- Thematic Apperception Test (TAT)
  - The thematic apperception test is a nonstructured, disguised form of questioning in which respondents are shown a series of pictures, one at a time, and asked to write a story about each. (Parasuraman, Grewal, Krishnan)
- Zaltman's Metaphor Elicitation Technique (ZMET)
  - ZMET tries to bring to the surface the mental models that drive consumer thinking by analyzing the metaphors that consumers use. (Parasuraman, Grewal, Krishnan)
  - A metaphor is a figure of speech that implies a comparison between two unlike entities. (Parasuraman, Grewal, Krishnan)
- Picture Tests
  - In a picture test, the respondent is shown an ambiguous picture or photograph and is asked to describe it. (Aaker and Day)
  - Picture tests are based on TAT approaches.
  - This is a flexible approach with many applications.
- Cartoon or Balloon Tests
  - A cartoon test is a pictorial technique like the TAT. The respondent is asked to examine the stimulus picture and fill in the empty "balloon" with words reflecting the thoughts or verbal statements of the characters involved. (Parasuraman, Grewal, Krishnan)
  - This method uses cartoon figures drawn in a vague manner to avoid suggesting a response. (Hair, Bush and Ortinau)
- Protocol Interviews
  - In protocol interviewing, the subject is placed in a specific decision-making situation and asked to verbally express the process and activities that he or she would undertake to make the decision. (Hair, Bush and Ortinau)
  - This technique is useful for uncovering the motivational and procedural elements of decision making.
- Role Playing
  - In role playing, the subject is asked to act out someone else's behavior in a specific setting. (Hair, Bush and Ortinau)
  - These techniques are also called third-person techniques.
  - Respondents are asked to take on the identity of a third person, such as a neighbor, friend or "most people," placed in a specific, predetermined situation. (Hair, Bush and Ortinau)
  - This technique can help defuse socially desirable, but untruthful, responses. (Aaker and Day)
- Multiple Projective Techniques
  - To fully understand the motivation behind behavior, multiple projective techniques are often used. (Teweles)
  - Using only one technique to research feelings is inadequate.

Content Analysis

Content analysis has become a popular research data collection technique (Wimmer and Dominick).

- Content analysis is a data collection technique, not a data analysis method.
- Content analysis is considered a qualitative data collection technique because it can be applied to interpretive cases in an intensive manner (Reinard).
- Content analysis is a systematic procedure devised to examine the content of any recorded information (Wimmer and Dominick).
- Content analysis is a highly structured method for quantifying secondary data (a minimal coding sketch follows this list).
- While the data collection method is considered qualitative, the data analysis methods are most often descriptive.
- Applications
  - Advertising (both print and broadcast)
  - Children's stories
  - Web pages
  - Broadcast programming
  - Historical and government documents
  - Speeches
  - Many others
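A minimal illustrative sketch in Python (not from the course texts): a hypothetical coding scheme of indicator words is applied to a few made-up ad slogans, and the frequency of each content category is tallied. A real content analysis would rely on a tested coding scheme and trained human coders rather than simple keyword matching.

    # Minimal sketch (hypothetical coding scheme and texts): tally how often each
    # content category appears across a set of recorded messages.
    from collections import Counter

    coding_scheme = {                      # category -> indicator words (made up)
        "price appeal": {"save", "cheap", "deal", "free"},
        "quality appeal": {"best", "premium", "quality"},
        "emotional appeal": {"love", "family", "happy"},
    }

    slogans = [                            # hypothetical units of analysis
        "Save big on the best deal in town",
        "Premium quality your family will love",
        "Happy prices for a happy family",
    ]

    counts = Counter()
    for text in slogans:
        words = set(text.lower().split())
        for category, indicators in coding_scheme.items():
            if words & indicators:         # code the category as present in this unit
                counts[category] += 1

    print(counts)                          # frequency of each category across the slogans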
       

2.2 - Measurement and Scaling

Overview of the Measurement Process

Critical to the process of collecting primary data is the development of well-constructed measurement procedures (Hair, Bush and Ortinau).

- Measurement can be defined as "rules for assigning numbers to objects in such a way as to represent quantities of attributes." (Bennett)
- Measurement is the assignment of numbers to responses based on a set of guidelines. (Parasuraman, Grewal, Krishnan)
- The measurement process consists of two distinctly different development processes.
  - Scale Measurement
    - The focus is on measuring the existence of various properties or characteristics of a person's response (Hair, Bush and Ortinau).
    - The assignment property (also called the description or category property) is the employment of unique descriptors to identify each object in a set, which allows the placement of responses into mutually exclusive groups (Hair, Bush and Ortinau).
    - The order property refers to the relative magnitude between the descriptors or labels assigned to the scale points, which allows judgments of greater than or less than and the ordering of responses (Hair, Bush and Ortinau).
    - The distance property is the measurement of exact (or absolute) distances between each of the scale points (Hair, Bush and Ortinau).
    - The origin property refers to a unique starting point designated as a "true natural zero" (Hair, Bush and Ortinau).
  - Construct Development
    - Objects are tangible items in a person's environment that can be clearly and easily identified through his or her senses (Hair, Bush and Ortinau).
    - Constructs are hypothetical variables made up of a set of component responses or behaviors that are thought to be related (Hair, Bush and Ortinau).
    - Many constructs (quality, satisfaction, preferences) cannot be directly measured; researchers attempt to measure them indirectly through operationalization of their components (Hair, Bush and Ortinau).
    - Construct operationalization is the process wherein the researcher explains a construct's meaning in measurement terms by specifying the activities or operations necessary to measure it (Hair, Bush and Ortinau).

Measurement Levels

Scientists have distinguished four different ways to measure things, or four levels of measurement (Wimmer and Dominick).

- Quantified responses fall into one of four measurement levels. Measurement levels are also known as scales of measurement (Parasuraman, Grewal, Krishnan).
- The assignment of numbers is made according to rules that should correspond to the properties of whatever is being measured (Aaker and Day).
  - We measure attributes of something - not the thing or the person itself.
  - We can use four different approaches to assigning numbers.
- Central Tendency
  - Central tendency is a number depicting the "middle" position in a given range or distribution of numbers (Parasuraman, Grewal, Krishnan).
  - Three measures of central tendency are the mode, the median and the mean (see the sketch after this list).
  - The mode is the most frequent category (Parasuraman, Grewal, Krishnan).
  - The median is the category into which the 50th-percentile response falls when all responses are arranged from lowest to highest (or highest to lowest). (Parasuraman, Grewal, Krishnan)
  - The mean is the simple average of the numbers (Parasuraman, Grewal, Krishnan).
- Nominal-Scaled Responses
- Ordinal-Scaled Responses
- Interval-Scaled Responses
- Ratio-Scaled Responses
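A minimal sketch in Python, using a made-up set of rating responses, of how the three measures of central tendency are computed; which of them may legitimately be reported depends on the measurement level, as the following sections explain.

    # Minimal sketch (hypothetical ratings): the three measures of central tendency.
    from statistics import mean, median, mode

    responses = [4, 5, 3, 4, 4, 2, 5, 4, 3, 5]   # made-up 5-point ratings

    print("Mode:  ", mode(responses))      # most frequent category      -> 4
    print("Median:", median(responses))    # middle of the sorted values -> 4.0
    print("Mean:  ", mean(responses))      # simple average              -> 3.9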
       

Nominal-Scaled Responses

Nominal scales allow the researcher only to categorize the raw responses into mutually exclusive subsets that do not illustrate the relative magnitudes between them (Hair, Bush and Ortinau).

- On a nominal scale, numbers are no more than labels and are used solely to identify different categories or responses (Parasuraman, Grewal, Krishnan).
- A nominal scale has the assignment property only; there is no implied ordering or ranking of the categories (categories may be dichotomous [yes/no] or multiple groups [freshman, sophomore, junior, senior]).
- There is no set distance (interval) between the categories.
- Nominal-scaled data are classified as nonmetric because the interval between categories has no meaning.
- Dummy Variables
  - Conversion of categorical data to nominal-level data results in a series of "dummy" variables (Wimmer and Dominick).
  - For example, data collected about the school in which a Berry student is seeking a degree might be entered as Evans (1), Charter (2), MNS (3) and CSOB (4). Using these codes in analysis would create problems because they imply a rank ordering.
  - The data should instead be converted into four dummy variables: Evans (0/1), Charter (0/1), MNS (0/1) and CSOB (0/1). (See the sketch after this list.)
- Central Tendency
  - Central tendency is a number depicting the "middle" position in a given range or distribution of numbers. (Parasuraman, Grewal, Krishnan)
  - The only acceptable applications of mathematics to nominal-scaled data are counting, reporting percentages and reporting the mode.
  - The mode is the most frequent category. (Parasuraman, Grewal, Krishnan)
  - No measure beyond the mode is an acceptable measure of central tendency.
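A minimal sketch of the dummy-variable conversion described above, assuming Python with pandas and the four hypothetical school categories from the example; pandas.get_dummies performs the 0/1 recoding in one step.

    # Minimal sketch (hypothetical respondents): recoding a nominal variable into
    # 0/1 dummy variables instead of analyzing the 1-4 codes as if they were numeric.
    import pandas as pd

    df = pd.DataFrame({"school": ["Evans", "Charter", "MNS", "CSOB", "Evans"]})

    # One 0/1 column per category avoids the implied rank ordering of codes 1-4.
    dummies = pd.get_dummies(df["school"], prefix="school").astype(int)
    print(pd.concat([df, dummies], axis=1))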
     

Ordinal-Scaled Responses

With an ordinal scale, the researcher can rank-order the raw responses into a hierarchical pattern (Hair, Bush and Ortinau).

- An ordinal scale is more powerful than a nominal scale in that the numbers possess the property of rank order (Parasuraman, Grewal, Krishnan).
- An ordinal scale has the assignment and order properties.
- Ordinal scales provide an order to the data, but not an interval; exact distances cannot be inferred from the scale (e.g., we know that category two is greater than category one, but we do not know by how much).
- Ordinal-scaled data are classified as nonmetric because the interval between categories has no meaning.
- Central Tendency
  - Both the mode and the median are acceptable measures of central tendency for ordinal data.
  - A mean should not be calculated.

Interval-Scaled Responses

Interval scales demonstrate absolute differences between the scale points (Hair, Bush and Ortinau).

- An interval scale has all the properties of an ordinal scale; additionally, the differences between scale values can be meaningfully interpreted (Parasuraman, Grewal, Krishnan).
- An interval scale has the assignment, order and distance properties.
- Interval data are continuous.
- Strictly speaking, variables such as attitudes, opinions and preferences cannot be quantified to yield an exact interval scale (Parasuraman, Grewal, Krishnan).
  - However, we frequently treat semantic differential and Likert-type scales as interval, based on the assumption that respondents treat the differences between categories as equal distances.
  - This assumption may be erroneous.
  - This use of data is sometimes called an ordinal-interval hybrid scale (Hair, Bush and Ortinau).
- Interval-scaled data are classified as metric because the interval between categories has meaning.
- Central Tendency
  - Interval scaling permits the use of the mean and standard deviation in addition to the mode and median.
  - The standard deviation is a measure of dispersion: the degree of deviation of the numbers from their mean (Parasuraman, Grewal, Krishnan).
- Ratios of Interval Data
  - The ratio of two values on an interval scale is arbitrary and has no meaningful interpretation because it depends on the scale's starting point (Parasuraman, Grewal, Krishnan).
  - Changing the numbers assigned to the response categories changes the value of the ratio, as the sketch below illustrates.
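A short worked sketch in Python (hypothetical 1-5 ratings) of the point above: shifting the scale's arbitrary starting point changes the ratio between two responses but leaves their difference intact, which is why only differences are meaningful for interval data.

    # Minimal sketch (made-up ratings): ratios of interval-scaled values are arbitrary.
    a_1to5, b_1to5 = 4, 2                        # two respondents on a 1-5 scale
    a_0to4, b_0to4 = a_1to5 - 1, b_1to5 - 1      # the same answers recoded as 0-4

    print(a_1to5 / b_1to5)                       # 2.0 -> "twice as favorable"?
    print(a_0to4 / b_0to4)                       # 3.0 -> same answers, different ratio

    # The difference is unaffected by the recoding - exactly what the distance
    # property guarantees.
    print(a_1to5 - b_1to5, a_0to4 - b_0to4)      # 2 2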
     

Ratio-Scaled Responses

A ratio scale tends to be the most sophisticated scale in the sense that it allows the researcher not only to identify the absolute differences between each scale point (or raw response) but also to make absolute comparisons between the raw responses (Hair, Bush and Ortinau).

- A ratio scale possesses all the properties of an interval scale, and the ratios of numbers on these scales have meaningful interpretations (Parasuraman, Grewal, Krishnan).
- A ratio scale has all four properties: assignment, order, distance and origin.
- Ratio-scaled data have a natural, unambiguous starting point of zero that has not been chosen arbitrarily (Parasuraman, Grewal, Krishnan).
- Ratio-scaled data are classified as metric because the interval between categories has meaning.

Classes of Variables

Identification of which properties should be investigated requires knowledge and understanding of constructs (Hair, Bush and Ortinau).

- Attributes
- Behavior-Related Variables
- Mind-Related Variables

Attributes

Attributes are directly observable, physically verifiable and measurable (Hair, Bush and Ortinau).

- An attribute is a personal or demographic characteristic, such as education level, age, size of household and number of children (Parasuraman, Grewal, Krishnan).
- State-of-Being Data are verifiable facts (Hair, Bush and Ortinau).

Behavior-Related Variables

Behavioral variables relate to such things as frequency of visits to a store and extent of magazine readership (Parasuraman, Grewal, Krishnan).

- The Action or Intentions Component refers to behavior toward an object (Aaker and Day).
- State-of-Behavior Data (past and current behaviors) are a person's or organization's current observable or recorded actions or reactions. These variables are verifiable (Hair, Bush and Ortinau).
- State-of-Intention Data (planned future behavior) are a person's or organization's expressed plans for future behavior. These variables are difficult, but possible, to verify (Hair, Bush and Ortinau). Note: intentions also have a mental component; see Mind-Related Variables.

Mind-Related Variables

Data quality and accuracy may be limited by the honesty and abilities of the respondent (Hair, Bush and Ortinau).

- State-of-Mind Data (mental thoughts or emotional feelings) are the mental attributes or emotional feelings of people. These variables are not observable and exist only within the mind of the respondent; verification through external sources is all but impossible (Hair, Bush and Ortinau).
- Three types of state-of-mind measures are commonly used in behavioral research.
  - The Cognitive or Knowledge Component
    - The cognitive or knowledge component represents the person's information about the object, including awareness, beliefs and importance (Aaker and Day).
    - A belief relates to knowledge and what respondents consider (correctly or incorrectly) to be true (Parasuraman, Grewal, Krishnan).
    - Beliefs occur at the conscious level, and respondents can express them.
  - The Affective or Liking Component
    - The affective or liking component summarizes the person's feelings about the object, including liking and preference (Aaker and Day).
    - An attitude is similar to a belief, except that it also reflects a respondent's evaluative judgment (Parasuraman, Grewal, Krishnan).
    - Attitudes have direction.
    - Attitudes occur at the conscious level, and respondents can express them.
  - Values
    - A value is a broad tendency to prefer certain states of affairs over others. Values are an attribute of both individuals and collectives (Hofstede).
    - Values have both intensity and direction (Hofstede).
    - Values occur at the subconscious level, and respondents have difficulty expressing them.
    - Values are often measured indirectly through attitudes.

Attitude Scaling

Attitudes are widely believed to be a key determinant of behavior (Parasuraman, Grewal, Krishnan).

- Observing Overt Behavior
- Analyzing Reactions to Partially Structured Stimuli
- Evaluating Performance on Objective Tasks
- Monitoring Physiological Responses
- Self-Report Measurement of Attitudes

Observing Overt Behavior

Observation of overt behavior is useful when other attitude measurements are inconvenient or infeasible (Parasuraman, Grewal, Krishnan).

- Because a number of factors other than attitudes influence behavior, observation of behavior yields only rough estimates of attitudes (Parasuraman, Grewal, Krishnan).
- Assumption: behaviors are consistent with attitudes. (Parasuraman, Grewal, Krishnan)
- Observation may be coupled with other methods as a means of verifying attitudes.

Analyzing Reactions to Partially Structured Stimuli

Partially structured stimuli may give insight into attitudes.

- This method is implemented by asking respondents to react to or describe an incomplete stimulus (one of several projective techniques) (Parasuraman, Grewal, Krishnan).
- Assumption: the response is shaped by attitudes (Parasuraman, Grewal, Krishnan).
- Analyzing reactions to partially structured stimuli yields only rough estimates of attitudes.

Evaluating Performance on Objective Tasks

This method relies on the concepts of selective retention and selective distortion to uncover underlying attitudes.

- This method is implemented by asking respondents to complete a well-defined task (Parasuraman, Grewal, Krishnan).
- Assumption: respondents will remember only information consistent with their attitudes or will distort information to conform to their attitudes.
- Construction of such tasks is difficult, but the method may be useful when the researcher suspects that respondents would not answer truthfully if questioned directly.

Monitoring Physiological Responses

Physiological responses indicate emotional reactions to stimuli.

- This method is implemented by mechanically or electronically measuring physical responses to stimuli.
- Assumption: respondents will show an involuntary physiological change based on an emotional reaction to the stimulus.
- Monitoring physiological responses measures only emotional arousal, not attitudes.

Self-Report Measurement of Attitudes

Self-report measures are commonly used.

- Self-report measures are the most straightforward approach to measuring attitudes (Parasuraman, Grewal, Krishnan).
- This method is implemented by asking respondents directly about their attitudes.
- Assumption: respondents can express their attitudes and will do so truthfully.

Use of Rating Scales in Self-Report Measurements

Rating scales take on a variety of physical forms (Parasuraman, Grewal, Krishnan).

- Graphic versus Itemized Formats
- Comparative versus Noncomparative Assessments
- Forced versus Nonforced Response Choices
- Balanced versus Unbalanced Response Choices
- Labeled versus Unlabeled Response Choices
- Number of Scale Positions
- Measurement Level of Data Obtained

Graphic versus Itemized Formats

The itemized category scale is widely used by many researchers (Aaker and Day).

- A graphic rating scale presents a continuum, in the form of a straight line, along which a theoretically infinite number of ratings are possible (Parasuraman, Grewal, Krishnan).
  - Physical measurements are taken to quantify responses (Parasuraman, Grewal, Krishnan).
  - Respondents may not use the scale effectively (Parasuraman, Grewal, Krishnan).
- An itemized rating scale has a set of distinct response categories; any suggestion of an attitude continuum underlying the categories is implicit (Parasuraman, Grewal, Krishnan).
  - This scale is easier for respondents to use and for researchers to code (Parasuraman, Grewal, Krishnan).
  - The continuous nature of the data is only implied.
- Some researchers use a combination of the two (Parasuraman, Grewal, Krishnan).

Comparative versus Noncomparative Assessments

With comparative assessments, the respondent is explicitly asked to compare the object being rated with other objects or conditions.

- A comparative rating scale provides all respondents with a common frame of reference (Parasuraman, Grewal, Krishnan).
  - Researchers can be more confident that respondents are using the same frame of reference and hence are answering the same question (Parasuraman, Grewal, Krishnan).
  - The common frame of reference may not be meaningful to all potential respondents, which brings validity into question (Parasuraman, Grewal, Krishnan).
- A noncomparative rating scale implicitly permits respondents to use any frame of reference, or even none at all (Parasuraman, Grewal, Krishnan).

Forced versus Nonforced Response Choices

Different results may be obtained with a forced-choice format.

- A forced-choice scale does not give respondents the option to express a neutral or middle ground (Parasuraman, Grewal, Krishnan).
  - A scale with an even number of response categories forces respondents to take a position toward one endpoint or the other (e.g., negative/positive or strong/weak).
  - There are no ambiguous answers, and the format may discourage a reluctant respondent from "sitting on the fence" or failing to reveal a true response.
  - Answers may misrepresent respondents' true feelings.
  - Reluctant respondents may refuse to answer.
- A nonforced-choice scale gives respondents the option to express a neutral or middle ground (Parasuraman, Grewal, Krishnan).
  - Typically, if an odd number of response categories is given (excluding an unbalanced design), the middle position is neutral.
  - Respondents may feel more comfortable with a neutral option.

Balanced versus Unbalanced Response Choices

Itemized rating scales, in general, should be balanced to reduce response biases (Parasuraman, Grewal, Krishnan).

- A balanced scale has an equal number of positive/favorable and negative/unfavorable response choices (Parasuraman, Grewal, Krishnan).
  - A balanced scale is not as sensitive to differences in opinion when most opinions fall on one side of the issue.
- An unbalanced scale has a larger number of response choices on the side of the scale where the overall attitude of the respondent sample is likely to fall (Parasuraman, Grewal, Krishnan).
  - An unbalanced scale is useful for predominantly one-sided issues and when the degree or magnitude of response is needed.

Labeled versus Unlabeled Response Choices

No rules exist for determining the number and types of labels to include in a scale (Parasuraman, Grewal, Krishnan).

- An anchor label defines one of the two extremes of a rating scale (Parasuraman, Grewal, Krishnan).
  - Rating scales typically have pairs of anchor labels.
  - Rating scales may label one or more points between the anchor labels.
- A scale can have a label for every point.
- A labeled scale can use pictorial representations of the scale points (which tends to be useful when collecting data from children).
- A scale can have an anchor label at each endpoint and number labels in between.
- A scale can have an anchor label at each endpoint and no labels in between.

Number of Scale Positions

The number of points or categories to include in a rating scale is another area with no rigid rules (Parasuraman, Grewal, Krishnan).

- Researchers typically include between 5 and 9 scale points.
- Another common approach is to use 100 points for assigning percentages or "grades."
- Precision can increase as the number of scale points increases, but...
- Respondents have more difficulty with large numbers of scale points.
  - When a respondent finds a scale too large, he or she will simply use a subset of the points.
  - The subsets used vary from respondent to respondent: some use the endpoints and the middle point, while others concentrate around the middle.
- Scale preferences vary across cultures.

Measurement Level of Data Obtained

The type of question and the type of rating scale used have a major bearing on the measurement level of the data generated.

- The measurement level indicates how powerful the data are - whether they are nominal, ordinal, interval or ratio (Parasuraman, Grewal, Krishnan).
- Constant-Sum Scale
  - A constant-sum scale has a natural starting point (zero) and asks respondents to allocate a given set of points among several attitude objects (Parasuraman, Grewal, Krishnan).
  - A natural starting point is one that is nonarbitrary and remains constant across respondents.
  - This scale yields ratio data.
- Paired-Comparison Rating Scale
  - A paired-comparison rating scale consists of a question seeking a comparative evaluation of two objects at a time (Parasuraman, Grewal, Krishnan).
  - Despite its strengths, it can be difficult to implement with a large number of comparisons: an assessment of n items requires n(n-1)/2 paired comparisons, so 10 items require 45 comparisons. (See the sketch after this list.)
- Rankings
  - Easy to administer
  - This approach provides ordinal data.
- Response Categories
  - Easy to administer
  - This approach provides ordinal to interval data.
- Number of Items
  - Single Item
    - A single-item scale attempts to measure feelings through just one rating scale (Parasuraman, Grewal, Krishnan).
    - A single measurement of a nontangible construct is undesirable.
  - Multiple Item
    - A multiple-item scale contains a number of statements pertaining to the attitude object, each with a rating scale attached to it; the combined rating, usually obtained by summing the ratings on the individual items, is treated as a measure of attitudes toward the object (Parasuraman, Grewal, Krishnan).
    - Multiple-item measures are more desirable.
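A minimal sketch in Python, using hypothetical brands and made-up pairwise winners, of the two points above: the number of paired comparisons grows as n(n-1)/2, and the pairwise results can be reduced to an ordinal ranking by counting wins.

    # Minimal sketch (hypothetical brands and preferences).
    from itertools import combinations

    brands = ["A", "B", "C", "D", "E"]
    pairs = list(combinations(brands, 2))
    print(len(pairs))                            # n(n-1)/2 = 5*4/2 = 10 comparisons
    print(len(list(combinations(range(10), 2)))) # 10 items -> 45 comparisons

    # Made-up winner of each of the 10 pairs, as judged by one respondent.
    winners = ["A", "A", "C", "A", "B", "C", "B", "C", "D", "E"]
    wins = {b: 0 for b in brands}
    for w in winners:
        wins[w] += 1

    # Sorting by win count yields an ordinal ranking (ties are possible).
    print(sorted(brands, key=lambda b: wins[b], reverse=True))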
         

Commonly Used Multiple-Item Scales

Multiple-item scales are frequently used in behavioral research to measure attitudes (Parasuraman, Grewal, Krishnan).

- Likert Scale
  - A Likert scale consists of a series of evaluative statements (or items) concerning an attitude object; respondents are asked to rate the object on each statement (or item) using a five-point agree-disagree scale (Parasuraman, Grewal, Krishnan).
  - Developed by Rensis Likert.
  - Includes a mixture of favorable and unfavorable statements.
  - Typically includes 20 to 30 statements.
  - Researchers typically start with more than 30 statements (between 50 and 100) and weed out poorly performing items - those that do not discriminate well.
  - Responses are summed. (See the sketch after this list.)
- Likert-Type Scales
  - The Likert scale has been modified and used in a number of alternative ways.
  - May use more or fewer than five points.
  - May use something other than agree/disagree.
  - May use a forced choice.
  - Multiple items are still used.
  - Items may be summed or averaged.
- Semantic-Differential Scale
  - A semantic-differential scale is similar to the Likert scale in that it consists of a series of items to be rated by respondents; however, the items are presented as bipolar adjectival phrases or words placed as the anchor labels of a seven-category scale with no other numerical or verbal labels (Parasuraman, Grewal, Krishnan).
  - Uses some reverse-scaled items.
- Stapel Scale
  - A Stapel scale is a variation of the semantic-differential scale; however, each item consists of just one word or phrase, on which respondents rate the attitude object using a ten-point scale with only numerical labels (Parasuraman, Grewal, Krishnan).
  - This is a forced-choice scale.
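A minimal sketch in Python, with made-up items and ratings, of how a Likert (or Likert-type) scale is scored: unfavorable statements are reverse-coded so that a higher number always indicates a more favorable attitude, and the item ratings are then summed.

    # Minimal sketch (hypothetical items, 1 = strongly disagree ... 5 = strongly agree).
    ratings = {
        "The staff is helpful": 4,
        "The store is clean": 5,
        "Checkout takes too long": 2,        # unfavorable statement
        "Prices are hard to find": 1,        # unfavorable statement
    }
    unfavorable = {"Checkout takes too long", "Prices are hard to find"}

    def item_score(item, rating, points=5):
        # Reverse-code unfavorable items (on a 5-point scale, 2 -> 4 and 1 -> 5).
        return (points + 1 - rating) if item in unfavorable else rating

    total = sum(item_score(item, r) for item, r in ratings.items())
    print(total)                             # 4 + 5 + 4 + 5 = 18 out of a possible 20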
       

Strength of Multiple-Item Scales

Single-item measures are rather crude measures (Parasuraman, Grewal, Krishnan).

- Validity
  - Validity is the extent to which a rating scale truly reflects the underlying variable it is attempting to measure (Parasuraman, Grewal, Krishnan).
  - Multiple-item scales have greater validity.
  - Content validity (sometimes called face validity) represents the extent to which the content of a measurement scale seems to tap all relevant facets of an issue that can influence respondents' attitudes (Parasuraman, Grewal, Krishnan).
  - Construct validity assesses the nature of the underlying variable or construct measured by the scale by examining the scale's convergent and discriminant validity (Parasuraman, Grewal, Krishnan).
  - Predictive validity answers the question, "How well do the attitude measures provided by the scale predict some other variable or characteristic they are supposed to influence?" (Parasuraman, Grewal, Krishnan)
- Reliability
  - Reliability measures how consistent or stable the ratings generated by the scale are likely to be (Parasuraman, Grewal, Krishnan).
  - Multiple-item scales have greater reliability.
    - Test-retest reliability measures the stability of ratings over time and relies on administering the scale to the same group of respondents at two different times (Parasuraman, Grewal, Krishnan).
    - Split-half reliability measures the degree of consistency across items within a scale and can only be assessed for multiple-item scales (Parasuraman, Grewal, Krishnan). (A sketch follows this list.)
- Sensitivity
  - Sensitivity is closely tied to reliability and focuses specifically on a scale's ability to detect subtle differences in the attitudes being measured (Parasuraman, Grewal, Krishnan).
  - Multiple-item scales have greater sensitivity.
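A minimal sketch in Python, using made-up responses to a six-item scale, of the split-half idea: the items are divided into two halves (here, odd versus even items), each respondent is scored on both halves, and the two sets of scores are correlated; a correlation near 1 indicates consistent items. (Adjustments such as Spearman-Brown are not covered in the course texts and are omitted here.)

    # Minimal sketch (hypothetical data): split-half reliability as the correlation
    # between respondents' scores on two halves of a multiple-item scale.
    from statistics import mean

    # Each row is one respondent's ratings on a 6-item, 5-point scale (made up).
    responses = [
        [4, 5, 4, 4, 5, 4],
        [2, 1, 2, 2, 1, 2],
        [3, 3, 4, 3, 3, 3],
        [5, 4, 5, 5, 5, 4],
        [1, 2, 1, 2, 2, 1],
    ]

    half_a = [sum(r[0::2]) for r in responses]   # items 1, 3, 5
    half_b = [sum(r[1::2]) for r in responses]   # items 2, 4, 6

    def pearson(x, y):
        mx, my = mean(x), mean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        var_x = sum((a - mx) ** 2 for a in x)
        var_y = sum((b - my) ** 2 for b in y)
        return cov / (var_x * var_y) ** 0.5

    print(round(pearson(half_a, half_b), 3))     # close to 1.0 -> consistent halves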
         
 

2.3 - Questionnaire Design

Questionnaire Design

A questionnaire is simply a set of questions designed to generate the data necessary to accomplish a research project's objectives (Parasuraman, Grewal, Krishnan).

- Complexity of Questionnaire Design
  - It is not as easy to write questions as it may first appear.
  - No rules can guarantee a flawless questionnaire. (Parasuraman, Grewal, Krishnan)
- The Questionnaire's Impact on Data Accuracy
  - Conclusive research is often conducted using a questionnaire, which must:
    - Communicate what is being asked of the respondent.
    - Communicate the respondent's answer back to the researcher.
  - Communication channel noise will affect a questionnaire.
    - A poorly designed questionnaire will generate a great deal of noise.
    - A well-designed questionnaire will generate less.
    - The use of intermediaries (such as interviewers) adds another layer of complexity, and conceivably more noise, if the questions are poorly written.
- Questionnaire Design Process
  - The process applies to questionnaires used in face-to-face interviews, telephone surveys, mail surveys and Internet surveys.
  - The questionnaire design process is iterative. (Parasuraman, Grewal, Krishnan)
    - Translate the data requirements into rough questions.
    - Check each question for proper form and relevance.
    - Decide on the sequencing of questions.
    - Develop the questionnaire layout.
    - Pretest the questionnaire.
  - After each step, the questionnaire should be revised and refined.

Question Form

There are two basic forms of questions; when structured questions are used, the response categories raise additional considerations.

- Nonstructured Questions
- Structured Questions
- Response Category

Nonstructured Questions

Respondents are free to answer open-ended questions in their own words (Churchill and Brown).

- Nonstructured questions are also called open-ended or unstructured questions.
- Open-ended questions do not all require lengthy answers, although many do.
- Lengthy open-ended questions are typically used in exploratory rather than conclusive research, but short or specific open-ended questions (What year were you born? What is your country of birth?) are appropriate and conceivably more accurate than the same questions in a fixed-response format.
- Interviews often use open-ended questions to "break the ice."

Structured Questions

There are two different approaches to structured questions.

- Structured questions are also called fixed-response or closed-ended questions.
- Fixed-response questions are distinguished by the number of response categories.
  - Dichotomous Question
    - A dichotomous question offers just two answer choices, typically yes/no. (Parasuraman, Grewal, Krishnan)
    - Some questions simply have only two viable options.
    - Some questions are written to force a choice between two options.
  - Multiple-Category Question (also called multichotomous)
    - A multiple-category question has more than two answer choices. (Parasuraman, Grewal, Krishnan)
    - The number of choices is, in part, driven by the questionnaire format and the characteristics of the question.

Response Category

If multiple responses are used, a researcher faces several considerations.

- Response Category Sequence
  - Response categories often follow a natural chronological sequence where no other alternative (other than possibly reversing the sequence) seems reasonable (e.g., age ranges, income ranges).
  - The sequence of responses may introduce bias into the questions.
    - When the response categories are numbered, responses in the middle are more commonly selected than those at the extremes.
    - When the response categories are words or statements, the extremes are more commonly selected than the middle.
  - This bias can be reduced with response category sequence rotation - changing the order in which respondents see the responses (e.g., Respondent A sees a different order from Respondent B); see the sketch after this list.
  - When the response categories are naturally ordered, we may present them in order to half the respondents and in reverse order to the other half - the split-ballot technique.
  - Why might that be less effective?
- Response Category Content
  - Collectively exhaustive - taken together, the categories should provide for every possible answer a respondent might give.
    - It is difficult for respondents to answer when their preferred choice is not an option.
    - Some will refuse to answer; others will provide an answer, but an inaccurate one.
  - Mutually exclusive - the categories should not overlap.
    - Respondents do not know how to answer when there is overlap.
    - Which category is correct for a 21-year-old given the age ranges 18-21, 21-24 and 24-27?
  - This requires an understanding of the entire range of potential answers and careful construction.
  - Inclusion of an "other" category increases the complexity of recording responses, but solves some serious problems.
  - Sometimes a consumer's opinions or attitudes may overlap even when the categories are collectively exhaustive and mutually exclusive. Allowing a respondent to "mark all that apply" helps with this issue, but increases the complexity of recording data.
- Number of Response Categories
  - Prior precedent is useful in determining the number of response categories.
  - The questionnaire method drives the number of categories that can reasonably be accommodated.
  - If aided-recall methods are used, all important response categories must be given, which can be cumbersome.
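A minimal sketch in Python (hypothetical response categories) of response-category sequence rotation: each respondent sees the substantive categories in a different, reproducible order, while an "Other" category stays in a fixed final position.

    # Minimal sketch (made-up categories): rotating response order per respondent.
    import random

    categories = ["Price", "Quality", "Convenience", "Brand reputation"]

    def rotated_order(respondent_id, categories, anchor_last="Other (please specify)"):
        rng = random.Random(respondent_id)   # seed with the ID -> reproducible order
        order = categories[:]                # copy so the master list is untouched
        rng.shuffle(order)
        return order + [anchor_last]         # "Other" stays last for every respondent

    for rid in (101, 102):
        print(rid, rotated_order(rid, categories))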
       

Question Relevance and Wording

One of the most critical tasks after drawing up a rough questionnaire draft is to ensure the relevance of every question on it (Parasuraman, Grewal, Krishnan).

- Can the respondent answer the question?
- Will the respondent answer the question?
- Avoiding Double-Barreled Questions
- Avoiding Leading Questions
- Avoiding One-Sided Questions
- Avoiding Questions with Implicit Assumptions
- Avoiding Complex Questions

Can the Respondent Answer the Question?

The respondent must have a meaningful basis for answering the question (Parasuraman, Grewal, Krishnan).

- Does the respondent have the skill or knowledge to answer the question?
- Can the respondent remember or calculate the answer?

Will the Respondent Answer the Question?

When a question deals with sensitive or embarrassing issues, the respondent may refuse to answer it (Parasuraman, Grewal, Krishnan).

- When respondents are unwilling to answer a question, they will typically:
  - Leave the question blank
  - Answer less than truthfully
  - Drop out of the research project
- Include such questions only when necessary.
- It may be useful to disguise the question.
- It may be useful to word the question in a less sensitive manner.
- It may be useful to reassure respondents of their anonymity.

Avoiding Double-Barreled Questions

slbr4.gif (4051 bytes)

Double-barreled questions are difficult to interpret
(Parasuraman, Grewal, Krishnan).

slbl3.gif (530 bytes) . . .and may be impossible to answer accurately.
 

slbl3.gif (530 bytes) Responses may be interpreted differently.
  slbl3.gif (530 bytes) A "no" means that neither professors nor administrators are believed to care about academic success.
  slbl3.gif (530 bytes) A "no" means that professors do, but administrators do not.
  slbl3.gif (530 bytes) A "no" means that administrators do, but professors do not.
slbl3.gif (530 bytes) Recompose the question as two separate questions, one for each issue.
 

     

Avoiding Leading Questions

slbr4.gif (4051 bytes)

Leading questions are also known as loaded questions
(Parasuraman, Grewal, Krishnan).

slbl3.gif (530 bytes) Leading questions direct a respondent to a particular answer, regardless of their true opinion.
slbl3.gif (530 bytes) Not all respondents are equally led; in fact, some respondents will be angered by the question, and the overall responses become less meaningful.
 

slbl3.gif (530 bytes) Consider instead . . .
 

   

Avoiding One-Sided Questions

slbr4.gif (4051 bytes)

A one-sided question presents only one aspect of the issue on which respondents' reactions are being sought (Parasuraman, Grewal, Krishnan).

slbl3.gif (530 bytes) One-sided questions may introduce bias into the data.
slbl3.gif (530 bytes) Acquiescence bias (yea-saying) is the bias resulting from a respondent's tendency to agree with whatever side is presented by one-sided questions. (Parasuraman, Grewal, Krishnan)
slbl3.gif (530 bytes) A split ballot technique usually involves two versions of the same questionnaire, with one version with questions presenting one side of the issues and the second version with questions presenting the other side. (Parasuraman, Grewal, Krishnan)
slbl3.gif (530 bytes) Unbalanced scales may lead to one-sided questions.
 

slbl3.gif (530 bytes) Rewording often helps.
 

   

Avoiding Questions with Implicit Assumptions

slbr4.gif (4051 bytes)

Responses can be greatly influenced by what the respondent thinks is being asked of them.

slbl3.gif (530 bytes) Questions with implicit assumptions do not provide, or imply, the same frame of reference to all respondents. (Parasuraman, Grewal, Krishnan)
slbl3.gif (530 bytes) Different meanings may lead to different answers.
 

slbl3.gif (530 bytes) A filter question qualifies respondents for a subsequent question or ensures that the question is within their realm of experience. (Parasuraman, Grewal, Krishnan)
 
slbl3.gif (530 bytes) Using categories may prove helpful in diminishing the assumption.
 

   

Avoiding Complex Questions

slbr4.gif (4051 bytes)

Words that a question writer may understand perfectly may be unfamiliar or sound complicated to the respondent
(Parasuraman, Grewal, Krishnan).

slbl3.gif (530 bytes) Use of jargon, particularly if the words have alternative meanings in "real life," makes questions complex.
slbl3.gif (530 bytes) A hard-to-answer question is also complex.  Complicated tasks, calculations, and long-term-memory demands make questions complex.
 

slbl3.gif (530 bytes) Write the questions to your audience.
slbl3.gif (530 bytes) For the general population, questions should be written using the simplest word possible.
slbl3.gif (530 bytes) Adult respondents can typically be considered intelligent, but uninformed.
   

Sequencing of Questions

slbr4.gif (4051 bytes)

Questions must be arranged in a logical sequence to minimize data errors and facilitate easy and smooth administration of the questionnaire (Parasuraman, Grewal, Krishnan).

 

slbl3.gif (530 bytes) Position of Demographic and Sensitive Questions
  slbl3.gif (530 bytes) As a rule of thumb, demographic-related and sensitive questions should be placed at the end of the questionnaire.  An exception to this might be a case where the question is a filter question.
  slbl3.gif (530 bytes) Classification data are useful in obtaining a profile of the respondent sample and in cross-classifying respondents to other questions that pertain directly to the study object (Parasuraman, Grewal, Krishnan).
  slbl3.gif (530 bytes) These questions may irritate some respondents who might then refuse to continue.
  slbl3.gif (530 bytes) Arrange the questions from simple to more complex.
slbl3.gif (530 bytes) Arrangement of Related Questions
  slbl3.gif (530 bytes) Questions can be clustered around related topics.
  slbl3.gif (530 bytes) Some questions should follow a meaningful sequence.
slbl3.gif (530 bytes) Funnel and Inverted-Funnel Sequences
  slbl3.gif (530 bytes) A funnel sequence begins with a very general question on a topic, gradually leading to a narrowly focused question on the same topic. (Parasuraman, Grewal, Krishnan)
  slbl3.gif (530 bytes) An inverted-funnel sequence begins with specific questions on a topic, gradually leading to more general questions on the same topic. (Parasuraman, Grewal, Krishnan)
  slbl3.gif (530 bytes) Funnel approaches are generally preferred, but some topics are more suited for inverted-funnels.
slbl3.gif (530 bytes) Skip Patterns
  slbl3.gif (530 bytes) Proper sequence allows for simple "skip patterns" where a set of questions may be skipped by a subset of respondents.
  slbl3.gif (530 bytes) Skip questions direct respondents to the next applicable question.
     

Questionnaire Appearance and Layout

slbr4.gif (4051 bytes)

The way a questionnaire looks and the way questions are laid out within it can influence the degree of respondent cooperation as well as the quality of the data collected (Parasuraman, Grewal, Krishnan).

 

slbl3.gif (530 bytes) Appearance
  slbl3.gif (530 bytes) The first page should provide vital information regarding who is conducting the research.
  slbl3.gif (530 bytes) The questionnaire must be free from error and grammatically perfect.
  slbl3.gif (530 bytes) The questionnaire must look professional, neat, attractive and uncluttered.  This can add a great deal to the questionnaire while adding little or nothing to the cost of administration.
  slbl3.gif (530 bytes) Professional printing on high quality paper is preferred, but can get expensive.
  slbl3.gif (530 bytes) The questionnaire should look as short as possible without over-crowding.  "White space" is important in paper and pencil questionnaires.
  slbl3.gif (530 bytes) Numbered questions may ease data processing, but may also make the questionnaire look longer.
  slbl3.gif (530 bytes) Carefully construct skip patterns and "go to" statements.
  slbl3.gif (530 bytes) Color coding can be useful.
  slbl3.gif (530 bytes) Clearly state the specific action the respondent is expected to do. (e.g. mark an X in the blank in front of the response that most closely . . .)
  slbl3.gif (530 bytes) Layout should be designed to avoid confusion on where and how to mark an answer.
  slbl3.gif (530 bytes) Appearance is even more important when there is no one there to encourage participation.
slbl3.gif (530 bytes) Pretesting
  slbl3.gif (530 bytes) The questionnaire pretest is vital because it shows how the questionnaire will perform under actual conditions. (Churchill and Brown)
  slbl3.gif (530 bytes) Pretesting is administering the questionnaire to a limited number of potential respondents and other individuals capable of pointing out design flaws. (Parasuraman, Grewal, Krishnan)
  slbl3.gif (530 bytes) A pretest is not a substitute for careful construction.
  slbl3.gif (530 bytes) Even the most carefully conducted pretest may not uncover all potential problems.
  slbl3.gif (530 bytes) Even if the questionnaire is to be administered in a mailed or emailed format, a face-to-face pretest may prove useful. Pretesting can also include a review by skilled researchers.  Regardless, the questionnaire must also be pretested with members of the target population, using the exact administration method that will be used for implementation.
  slbl3.gif (530 bytes) A pretest can refine time estimates.
  slbl3.gif (530 bytes) A pretest should include a debriefing.
  slbl3.gif (530 bytes) If substantial changes are made, the questionnaire should be pretested again.
  slbl3.gif (530 bytes) Pretest responses cannot be included in data for analysis.
     

Questionnaires for Computerized and Online Interviewing

slbr4.gif (4051 bytes)

A very important development in research has been the use of "smart" questionnaires provided by personal computers
(Hair, Bush and Ortinau).

 

slbl3.gif (530 bytes) Collecting Data Electronically
  slbl3.gif (530 bytes) In computerized interviewing, the questionnaire appears on a monitor and the responses are directly entered into computer memory. (Parasuraman, Grewal, Krishnan)
  slbl3.gif (530 bytes) In online interviewing, respondents selected from a database are invited to visit a website to respond to an electronic form of a survey. (Parasuraman, Grewal, Krishnan)
  slbl3.gif (530 bytes) Electronic data collection can cost more in terms of time and money with regard to survey creation and programming, but may ultimately cost less because of the time and money saved with online data entry and reduced dependence on paper copies.
slbl3.gif (530 bytes) Setting up an online survey
  slbl3.gif (530 bytes) An online survey will look different to different respondents because of default font (or font availability on the computer), size and type of monitor, color and resolution settings.
  slbl3.gif (530 bytes) Resolution settings can create problems for the respondent, who may not be able to see the entire questionnaire or even move around the screen without resetting their resolution.
  slbl3.gif (530 bytes) Limiting the characters per line can greatly help in the visual appeal of the survey across a wide range of computer conditions and settings.
  slbl3.gif (530 bytes) Seriously  - Avoid scrolling left to right.
  slbl3.gif (530 bytes) Consider providing questions screen-by-screen.
slbl3.gif (530 bytes) Randomizing response choices
  slbl3.gif (530 bytes) Survey software can be programmed to randomize the order in which various respondents see the choice options.
  slbl3.gif (530 bytes) Because the data are entered directly, the data entry problems created by altering response choice patterns are eliminated.
slbl3.gif (530 bytes) Checking for response consistency
  slbl3.gif (530 bytes) Survey software can be programmed to check responses for internal consistency against exactly the same, or similar, questions given to the respondent earlier.
  slbl3.gif (530 bytes) In some cases, the inconsistencies can be pointed out to the respondent, giving him/her a chance to correct the error, thereby improving accuracy (e.g., verifying that a respondent asked to "spend $100" across categories has allocated exactly $100; see the sketch after this list).
slbl3.gif (530 bytes) Incorporating complex skip patterns
  slbl3.gif (530 bytes) Skip patterns become seamless when programmed correctly.
  slbl3.gif (530 bytes) If-then statements direct the respondent to the next applicable question without the respondent having to search for it.
slbl3.gif (530 bytes) Personalization
  slbl3.gif (530 bytes) Key personal characteristics, such as name or answers to previous questions, can be collected, stored, and inserted at appropriate times within the later parts of the survey.
  slbl3.gif (530 bytes) Software can be used to shape the questionnaire's design with responses to key questions changing the entire look or focus of the survey. (Similar to skip patterns)
  slbl3.gif (530 bytes) Personalization can increase rapport.
  slbl3.gif (530 bytes) Increased rapport leads to a greater commitment to the research project.
slbl3.gif (530 bytes) Ability to draw questions from computer libraries
  slbl3.gif (530 bytes) Libraries of standard questions can be used to assist in the creation of surveys.
  slbl3.gif (530 bytes) Survey templates ease the development of layout.
slbl3.gif (530 bytes) Adding "new" response categories
  slbl3.gif (530 bytes) Software can be used to add a consistently written-in "other" response to the list of response categories once it has been given some set minimum number of times.
  slbl3.gif (530 bytes) The survey becomes progressive from respondent to respondent.
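
As a concrete illustration of the randomization, consistency-check, and skip-pattern points above, here is a minimal Python sketch. The question names, the brand list, and the $100 allocation rule are hypothetical; commercial survey platforms expose these features through their own branching and validation settings rather than through code like this.

import random

BRAND_OPTIONS = ["Brand A", "Brand B", "Brand C", "Other"]

def show_brand_question():
    # Randomize choice order for each respondent to reduce order bias,
    # keeping the "Other" category anchored at the end.
    choices = BRAND_OPTIONS[:-1]
    random.shuffle(choices)
    return choices + ["Other"]

def check_budget_allocation(amounts, total=100):
    # Consistency check: a respondent asked to "spend $100" across
    # categories should allocate exactly $100; flag the entry otherwise
    # so the respondent can correct it before moving on.
    return sum(amounts) == total

def next_question(answers):
    # Skip pattern as a simple if-then rule: only product owners see the
    # satisfaction question; everyone else skips ahead.
    if answers.get("owns_product") == "yes":
        return "q_satisfaction"
    return "q_purchase_intent"

print(show_brand_question())                  # e.g. ['Brand C', 'Brand A', 'Brand B', 'Other']
print(check_budget_allocation([40, 35, 25]))  # True: allocations sum to 100
print(next_question({"owns_product": "no"}))  # 'q_purchase_intent'
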
     

First Contact

slbr4.gif (4051 bytes)

First contacts make lasting impressions.

slbl3.gif (530 bytes) Designing Cover Letters for Mail and Online Surveys (Erdos)
  slbl3.gif (530 bytes) Personal Communication, including how selected
  slbl3.gif (530 bytes) Asking a favor with a note of urgency
  slbl3.gif (530 bytes) Importance of research, of the recipient, and of replies in general
  slbl3.gif (530 bytes) Importance of the replies even where the reader is not qualified to answer most questions
  slbl3.gif (530 bytes) How the recipient (potentially as a member of society as a whole) may benefit from the research
  slbl3.gif (530 bytes) The questionnaire can be answered easily and completed in a short amount of time. (Time estimates must be accurate.)
  slbl3.gif (530 bytes) Easy reply expectations (e.g. stamped reply envelope, easy submission steps)
  slbl3.gif (530 bytes) Anonymity and confidentiality
  slbl3.gif (530 bytes) Appreciation and an offer of aggregated results
  slbl3.gif (530 bytes) Information about the sender, the sender's company, the purpose of the research
  slbl3.gif (530 bytes) Optional incentive
slbl3.gif (530 bytes) Openers for Personal and Telephone Interviews
  slbl3.gif (530 bytes) Greeting
  slbl3.gif (530 bytes) Information about the sender, the sender's company, the purpose of the research
  slbl3.gif (530 bytes) The survey can be answered easily and completed in a short amount of time. (Time estimates must be accurate.)
  slbl3.gif (530 bytes) Ask permission to conduct interview
  slbl3.gif (530 bytes) The opener will not be as long as a cover letter, but the interviewer should be prepared to answer questions about information given above if asked.
       

Designing Observation Forms

slbr4.gif (4051 bytes)

The researcher needs to make very explicit decisions about what is to be observed and the categories and units that will be used to record this behavior (Churchill and Brown).

slbl3.gif (530 bytes) Clarity of instructions to observers
slbl3.gif (530 bytes) May or may not use a structured form
slbl3.gif (530 bytes) Structured forms:
  slbl3.gif (530 bytes) Specific observations
  slbl3.gif (530 bytes) Efficient recording
  slbl3.gif (530 bytes) Can be similar to a form used for telephone interviewing
  slbl3.gif (530 bytes) Avoid cluttered layout
  slbl3.gif (530 bytes) Avoid too many pages - one is best
       

2.4 Experimentation

A Review of Descriptive versus Experimental Research

slbr4.gif (4051 bytes)

All research practices require the manipulation or the measurement of variables (Hair, Bush and Ortinau).

slbl3.gif (530 bytes) An experiment is a procedure in which one (or sometimes more than one) independent variable (or cause) is systematically manipulated and data on the dependent variable (or effect) are gathered, while controlling for other variables that may influence the dependent variable. (Parasuraman, Grewal, Krishnan)
slbl3.gif (530 bytes) In experimentation the researcher manipulates variables to measure effect.
slbl3.gif (530 bytes) In descriptive research the researcher examines and describes phenomenon.
slbl3.gif (530 bytes) The two can be considered more of a continuum than a dichotomous choice. Descriptive research suggests causation; the greater the degree of control and manipulation, the greater the confidence in the suggested cause.
slbl3.gif (530 bytes) In practice complete control is rarely possible. (Parasuraman, Grewal, Krishnan)
       

Conditions for Inferring Causality

slbr4.gif (4051 bytes)

Hume argued that the inference of a causal relationship between unobservables is never logically justified.

slbl3.gif (530 bytes) Causation means that a change in one variable will produce a change in another (Aaker and Day).

slbl3.gif (530 bytes) Association measures, by themselves, do not demonstrate causation (Aaker and Day).
slbl3.gif (530 bytes) Remember, correlation does not imply causation; both variables may be responding to changes in an unobserved or unmeasured variable.
slbl3.gif (530 bytes) A spurious association is one where the association results from a third variable. Some examples include: the number of churches in a community tend to correlate well with the number of liquor stores and the amount of damage at a fire tends to correlate with the number of fire trucks that arrive at the location (Aaker and Day).
slbl3.gif (530 bytes) Interpretation of a relationship as causal hinges not on the statistical model used, but on the nature of the design employed (Maxwell and Delaney).
slbl3.gif (530 bytes) Conditions:

slbl3.gif (530 bytes) Temporal ordering of variables
slbl3.gif (530 bytes) Evidence of association
slbl3.gif (530 bytes) Control of other causal factors
slbl3.gif (530 bytes) All three factors must be met in order to establish causality.
slbl3.gif (530 bytes) Even in a controlled experimental setting, we may have difficulty being sure that all other potential causes are effectively controlled.
       

Laboratory Versus Field Experiments

slbr4.gif (4051 bytes)

The artificiality of the experimental process can be criticized as actually changing the behavior being tested (Davis and Cosenza).

slbl3.gif (530 bytes) A laboratory experiment is a research study conducted in a contrived setting in which the effect of all, or nearly all, influential but irrelevant independent variables is kept to a minimum. (Parasuraman, Grewal, Krishnan)
slbl3.gif (530 bytes) A field experiment is a research study conducted in a natural setting in which one or more independent variables are manipulated by the experimenter under conditions controlled as carefully as the situation will permit. (Parasuraman, Grewal, Krishnan)
slbl3.gif (530 bytes) Deciding Which Type of Experiment to Use (Parasuraman, Grewal, Krishnan)
slbl3.gif (530 bytes) Validity
slbl3.gif (530 bytes) Time - Field experimentation is more time consuming than laboratory.
slbl3.gif (530 bytes) Cost - Field experimentation is more expensive than laboratory.
slbl3.gif (530 bytes) Exposure to competition - An idea tested in the field may be exposed to the competition.
slbl3.gif (530 bytes) Nature of manipulation - Some things simply cannot be tested adequately in a laboratory.
       

Threats to Validity

slbr4.gif (4051 bytes)

Validity means essentially truth or correctness, a correspondence between a proposition describing how things work in the world and how they really work (Maxwell and Delaney).

slbl3.gif (530 bytes) Internal Validity
slbl3.gif (530 bytes) Internal validity is the extent to which observed results are due solely to the experimental manipulation.
slbl3.gif (530 bytes) Is there a causal relationship between the variables?
slbl3.gif (530 bytes) Tighter control positively impacts internal validity. The introduction of uncontrolled variables negatively impacts internal validity. Therefore, in a general sense, laboratory experiments tend to have better internal validity.

slbl3.gif (530 bytes) Internal validity threats are typically "third" variable problems (Maxwell and Delaney).
slbl3.gif (530 bytes) History effects are specific external events or occurrences during an experiment that are likely to affect the dependent variable. (Parasuraman, Grewal, Krishnan)
slbl3.gif (530 bytes) The maturation effect is the effect of physiological or physical changes in the units that occur with the passage of time on the dependent variable being measured. (Parasuraman, Grewal, Krishnan)
slbl3.gif (530 bytes) The pretesting effect occurs when responses given during a later measurement are influenced by those given during a previous measurement. (Parasuraman, Grewal, Krishnan)
slbl3.gif (530 bytes) The instrument variation effect is a bias that relates to differences between pretest and posttest measurements owing to changes in the instruments (questionnaires) and/or procedures used to measure the dependent variable. (Parasuraman, Grewal, Krishnan)
slbl3.gif (530 bytes) The selection effect occurs when multiple groups participating in an experiment differ on characteristics that have a bearing on the dependent variable. (Parasuraman, Grewal, Krishnan)
slbl3.gif (530 bytes) The mortality effect occurs when certain participating units drop out of an experiment and, as a result, the set of units completing the experiment significantly differs from the original set of units. (Parasuraman, Grewal, Krishnan)
slbl3.gif (530 bytes) External Validity
slbl3.gif (530 bytes) External validity is the extent to which observed results are likely to hold beyond the experimental setting.
slbl3.gif (530 bytes) External validity is a measure of the stability of results across other contexts.
slbl3.gif (530 bytes) Can I generalize to the population or to other settings?

slbl3.gif (530 bytes) Internal validity is a necessary but not sufficient condition for external validity. (Parasuraman, Grewal, Krishnan)
slbl3.gif (530 bytes) Contrived settings negatively impact external validity. Therefore, in general, field experiments tend to have better external validity.
slbl3.gif (530 bytes) External validity is threatened by bias.
slbl3.gif (530 bytes) In reactive bias, participants exhibit abnormal or unusual behavior simply because they are participating in an experiment.
slbl3.gif (530 bytes) Pretest-manipulation interaction bias is a special form of reactive bias that is unique to experiments relying on premeasurement of consumers before they are exposed to the experimental manipulation; it arises when the premeasurement increases or decreases respondents' sensitivity to the experimental manipulation.
slbl3.gif (530 bytes) Nonrepresentative sample bias occurs when the units participating in an experiment are not representative of the larger body of units to which the experimental results are to be generalized.
         

Experimental Design

slbr4.gif (4051 bytes)

A knowledge of alternative experimental designs can lead to a more effective experiment (Aaker and Day).

slbl3.gif (530 bytes) Pre-Experimental Design
slbl3.gif (530 bytes) Pre-Experimental Designs exert little or no control over the influence of extraneous factors. A pre-experiment is not really an experiment. (Parasuraman, Grewal, Krishnan)
slbl3.gif (530 bytes) One-Group, After-Only Design (also called a One-Shot Study)
slbl3.gif (530 bytes) One group, Before and After Design (also called a One-Group, Pretest-Posttest)
slbl3.gif (530 bytes) Two Group, Ex Post Facto Design (also called a Static Group Comparison)
slbl3.gif (530 bytes) True Experimental Design
slbl3.gif (530 bytes) True Experimental Designs have built-in safeguards for controlling all threats to internal and external validity. (Parasuraman, Grewal, Krishnan)
slbl3.gif (530 bytes) True experimental designs use random assignment. Random assignment distributes the sample units chosen for a study to various groups on a strictly objective basis so that the group compositions can be considered equivalent. (Parasuraman, Grewal, Krishnan)
slbl3.gif (530 bytes) True experimental designs use one or more control groups.
slbl3.gif (530 bytes) Two-Group, Before-After Design (also called Pretest-Posttest, Control Group)
slbl3.gif (530 bytes) Posttest Only, Control Group
slbl3.gif (530 bytes) Solomon Four Group
slbl3.gif (530 bytes) Quasi-experimental Design
slbl3.gif (530 bytes) Quasi-experimental design lies between pre-experimental and true experimental design. These methods are used when we can control some variables, but cannot utilize randomization to groups. (Hair, Bush and Ortinau)
slbl3.gif (530 bytes) Nonequivalent Control Group
slbl3.gif (530 bytes) Separate-Sample Pretest Posttest (sometimes called Split Subjects Pretest Posttest)
         

One-Group, After-Only Design

slbr4.gif (4051 bytes)

A one-group after-only design is also called a One-Shot Study.

slbl3.gif (530 bytes) This design fails to control any extraneous variables.
slbl3.gif (530 bytes) There are no between group comparisons.
slbl3.gif (530 bytes) There is no premeasurement - a measure of how conditions would have been if there was no manipulation. There is no benchmark.
   

One Group, Before and After Design

slbr4.gif (4051 bytes)

A one group, before and after design is also called a One-Group Pretest-Posttest design.

slbl3.gif (530 bytes) There is a pretest measure of the conditions prior to manipulation, which provides a benchmark against which the posttest measure can be compared.
slbl3.gif (530 bytes) This design fails to control any extraneous variables.
slbl3.gif (530 bytes) There are no between group comparisons.
   

Two Group, Ex Post Facto Design

slbr4.gif (4051 bytes)

A two group, ex post facto design is also called a Static Group Comparison.

 
slbl3.gif (530 bytes) This is a two group design including an experimental group and a control group. The experimental group is exposed to the manipulation, the control group is not.
slbl3.gif (530 bytes) There is no premeasurement - a measure of how conditions would have been if there was no manipulation. Both measures are taken after the manipulation occurred. There is no benchmark.
slbl3.gif (530 bytes) There is no random assignment to treatment groups.
 

Two-Group, Before-After Design

slbr4.gif (4051 bytes)

A Two-Group, Before-After Design is also called a Pretest-Posttest, Control Group.

Notation: R = random assignment; O = observation (measurement); X = exposure to the experimental manipulation.

slbl3.gif (530 bytes) This is a two group design including an experimental group and a control group, where assignment to groups occurs through randomization. The experimental group is exposed to the manipulation; the control group is not.
slbl3.gif (530 bytes) There is a pretest measure of the conditions prior to manipulation, which provides a benchmark against which the posttest measure can be compared.
slbl3.gif (530 bytes) This design accounts for many threats to validity, but not the mortality effect, reactive bias, pretest-manipulation interaction bias, or nonrepresentative sample bias. (A minimal illustration of the design follows this list.)
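
Using the notation above (R = random assignment, O = observation, X = manipulation), this design is commonly diagrammed as experimental group R O1 X O2 and control group R O3 O4, and the treatment effect is typically estimated as (O2 - O1) - (O4 - O3). A minimal Python illustration with hypothetical group means:

# Hypothetical pre/post group means for a Two-Group, Before-After design.
# Experimental group: R  O1  X  O2        Control group: R  O3  O4
O1, O2 = 3.1, 4.4   # experimental group: pretest, posttest
O3, O4 = 3.0, 3.3   # control group: pretest, posttest

# The control group's change estimates what would have happened without
# the manipulation (history, maturation, pretesting effects), so it is
# subtracted out.
treatment_effect = (O2 - O1) - (O4 - O3)
print(round(treatment_effect, 2))   # 1.0
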
   

Posttest Only, Control Group

slbr4.gif (4051 bytes)

The Posttest Only, Control Group design looks a great deal like the Two Group, Ex Post Facto design, with the addition of random assignment.

R - Random assignment

slbl3.gif (530 bytes) Posttest Only, Control Group design does not use a pretest measure to establish a benchmark.
slbl3.gif (530 bytes) This is a two group design including an experimental group and a control group, where assignment to groups occurs through randomization. The experimental group is exposed to the manipulation; the control group is not.
slbl3.gif (530 bytes) There is no premeasurement - a measure of how conditions would have been if there was no manipulation. Both measures are taken after the manipulation occurred. There is no benchmark.
slbl3.gif (530 bytes) Because there is no pretest, this design avoids the threats associated with the pretesting effect and pretest-manipulation interaction bias.
   

Solomon Four Group

slbr4.gif (4051 bytes)

This design provides the most comprehensive comparisons, but it is so complicated that many researchers do not use it.

R - Random assignment

slbl3.gif (530 bytes) This is a four group design including two experimental groups and two control groups, where assignment to groups occurs through randomization. The experimental groups are exposed to the manipulation; the control groups are not.
slbl3.gif (530 bytes) There is a pretest measure of the conditions prior to manipulation, which provides a benchmark against which the posttest measure can be compared. But there are also groups that are not exposed to the pretest, which avoids pretesting bias.
slbl3.gif (530 bytes) Subjects in the first two groups are exposed to the pretest; subjects in the other two groups are not.
slbl3.gif (530 bytes) Comparisons are made between (O2 - O1); (O2 - O4); (O5 - O6); and (O5 - O3), as illustrated in the sketch below.
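
The comparisons above assume the conventional Solomon layout (Group 1: R O1 X O2; Group 2: R O3 O4; Group 3: R X O5; Group 4: R O6), which is reconstructed here because the original design diagram is not reproduced. A minimal Python sketch with hypothetical group means:

# Hypothetical group means for a Solomon Four-Group design.
# Group 1 (pretested, manipulated):      R  O1  X  O2
# Group 2 (pretested, control):          R  O3      O4
# Group 3 (not pretested, manipulated):  R      X  O5
# Group 4 (not pretested, control):      R          O6
O1, O2 = 3.0, 4.2
O3, O4 = 3.1, 3.3
O5, O6 = 4.0, 3.2

comparisons = {
    "O2 - O1": O2 - O1,   # change within the pretested, manipulated group
    "O2 - O4": O2 - O4,   # manipulated vs. control, both pretested
    "O5 - O6": O5 - O6,   # manipulated vs. control, neither pretested
    "O5 - O3": O5 - O3,   # unpretested posttest vs. the control pretest benchmark
}
for name, value in comparisons.items():
    print(name, "=", round(value, 2))
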
   

Nonequivalent Control Group

slbr4.gif (4051 bytes)

The Nonequivalent Control Group experiment looks a great deal like the Two-Group, Before-After Design, but without random assignment.

slbl3.gif (530 bytes) This is a two group design including an experimental group and a control group, but assignment to groups was not through a process of randomization.
slbl3.gif (530 bytes) The experimental group is exposed to the manipulation; the control group is not.
slbl3.gif (530 bytes) Efforts are made to match the groups.
slbl3.gif (530 bytes) Matching forms groups in such a way that the composition of units is similar across groups with respect to one or more specific characteristics. (Parasuraman, Grewal, Krishnan)
slbl3.gif (530 bytes) Matching is not random assignment. (A simple illustration of matching follows this list.)
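
A minimal Python sketch of matching, pairing each experimental unit with the closest available control unit on a single characteristic (age, chosen purely for illustration); real matching procedures typically use several characteristics and more careful distance measures.

# Hypothetical illustration of matching for a nonequivalent control group:
# pair each experimental-group unit with the closest remaining control-group
# unit on one characteristic (age). This is not random assignment; the
# groups are only made comparable on the matched characteristic.
experimental = [("E1", 24), ("E2", 37), ("E3", 52)]
control_pool = [("C1", 23), ("C2", 31), ("C3", 39), ("C4", 55), ("C5", 60)]

matched_pairs = []
available = list(control_pool)
for unit, age in experimental:
    closest = min(available, key=lambda c: abs(c[1] - age))
    available.remove(closest)           # each control unit is matched once
    matched_pairs.append((unit, closest[0]))

print(matched_pairs)   # [('E1', 'C1'), ('E2', 'C3'), ('E3', 'C4')]
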
   

Separate-Sample Pretest Posttest

slbr4.gif (4051 bytes)

A separate-sample pretest posttest is sometimes called a Split Subjects Pretest Posttest.

slbl3.gif (530 bytes) Observation 2 is taken on a separate group of respondents.
slbl3.gif (530 bytes) There is a pretest measure of the conditions prior to manipulation, which provides a benchmark against which the posttest measure can be compared.
slbl3.gif (530 bytes) There is no random assignment to treatment groups.
slbl3.gif (530 bytes) Because the posttest is taken on a group that was not pretested, this design avoids the threats associated with the pretesting effect and pretest-manipulation interaction bias.
   

Managerial Considerations for Research Design

slbr4.gif (4051 bytes)

The complexity of research design makes it a difficult topic for most managers (Davis and Cosenza).

slbl3.gif (530 bytes) There is no single, correct design for a research problem.
slbl3.gif (530 bytes) Design research to answer the research problem.
slbl3.gif (530 bytes) All research designs represent a compromise.
slbl3.gif (530 bytes) A research design is not a framework to be followed blindly and without deviation. (Davis and Cosenza)
     

slbr4.gif (4051 bytes)

Copyright Dr. Nancy D. Albers-Miller, All Rights Reserved