Conjoint analysis is an evolving tool set encompassing a host of choice formats, panels, question types, data formats and analytical methods, which can appear so different that a casual observer may not recognize them as related conjoint analysis methods. In fact, many individuals are aware of only a few conjoint analysis methods and are probably not aware of the full tool set. The objective of this article is to highlight some of the more common conjoint analysis methods, as noted in Figure 1.
Figure 1: Some well known conjoint analysis methods
The typical Six Sigma Design of Experiments (DOE) environment consists of screening, characterization and optimization. In Six Sigma, screening DOEs are highly fractionated (lower resolution), while optimization DOEs are full or nearly full factorial; conjoint analysis likewise uses different tools for screening, characterization and optimization. This article outlines some of the more common conjoint analysis methods and where and how they are used. Many conjoint analysis methods require specialized software; only a few, such as best/worst or traditional conjoint, can be analyzed with common statistical software (e.g., Minitab or JMP).
Adaptive Conjoint Analysis (ACA)
ACA requires special software or a Web application that collects and analyzes the interviewees' responses. Responses are analyzed in real time, since stimuli selection is modified based on previous responses. The ACA program first determines where the respondent has the strongest preferences within the design space and then produces profiles within that part of the design space (the DOE fraction) where the most valuable information can be obtained from the respondent. The program avoids stimuli for which the respondent has no preference, since that part of the design space would yield minimal effect data. The ACA analysis combines data from a large respondent sample, so ACA can handle up to 30 attributes. The analysis determines each attribute's significance level and provides a product feature model.
The ACA program employs a multi-step analysis process. It starts by asking some general questions, as shown in Figure 2, to gauge the intensity of the respondent's preferences. Based on the information obtained in these first few choice cards, additional paired choice cards are generated (a typical example is shown in Figure 3). Respondents often compare this method to a computer game.
Figure 2: Example of a potential attribute / respondent preference strength card
Figure 3: Example of a paired choice card
Choice Based Conjoint (CBC) Analysis
The CBC method simulates the consumer purchasing process: respondents are shown choices in full profile. The choices are not rated or ranked; the respondent is simply asked to indicate which choice he or she would purchase. Note that one of the choices is not to purchase at all. This conjoint analysis method also requires special software, and several companies specialize in this technique.
Figure 4: CBC card based on the limited TV example
Partial Profile Conjoint Analysis Methods
Several partial profile conjoint methods exist, such as Partial Profile Choice Experiments (PPCE) and partial profile Choice Based Conjoint (CBC). In these methods, each choice card carries only a partial profile (a profile with many attributes removed), so the amount of data collected per card is quite limited. The partial profile concept is like a fractional subset of a fractional Design of Experiments. Since these methods require specialized software, more information can be found through a Web search for practitioners (companies) who provide such a service.
Traditional Conjoint Analysis
Traditional full profile conjoint analysis is useful to measure and quantify up to about six attributes. Since respondents see the options in full profile, this greatly limits the experimental design. The method is similar to a fractional Design of Experiments. Respondents are requested to rank or rate a series of choice cards, typically 10 to 20, with a maximum of about 30.
The first article in this series (The Mythical Transactional Business Process Design of Experiments) highlights this method in detail.
Best/Worst (B/W) Conjoint Analysis
B/W analysis is covered here in more detail, since most common statistical packages can perform both the graphical and the analytical portions. B/W is unlike any other conjoint analysis method: it fills a niche where analysts need to identify a choice within a profile, whereas other conjoint analysis methods require a choice between profiles. For B/W, a respondent chooses the most liked (best) and the least liked (worst) attribute level from a full profile card. For the abbreviated television example, a typical card and question would look like the following:
Typically a B/W study uses a fractional design, since a full factorial Design of Experiments would quickly exceed the maximum number of questions a respondent will tolerate (about 30, with 10 to 20 cards preferred). A good design should have at least 1.3 to 1.5 times the minimum number of design runs, so there are enough independent data points for an acceptable error term. Also, prohibited combinations must be removed from the design. A prohibited combination is normally a brand-feature or brand-model combination that cannot exist (e.g., a Honda Prius; Prius is a Toyota brand name that cannot be obtained from Honda).
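The sizing rule of thumb above can be sketched in a few lines of code. The attribute and level counts below are illustrative assumptions, not the article's actual television design:

```python
# Rough sizing for a B/W fractional design (illustrative attributes/levels).
# Minimum runs for a main-effects model: 1 + sum of (levels - 1) per attribute.
attribute_levels = {"brand": 3, "size": 3, "resolution": 2, "price": 3}

min_runs = 1 + sum(n - 1 for n in attribute_levels.values())

# Rule of thumb from the text: 1.3x to 1.5x the minimum for an error estimate,
# kept within the ~10-20 card comfort zone where possible.
recommended = (round(min_runs * 1.3), round(min_runs * 1.5))

print(min_runs)      # minimum runs for a main-effects model
print(recommended)   # (low, high) recommended card counts
```

For this assumed design, the main-effects minimum is 8 runs, so 10 to 12 cards would satisfy the 1.3x-1.5x guideline while staying inside the preferred 10-to-20-card range.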
Six Sigma data analysis is often built around practical, graphical and analytical (PGA) analysis. A simple graphical analysis of B/W data is to plot the net total score (each worst response counts as -1 and each best response as +1) for each attribute level, as shown in Figure 5. The figure shows graphically that television size is important (bigger is better) and that resolution is important (higher [1080i] is better than lower [720p]); looking across attributes, low resolution is more of a negative than the largest size is a positive.
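The net-score tally behind a Figure 5 style chart can be sketched as follows. The respondent picks below are made-up illustrations, not data from the article:

```python
from collections import Counter

# Each B/W response names the best-liked and worst-liked attribute level
# on a card. Net score per level: +1 for each "best" pick, -1 for each "worst".
responses = [
    {"best": "52-inch", "worst": "720p"},
    {"best": "1080i",   "worst": "32-inch"},
    {"best": "52-inch", "worst": "720p"},
]

net = Counter()
for r in responses:
    net[r["best"]] += 1
    net[r["worst"]] -= 1

# net now holds the per-level totals that would be plotted by attribute
print(dict(net))
```

Plotting these net totals grouped by attribute (size, resolution, etc.) reproduces the kind of bar chart shown in Figure 5.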
Figure 5: Graphical Representation of the Data from a B/W DOE
The next step in PGA is the analytical analysis. B/W results can be analyzed with regression. To convert the B/W responses, a data set is built from each card (i.e., code the data: best = +1, worst = -1, not selected = 0). The attributes and levels need to be turned into indicator (dummy-coded) factors. These data can then be analyzed with ordinal logistic regression; the logit function often provides the best fit. Conjoint analysis results generally seem noisier than typical Design of Experiments results, so sample sizes may have to be increased to get an acceptable signal-to-noise ratio. Since not all respondents are alike, analyzing the data by logical subgroups also increases the signal-to-noise ratio. Figure 6 contains a typical regression analysis with some non-significant attribute-level combinations removed.
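The coding step described above (best = +1, worst = -1, not selected = 0, plus dummy-coded levels) can be sketched as below. The card contents and picks are illustrative assumptions, and the ordinal logistic fit itself is left to a statistics package:

```python
# Build a regression-ready data set from B/W cards: one row per attribute
# level shown on a card, with response y in {+1, -1, 0} and each level
# dummy-coded as a 0/1 indicator column. Illustrative data only.
cards = [
    {"levels": {"size": "52-inch", "resolution": "1080i"},
     "best": "52-inch", "worst": "1080i"},
    {"levels": {"size": "32-inch", "resolution": "720p"},
     "best": "32-inch", "worst": "720p"},
]

all_levels = sorted({lvl for c in cards for lvl in c["levels"].values()})

rows = []
for card in cards:
    for attr, level in card["levels"].items():
        y = 1 if level == card["best"] else -1 if level == card["worst"] else 0
        indicators = {lvl: int(lvl == level) for lvl in all_levels}
        rows.append({"y": y, **indicators})

# rows can now be fed to an ordinal logistic regression routine
# (e.g., statsmodels' OrderedModel) for the analytical step.
for row in rows:
    print(row)
```

In practice one level per attribute is usually dropped as the reference category before fitting, so the indicator columns are not collinear.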
Figure 6: Regression Analysis for a B/W DOE