Download "Top 10 Mistakes in Market Research"


The Inside Scoop on the Birth of the JD Power CSI

We all know about the JD Power CSI – you can’t shop for a car or visit a bank without seeing it. It is deployed across many industries, worldwide. Its influence is enormous, and rightly so: the guidance the CSI provides enables organizations to allocate their improvement efforts based on Voice of the Customer science. The payback to those who use it wisely is huge. But wait – where did all this come from? Who created this juggernaut of a tool? I’ve invited Mark Rees, the statistician who created the JD Power CSI, to share the never-before-told inside scoop with us – Enjoy! Julia

Mark Rees:  Sometimes the phrase “right place at the right time” really does apply. I was lucky enough to have such an experience early in my career with J.D. Power and Associates. In the mid-1980s I was working as a Project Director there, and one of our primary self-funded projects was a customer satisfaction study for the automotive industry. J.D. (Dave) Power developed a key analytical output he labeled the J.D. Power Customer Satisfaction Index (CSI).

At that time, the J.D. Power CSI was a weighted average of the attribute ratings grouped by areas such as sales, service and product.  Dave subjectively created the weights based on his years of experience in the auto industry.

When we presented our findings to our auto clients, it seemed we spent more time arguing about the weights than about the results. This was particularly true for the domestic car companies, whose customers weren’t very satisfied at the time.

The controversy all came to a head one day when a VP at Ford wrote Dave a memo titled “The J.D. Power Whatchamacallit”. He refused to call it an Index and spent five pages disparaging it. He ended the memo with a copy of the directory of Southern California statisticians who were members of the ASA (American Statistical Association) and recommended that Dave contact someone on the list to add some “integrity” to the metric.

Dave came over to my cubicle (it wasn’t very far as there were only five of us at the time) with the directory and asked “Is this you?”  It was, as I had joined the ASA a year earlier.  I was truly in the right place at the right time.

The Second Generation J.D. Power CSI – How it Came to Be

The first step in designing the Index was to identify the criteria it needed to satisfy.  These criteria apply to any analysis of market research data.  They are:

  1. Statistically valid approach
  2. Identifies and prioritizes strengths / opportunities
  3. Provides actionable next steps and targets
  4. Understandable by users with all levels of statistical background

My initial thought was to simply run a multiple regression of the attribute ratings against a summary rating such as Overall Satisfaction. It quickly became clear that we had a big problem with multicollinearity – a situation where the predictor variables are highly correlated with one another. For example, when we ask a respondent to rate a service person on courtesy and friendliness, we often find that people who give a high rating on one of these also give a high rating on the other, and vice versa.

We may think we are getting evaluations on two separate dimensions, but we likely aren’t. There is a halo effect across the two. The problem is that regression sees that halo effect in the data, and the resulting weights are meaningless: they essentially measure the impact of the combined dimensions rather than each one individually. If only one of these attributes were included, it would have a much higher weight. So, multicollinearity needs to be addressed in order to create a strong, valid index.
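To make the problem concrete, here is a minimal Python sketch (my own illustration, not the original J.D. Power analysis) that simulates a halo effect between courtesy and friendliness ratings and shows how the regression weight gets split once both near-duplicate attributes enter the model.

```python
# Minimal sketch (not the original J.D. Power analysis): simulate the halo
# effect between "courtesy" and "friendliness" and watch the regression
# weights lose their meaning when both near-duplicate ratings enter the model.
import numpy as np

rng = np.random.default_rng(0)
n = 500

halo = rng.normal(7, 1.5, n)                  # shared impression of the service person
courtesy = halo + rng.normal(0, 0.3, n)       # two ratings driven by the same halo
friendliness = halo + rng.normal(0, 0.3, n)
overall = 0.8 * halo + rng.normal(0, 0.5, n)  # overall satisfaction depends on the halo

def ols_slopes(X, y):
    """Ordinary least squares slopes (intercept fitted, then dropped)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0][1:]

print("courtesy alone:        ", ols_slopes(courtesy.reshape(-1, 1), overall).round(2))
print("both collinear ratings:", ols_slopes(np.column_stack([courtesy, friendliness]),
                                            overall).round(2))
# With both predictors in the model, the single underlying effect is split
# between the two ratings, so neither coefficient measures that attribute's
# own importance.
```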

The approach I selected was one that was very common in the social sciences but hadn’t yet been applied often in market research: factor analysis.

Factor analysis sifts through the data and identifies groups of highly correlated variables. Each group is called a factor. The output indicates how many of these groups exist, which variables are contained in each group, and what each variable’s contribution (weight) is in defining the group. It isn’t simple: there are many decisions the statistician makes during the analysis. Developing a robust and valid factor analysis is part art and part science. I’m glad to discuss it should you have an interest.

Once we have the factors, we need to identify each factor’s contribution in predicting overall satisfaction. The next step in factor analysis is to transform (rotate) the factors so they are completely uncorrelated (orthogonal), which makes the multicollinearity problem disappear. Now we can confidently apply regression analysis, and each factor’s coefficient can be directly interpreted as its “importance” weight.
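Here is a hedged sketch of that two-step workflow using scikit-learn’s factor analysis with a varimax rotation; the attribute names, the two-factor solution, and the simulated overall-satisfaction rating are assumptions made for illustration, not the original study design.

```python
# Illustrative sketch only: factor analysis on made-up attribute ratings, then a
# regression of overall satisfaction on the rotated (orthogonal) factor scores.
# The varimax rotation option requires scikit-learn >= 0.24.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 1000
service = rng.normal(0, 1, n)   # latent "service" impression
product = rng.normal(0, 1, n)   # latent "product" impression

ratings = pd.DataFrame({
    "courtesy":     service + rng.normal(0, 0.4, n),
    "friendliness": service + rng.normal(0, 0.4, n),
    "wait_time":    service + rng.normal(0, 0.6, n),
    "reliability":  product + rng.normal(0, 0.4, n),
    "comfort":      product + rng.normal(0, 0.5, n),
})

# Step 1: find the groups of correlated attributes (the factors).
fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
scores = fa.fit_transform(ratings)          # each respondent's score on each factor
loadings = pd.DataFrame(fa.components_.T, index=ratings.columns,
                        columns=["factor_1", "factor_2"])
print(loadings.round(2))   # service attributes load on one factor, product on the other

# Step 2: regress overall satisfaction on the orthogonal factor scores; each
# coefficient is that factor's "importance" weight (a factor's sign is arbitrary).
overall = 0.6 * service + 0.3 * product + rng.normal(0, 0.5, n)
X = np.column_stack([np.ones(n), scores])
weights = np.linalg.lstsq(X, overall, rcond=None)[0][1:]
print("importance weights:", weights.round(2))
```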

[white paper image 1]

The final step is to apply the weights to the attributes and factors, calculate company-level scores, and transform the data so that it is centered at 100 with enough spread to detect statistically significant differences. The result could legitimately be labeled an “Index”. I was comfortable with the analysis and results – but would it hold up to scrutiny? We wanted outside, independent validation.
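The exact transform isn’t spelled out above, but centering an index at 100 can be as simple as a linear rescaling of the weighted company scores; in the sketch below, the 10-point spread is an assumption, not the actual J.D. Power choice.

```python
# Hypothetical index scaling: re-center weighted company scores at 100 with a
# chosen spread (the 10-point standard deviation here is an assumption).
import numpy as np

raw = np.array([7.8, 7.2, 8.1, 6.9, 7.5])   # made-up weighted scores for five companies
index = 100 + 10 * (raw - raw.mean()) / raw.std()
print(index.round(1))                        # companies land above or below 100
```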

A description of the analytical approach, along with the underlying data set, was sent to three academics for review: the Chairman of the Statistics Department at the University of Michigan (for the domestic auto companies in Detroit), a Marketing Professor at USC (for the LA-based Japanese auto companies), and my major professor at Iowa State. All three signed off on the approach, and we were ready to go.

Dave Power and I spent the next six weeks visiting all of the auto manufacturers and distributors to present our new methodology.  Many of them (particularly the Detroit companies) brought in outside consultants.  In the end, they all signed off on our approach.

The J.D. Power CSI methodology was born.

Nearly 30 years later, J.D. Power still uses virtually the same approach across all of the industries they measure, worldwide.

How to Use the CSI Data to Guide Improvement

For any quantitative market research analysis to be of value, a valid statistical approach is required. But this is “necessary but not sufficient”: the analyst also needs to provide actionable next steps and targets in order to improve customers’ level of satisfaction.

We start by creating “weighted gaps”. The weights are the importance weights derived earlier. The gaps are the distances from a goal, which may vary depending on the current level of satisfaction. Say, for example, we are looking at the auto industry: CSI is calculated for each make (e.g., Honda) and 25 makes are ranked. If you were a make scoring below the industry average, a reasonable target might be the industry average. Similarly, if you performed above average, your target might be to match the score of the highest-performing make.

Calculate the differences (gaps) between your make and the target for each factor, and then multiply the results by the respective factor weight. These weighted gaps are rank-ordered, and the result is a high-level priority plan that incorporates both performance and importance.
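A minimal sketch of that calculation for a below-average make; apart from “Service and Repair”, which appears in the example that follows, the factor names, scores, targets, and weights are all made up.

```python
# Hypothetical weighted-gap calculation for a below-average make: the gap to the
# target (here, the industry average) on each factor times that factor's
# importance weight, rank-ordered into a priority list. All numbers are made up.
import pandas as pd

factors = pd.DataFrame({
    "your_make": [76, 82, 79, 80],
    "target":    [83, 84, 82, 81],     # industry-average scores
    "weight":    [0.35, 0.25, 0.30, 0.10],
}, index=["Service and Repair", "Sales Experience", "Product Quality", "Cost of Ownership"])

factors["weighted_gap"] = (factors["target"] - factors["your_make"]) * factors["weight"]
print(factors["weighted_gap"].sort_values(ascending=False).round(2))
# The factor at the top of the list is where improvement moves CSI the most.
```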

Next, repeat the process at the attribute level. Look at the attributes in the factors that have the largest weighted gaps. This provides more detail on which areas are most impactful in improving CSI. Continuing with the automotive example: analysis revealed that the largest weighted factor gap was in “Service and Repair”. Further, the attribute within that factor with the largest weighted gap was the customer’s evaluation of “Length of Time to Make an Appointment”. This provides a specific area of focus for improvement.
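The same arithmetic works one level down, inside the “Service and Repair” factor; apart from the appointment-time attribute named above, the attribute names, scores, and weights below are hypothetical.

```python
# Attribute-level weighted gaps within the "Service and Repair" factor. All
# numbers, and every attribute name other than the appointment-time item, are
# hypothetical.
import pandas as pd

attrs = pd.DataFrame({
    "your_make": [7.0, 7.8, 7.6],
    "target":    [8.0, 8.1, 7.9],
    "weight":    [0.45, 0.30, 0.25],
}, index=["Length of Time to Make an Appointment",
          "Work Done Right the First Time",
          "Service Advisor Courtesy"])

attrs["weighted_gap"] = (attrs["target"] - attrs["your_make"]) * attrs["weight"]
print(attrs["weighted_gap"].sort_values(ascending=False).round(2))
# The appointment-time attribute surfaces as the biggest single opportunity,
# matching the example in the text.
```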

Deconstructing the CSI Index for Analysis

[white paper image 2: the CSI Index deconstructed into factors, attributes, and diagnostics]

We now know that we need to look more closely at the wait time between calling for a service appointment and bringing the car in. This usually requires another research study, because what we don’t know yet is, from the customer’s perspective, how much time is too much time. We can derive a target level for this specific item by analyzing what are called “diagnostics” in the graphic above. These are also called key performance indicators (KPIs) or critical-to-quality measures (CTQs).

An example of a diagnostic would be to ask customers how long they had to wait for their appointment (along with satisfaction questions). A simple graph of the diagnostic plotted against satisfaction with the appropriate attribute will show us where there is a significant drop-off in satisfaction. The example below shows the drop-off after 3 days. That is now the target: don’t let a customer wait more than 3 days for an appointment.

[white paper image 3: satisfaction plotted against appointment wait time, showing the drop-off after 3 days]
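As a rough illustration of reading that drop-off from the raw diagnostic data, the sketch below groups simulated satisfaction ratings by days waited; the 3-day break is built into the simulated data to mirror the example, not derived from any real study.

```python
# Illustration only: mean satisfaction by days waited for a service appointment,
# with a drop-off after 3 days built into the simulated data to mirror the text.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
days = rng.integers(0, 8, 2000)              # diagnostic: days waited for an appointment
sat = np.where(days <= 3,
               rng.normal(8.5, 0.8, 2000),   # satisfied when seen within 3 days
               rng.normal(6.5, 1.0, 2000))   # noticeably lower beyond 3 days

summary = pd.DataFrame({"days": days, "sat": sat}).groupby("days")["sat"].mean()
print(summary.round(2))
# The largest one-day drop in mean satisfaction marks the target: keep the wait
# for an appointment to 3 days or less.
```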

While J.D. Power and Associates has made improvements over the years, this approach to measuring and analyzing CSI has proven valuable and useful for over 30 years. It has held up across industries and borders. There are countless examples showing how organizations with higher CSI scores outperform their competitors. Here’s just one, from J.D. Power: automakers whose dealers provide the highest levels of satisfaction during the warranty period retain a greater share of future service visits at the dealership, even after the warranty period expires (taken from a 2009 J.D. Power study).

How Can You Use This?

Thanks, Mark, for this insightful article! If you’d like to know more about the history of JD Power and how they’ve used the CSI to drive their clients’ success, contact Mark Rees at mrees2764@gmail.com.

There is nothing proprietary about the approach Mark used at JD Power: it reflects solid, sophisticated design and statistical analysis, applied beautifully. Nufer Marketing Research has successfully used this analytic path to develop improvement plans for clients across a broad array of industries, in B2B and B2C. If you’d like to explore the potential for using this approach to guide your own organization’s improvement and growth, please contact me at jnufer@nufermr.com – Julia
