Results: Understanding and interpreting survey scores
Accessing survey scores
To view the scores for individual questions in your survey, navigate to the Scores tab within the results dashboard.
Once in the Scores tab, you can select comparison scores to analyze your results. These comparisons can be made against:
- Higher-level scores, such as benchmarks, the entire organization, or previous survey results.
- Lower-level scores, such as specific teams or departments within your organization.
To choose a comparison, click on the Comparisons field and select either higher-level scores (e.g., organizational benchmarks) or lower-level scores (e.g., team-level results).
For a comprehensive guide on using the results dashboard, refer to: How to use the results dashboard.
Understanding the scores
To help you read and understand the scores, the results show whether score differences are practically noticeable, statistically significant, or both. Absolute differences (the coloring of the scores) reflect whether a difference is large enough to be noticeable in practice, while statistical significance (*) indicates that a difference between scores is likely real rather than due to random chance.
Interpreting these together helps determine whether a result is important enough to act on immediately, monitor over time, investigate further, or safely ignore.
Which is more important: the asterisk (*) or the color difference?
They serve different purposes:
- The asterisk (*) tells you that a difference is statistically significant.
- The color shows the absolute difference, which might still be important even if it is not statistically significant. Because smaller groups are less likely to produce statistically significant results, the coloring helps surface differences even at lower levels, such as teams.
- Together, they provide a more complete picture of the data.
Understanding the coloring of the scores
The colors in the dashboard represent differences in absolute scores:
- Green indicates a positive difference (your score is higher than the comparison group).
- Red indicates a negative difference (your score is lower than the comparison group).
- The shade of the color represents the magnitude of the difference—the darker the shade, the greater the absolute difference from the comparison group.
Why use absolute differences?
- Consistency and clarity: Absolute differences provide a standardized and easily interpretable comparison.
- Relevance: Absolute differences allow a uniform evaluation regardless of sample size or response distribution.
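As an illustration of how a color scale like this can work, here is a minimal sketch in Python. The point thresholds (2 and 5) and the function name are hypothetical examples, not the dashboard's actual cut-offs:

```python
def score_color(diff, small=2, large=5):
    """Map an absolute score difference to an illustrative color label.

    The thresholds `small` and `large` are hypothetical; the dashboard's
    real cut-offs may differ.
    """
    if abs(diff) < small:
        return "neutral"  # too small a difference to color
    shade = "dark" if abs(diff) >= large else "light"
    return f"{shade} green" if diff > 0 else f"{shade} red"

print(score_color(7))   # dark green: large positive difference
print(score_color(-3))  # light red: small negative difference
print(score_color(1))   # neutral: difference too small to color
```

The same logic applies in both directions: the sign of the difference selects green or red, and only the magnitude selects the shade.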
Understanding significance
The asterisk (*) in the results dashboard indicates whether a difference in scores is statistically significant, meaning it is unlikely to be caused by chance. This helps in identifying genuine patterns in employee feedback.
If a score is marked as significant, the difference is statistically meaningful at a 95% confidence level; in other words, there is less than a 5% chance that a difference of this size would arise from random variation alone. This helps users understand which changes in scores reflect actual trends rather than chance fluctuations.
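To illustrate how such a significance check could work, here is a simplified two-sample z-test in Python (a normal approximation; the dashboard's actual statistical method is not specified here and may differ):

```python
import math

def significant_at_95(mean_a, mean_b, sd_a, sd_b, n_a, n_b):
    """Illustrative two-sided z-test for the difference between two group means.

    Returns True when the difference is significant at the 95% level
    (p < 0.05). A simplified sketch, not the dashboard's actual method.
    """
    # Standard error of the difference between the two means
    se = math.sqrt(sd_a**2 / n_a + sd_b**2 / n_b)
    z = (mean_a - mean_b) / se
    # Two-sided p-value from the standard normal distribution
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p < 0.05

# Large groups: even a small difference (7.4 vs 7.2) can be significant.
print(significant_at_95(7.4, 7.2, 1.0, 1.0, 500, 500))  # True
# Small groups: the same difference is not significant.
print(significant_at_95(7.4, 7.2, 1.0, 1.0, 20, 20))    # False
```

Note how the same 0.2-point difference is significant for large groups but not for small ones, which is exactly why the coloring remains useful at lower levels.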
Should I only look at significant differences?
Not necessarily. While significance helps highlight key differences, other insights might still be valuable, even if not statistically significant. It’s always best to use significance as one of multiple factors when interpreting data.
Understanding the differences
By combining absolute differences with statistical significance, scores can be grouped into four distinct categories.
- No absolute difference (color) and no significant difference (*)
- No absolute difference (color), but significant difference (*)
- Absolute difference (color), but no significant difference (*)
- Absolute difference (color) and significant difference (*)
1. No absolute difference (color) and no significant difference (*)
You are seeing a difference that:
- Is not statistically significant: it could have occurred by random chance.
- Is practically small: it has little impact in practice.
What does this mean?
The difference is too small to warrant action — it’s neither statistically nor practically significant.
2. No absolute difference (color), but significant difference (*)
You are seeing a difference that:
- Is statistically significant: the difference is probably not due to random chance.
- Is practically small: the impact in practice is limited.
What does this mean?
The difference is demonstrably real, but you are unlikely to notice much of it in everyday practice.
Note
Such small but statistically significant differences often occur with a large group of respondents or when there is little variation in the responses.
Advice
Keep an eye on it, but it does not need to be a priority. It's good to track these kinds of signals, especially if you're monitoring trends over time.
3. Absolute difference (color), but no significant difference (*)
You are seeing a difference that:
- Is not statistically significant: it could be due to random chance.
- Is practically relevant: the absolute difference is large.
What does this mean?
You see a noticeable difference, but we cannot say with certainty that it is a "real" difference. In practice, it could be important, but there is no statistical proof.
Note
These types of differences often occur with a small group of respondents or when there is a lot of variation in the answers.
Advice
Take this difference seriously, but be cautious in drawing conclusions. Consider conducting additional research or checking whether this difference appears again.
4. Absolute difference (color) and significant difference (*)
You are seeing a difference that:
- Is statistically significant: the difference is almost certainly not due to random chance.
- Is practically relevant: the difference is large enough to have an impact in practice.
What does this mean?
This is an important difference. It is both demonstrably real and noticeable in practice. You can act on it.
Advice
Pay attention to this difference. Discuss it within your organization and determine whether action is needed.
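The four categories and their recommended responses can be summarized in a small helper. This is purely illustrative (the function and its wording are hypothetical, not part of the dashboard):

```python
def categorize(absolute_diff, significant):
    """Map the color/asterisk combination to one of the four categories.

    `absolute_diff`: the score is colored (noticeable absolute difference).
    `significant`: the score carries an asterisk (statistically significant).
    Illustrative only; not part of the dashboard itself.
    """
    if absolute_diff and significant:
        return "act: real and noticeable - discuss and decide on action"
    if absolute_diff:
        return "investigate: noticeable but unproven - check if it recurs"
    if significant:
        return "monitor: real but small - track the trend over time"
    return "ignore: neither statistically nor practically meaningful"

print(categorize(True, False))  # investigate: noticeable but unproven - check if it recurs
```

Reading the dashboard this way turns each color/asterisk combination into a concrete next step rather than a raw statistic.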