Most employees have an instinctive, visceral feeling that the traditional performance management process is, in some way, deeply unfair.
But it's hard to put your finger on exactly what's unfair, because the theory is absolutely flawless and makes complete sense.
The process goes like this: list the competencies we want to see in the organisation, such as Professionalism, Leadership, Critical thinking, Initiative and Ownership. Then, at the end of the year, have managers rate employees against each competency and map them into a nine-box grid or even a bell curve.
You might run some calibration meetings and ultimately end up with a map of people sorted into boxes, which becomes the model used to decide salaries, bonuses and promotions.
Right up to board level it's a very defensible, scientific approach to performance management that's based on data, right?
But when you're a participant, whether you're a manager, an employee or in HR, it STILL feels unfair.
So much so that when we ran the process in my previous organisation, we ended up short-circuiting it.
After the ratings were collected, I would sit with the other co-founders and our HR Manager and we'd do our own version of a calibration meeting, which involved going through each person and making decisions that more closely reflected reality.
It's not that our managers or employees were trying to game the performance management process or deliberately skew the data. In fact, I know they were committed to making the process as fair as possible.
I've since learned that we humans are terrible at rating other humans.
For example, I often wear my bright jacket (I needed some way to stand out against Josh Bersin!) and I ask people to rate me on a scale of 0 to 10 on how 'Professional' I look.
I've had every score from minus 8 to 10! This is called the idiosyncratic rater effect: the rating says far more about the rater's perception of what 'Professional' means than about the person being rated.
For some raters, 'Professional' might be a tailored jacket and for others it might be a black jacket.
Now, imagine there were two teams of people, all wearing the same bright jackets, and the two managers had opposing perceptions of what 'Professional' is.
You can see where I'm going. With this rating system, one team is going to be labelled in our nine-box grid as 'Under Performers' and the other team as 'Future Leaders', yet they are, in fact, exactly the same!
Now we're seeing the reality behind the boxes or bell curve. It's starting to make sense why it doesn't feel fair.
And we've only looked at ONE of the flaws in the underlying data.
The following biases also contribute to the flawed performance data underpinning so many of our performance management systems.
- HALO BIAS: Tendency to give favorable ratings due to strong performance in one or two areas.
- HORNS BIAS: Tendency to give unfavorable ratings due to poor performance in one or two areas.
- PRIMACY BIAS: Establishing a positive or negative opinion of an employee or their work early in the review period and allowing that to influence all later perceptions of performance.
- RECENCY BIAS: Allowing the employee's most recent performance level to skew the opinion of their total work for the cycle.
- SPILLOVER BIAS: Continuing positive or negative ratings for an employee based on the employee's performance in previous cycles.
- REFRESH BIAS: Ignoring patterns of positive or negative performance that carry over into the current cycle.
- LENIENCY BIAS: Consistently rating employees higher than deserved.
- SEVERITY BIAS: Consistently rating employees lower than deserved.
- NORMATIVE BIAS: Rating employees the same and ignoring individual differences.
- COMPARATIVE BIAS: Rating employees in comparison to one another instead of evaluating each against the defined performance expectations.
- SITUATIONAL BIAS: Tendency to upgrade or downgrade employee ratings by attributing factors outside the employee's control to the employee.
- DISPOSITIONAL BIAS: Tendency to upgrade or downgrade employee ratings based on the supervisor's opinion of the employee's personality or character.
- AFFINITY BIAS: Tendency to give higher ratings to those employees with whom the supervisor believes they have more in common.
- ALIENATION BIAS: Tendency to give lower ratings to those with whom the supervisor believes they have less in common.
- IDENTITY BIAS: Tendency to view and rate employee performance filtered through stereotypical assumptions ("microaggressions") about sex, gender, gender identity, gender expression, sexual orientation, race, ethnicity, national origin, religion, political affiliation, socioeconomic status, educational background, age, disability, genetic information, or veteran status.
Source: University of North Carolina
Rate your experience, not other people
So what do we do? It might sound obvious, but the process of rating other people needs to be ditched. Sure, it used to be accurate in the days when ratings were about widgets per hour, but it's not about that anymore.
Actually, the real challenge is not so much stopping as deciding what to replace it with. Just because the old way no longer works doesn't mean we should get rid of performance management altogether.
We're seeing a shift toward rating your own experience, which is far more accurate than rating other people.
What does this look like specifically?
We like the approach taken by Deloitte with their 4 questions:
- Given what I know of this person’s performance, I would always want him or her on my team. (Rating 1-5)
- Given what I know of this person’s performance, and if it were my money, I would award this person the highest possible compensation increase and bonus. (Rating 1-5)
- This person is ready for promotion today. (Yes/No)
- This person is at risk for low performance. (Yes/No)
Additionally, we ask these questions each quarter so we can record managers' sentiment more frequently, in the flow of work and away from the pressure of an annual process.
We cover this in more detail in our continuous performance management blog.
And rather than asking team members to rate each other through a 360 review, we're advocates of the work Marcus Buckingham has done with 8 questions to predict high-performing teams:
Once again, you'll notice each question asks for a rating of the team member's own experience.
So far, we've covered why the old way of generating performance management data is flawed and how you can replace it with a new style of question. This is really the first step towards a more coaching- and development-oriented approach to performance management.
Read here to understand the full Continuous Performance Management process: the definition, some examples and the best practices behind a continuous performance management strategy.