Ahh, the performance review. That necessary process we all love to hate, yet recognize as a critical ingredient in the growth of our company and in helping individuals grow professionally. So the question is: why is it so often such a painful process? And how can it be turned into something that is good for the company, and maybe even somewhat enjoyable and valuable to the individual?
I worked as a Program Manager at an enterprise technology company for a number of years in the late 2000s. During that time, the company was still using its long-standing annual review process to assess each employee’s performance over the year, which then fed into “calibration,” a meeting where the management team decided what final rating to give each individual. That rating, of course, tied directly to the size of your bonus, which in most cases was a fairly significant part of your total compensation for the year. After going through the process five years in a row, I became well acquainted with the system and understood what it was designed to accomplish.
The problem is, it didn’t work, and everyone knew it. While the principles behind it may have been sound, in execution it failed year after year. Those who paid attention found ways to game the system and significantly increase their odds of getting a top review. And if you chose not to play the games and hoped your performance alone would be enough, you were almost certainly destined for the low end of the curve. It’s not unlike a big game of Survivor: if you don’t play the game, you get kicked off the island.
The system broke because people realized that, at the end of the day, your performance rating was still based on a few key social dynamics that had very little (if anything) to do with your professional performance. Your ultimate rating came down to a roomful of managers arguing for or against you to determine who fell where on the curve. They were supposed to come equipped with data to support their arguments, but naturally some managers in the room carried more sway and commanded more respect than others. The managers would come away from the meeting with each person (in your level band) slotted into a place on the curve, and it then became your manager’s job to make the data fit that assessment. If your manager made a weak case for you, the result was a set of puzzling examples offered to explain why you just got the low review: a one-off comment you supposedly made passing someone in the hallway six months ago, or a single email sent to the wrong team in the last month before calibration. The punishment often seemed not to fit the crime, and similar offenders were somehow getting top reviews. How was this happening?
(Failed) Principles from the old system
Just as when rewriting an existing code base, you look for the major bugs and architectural flaws that caused the previous system to fail, and your new design directly addresses them. So what architectural principles in this process contributed to its ultimate demise?
Principle 1 – Who’s Your Manager? In order to “win” the game, you had to have a well-respected manager who could be an effective advocate for you. If they had little sway with their peers, your ability to get a top review (and the good bonus and growth opportunities that came with it) was significantly hampered. Your performance could be top notch, but if no one listened to your manager, it was a moot point. So when considering a new job, always weigh your prospective manager’s standing among their peers: their success will be your success.
Principle 2 – Become a Perception Management Engineer. My first two years, I was told repeatedly that even though my projects were on track and succeeding, there was a “perception” that they were not, and that until I addressed other people’s “perception” of reality, I would continue to receive low scores. (I was never told whose perception that was; I had to try to discover that on my own.) To me it was quite strange that the perception of reality was ranked higher than reality itself. I started to notice that the people who received the top reviews talked a lot, and to the right people at the right times. The bigger the public setting, the more they spoke, often with not-so-subtle self-promoting references to the wonderful successes of their programs and projects. The fact that the data showed their programs were sometimes failing, or that their results were mediocre at best, didn’t seem to matter. The perception was that they were on top of everything, and this was rewarded time and time again. So the lesson here was not only to focus on generating results in your work, but also to be your own public relations agent at the same time.
Principle 3 – Timing is Everything. Since calibration occurs at the end of the fiscal year, your most recent work carries the most weight in your team’s eyes. That home run you hit last October is a distant memory by now; they have probably forgotten whose work it was. So save your best performance for the last month or two before calibration. And if you’re looking for an opportunity to slack off a bit, that would most certainly be in the first half of the fiscal year, when no one will remember anyway. Also watch out for teammates who are friendly to you the other three quarters of the year but begin taking public pot shots at you in the fourth quarter, hoping to bump you down the curve (and themselves up). So be on your best game during review season.
What do we take away from this?
All of these social principles can be learned and then exploited (and they were), but none of them effectively rate the performance of a professional who is earnestly trying to grow in their area of specialization and provide real value to the company. While they do dramatically increase competition among employees (survival of the fittest, right?), these principles ultimately undermine the core purpose of the performance review itself: to provide an objective, data-driven view of individual performance. In the long run, this hurts the company and causes significant morale issues.
It should be noted that this company has since changed its review process to address some of these challenges (among others), and the results of the new system are still being assessed. In a future blog article we’ll look at how newer review and assessment models are addressing some of these key issues, and discuss alternative views on how to build and grow more effective teams and organizations over the long term.