In this post we will discuss the performance review from the point of view of the company, and how to fake it. Or perhaps how to make something useful out of it.
For clarity we will speak about “employee” and “manager”, even though at MoveinBlue we don’t currently have managers nor employees — just freelance engineers. But dirty jobs have to be done too.
When you have many people working together you can expect some of them to do better than others. Performance reviews are a way for employees to know how well they are doing; good ones are expected to keep course while bad ones are generously given a chance to correct their ways. Also, salary increases (or their absence) are used as a motivating factor.
How well do these reviews work in practice? It depends, but apparently not so well. Let us first analyze why, and then we will propose an alternative method which shows clearly that our engineering cluelessness can be applied to areas like human resources, well outside the realms of technology. In this case we will not even attempt to code our way out of what is essentially a human task, promise!
In a classical performance evaluation a number of goals are set at the start of the year, and the performance is reviewed based on how many of those goals have been achieved. Setting long-term goals is thus ingrained in the review process.
This makes me wonder. Goals should be set when they make sense, and strategies and priorities should flow across the organization whenever they change. Unless you are a fossilized organization, doesn’t it make sense to change priorities in the middle of the year? But then how are annual objectives to be evaluated? Whether they have been met hardly matters any more; changing goals mid-cycle is an administrative nightmare for human resources.
Perhaps this kind of goal-setting makes sense in a sales team: each salesperson has a quota that has to be met. There is no such thing as a “quota” in an engineering team. And unless your goals are so bland as to be meaningless (like “customer excellence”, things that no sane person would dream of not pursuing), the ability to change course is important for an organization. Perhaps the company has to watch the competition more, or less; maybe profits must be put before growth this year. Whatever. These objectives should be known by all employees, but it is hardly fair to evaluate all employees against them, even those that have nothing at all to do with revenue or growth.
Another common complaint about performance reviews is that feedback comes too late to do anything about it. An annual review can let a full year pass before the employee hears anything about new responsibilities, a changed situation, or even a perceived deficiency. Frictions with other teams are allowed to grow and bloom for months on end.
Is the manager supposed to keep track of all the bad things done during the year, like a corporate Santa Claus, and then decide what presents have been earned? Actually, yes. Should the vengeful manager give the feedback right away so that the employee can change their ways? Perhaps, but that would take the fun out of the performance review in a world full of average people.
Motivating Key Employees
It is sometimes said that performance reviews are crucial to motivate key employees. If you think about it, this opinion seems a bit backwards: it is not about reviewing the performance of every employee and finding out who has done astoundingly well, but rather about motivating those employees already known to be key. So, why not just let key employees know they are important and treat them well?
There is also the question of those employees who are not key: what is going to be their reaction to a performance review done for the benefit of a select few? While we are at it, why not treat everyone well?
Finally, if performance reviews are really objective, perhaps they should be a way to locate previously unknown key employees. But who is going to believe that? Any manager who admitted in public that they had identified a key employee during a routine performance review would probably be sneered at by their peers. Maybe rightly so — after all, how could a manager ignore that fact for a full year?
In my short experience, the results of the performance review depend as much on the managers as on the employees: the same worker under one manager performed quite differently than under another. This should not surprise anyone: managers set the goals and conduct the reviews, and their incompetence will probably be reflected as the incompetence of their employees.
In practice what is being evaluated is how employee and manager fit together. Some systems like 360-degree feedback allow employees to rate their bosses, but not the specific combination of employee and manager. The fiction of an objective review is allowed to continue.
In our case most of our contract workers work remotely, so the reviewer (me, sadly) is not there every day to appraise their performance. We might use technology as the answer: gather some objective metrics (such as git commits or number of problem reports closed) and use them to find out who our key employees are. But that would be just as useless. Let us face it: if people are doing their jobs then their performance is adequate. We are not so many people that we need a chart.
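Just to make the point concrete (not because we recommend it — we promised not to code our way out of a human problem), this is the kind of metric-gathering sketch we mean. The function and sample data are hypothetical; in practice the author names would come from something like `git log --format='%an'`:

```python
from collections import Counter

def commit_counts(log_entries):
    """Tally commits per author from (author, message) pairs.

    A hypothetical helper: real entries would be parsed from the
    output of `git log`, which we skip here for illustration.
    """
    return Counter(author for author, _ in log_entries)

# Sample data, invented for the example.
entries = [
    ("Trinidad", "Fix login bug"),
    ("Trinidad", "Refactor scheduler"),
    ("Alex", "Update docs"),
]

print(commit_counts(entries).most_common())
# [('Trinidad', 2), ('Alex', 1)]
```

The chart is easy to produce, which is exactly the problem: a high count says nothing about difficulty, code review load, or mentoring, so the ranking would mislead more than it informs.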
There is one excellent use for the performance review: to get feedback out of employees. Suddenly it makes sense to do it with contract workers and temps, too: what do they think about their job? Do they want to explore other areas, do new things, or diversify their assignments? Perhaps there are other useful tasks that they can undertake?
If anything in the feedback surprises the manager, then they are not doing a crucial part of their job properly: communicating with employees. That is great feedback on the manager’s performance too, and something that can be useful in itself.
How to Fake It
The most common issue with performance reviews is that they represent a lot of work for everyone involved: managers, employees, human resources. Refinements of the process like mid-term evaluations or 360-degree feedback only increase the amount of paperwork.
At MoveinBlue we use a few simple questions not to review performance, but to keep the ball rolling. They represent the start of a dialogue between reviewer and reviewee that should take place approximately every three months. The first six should be answered by the reviewee:
- What I like about MoveinBlue:
- What I dislike about MoveinBlue:
- What I would change about MoveinBlue:
- What I like about my work:
- What I dislike about my work:
- What I would change about my work:
The last three should be answered by the reviewer (in this case for someone called Trinidad):
- What I like about Trinidad’s work:
- What I dislike about Trinidad’s work:
- What I would change about Trinidad’s work:
These questions are sent and answered by email. A personal interview follows, which should take between half an hour and an hour; any outstanding points can be discussed at length. Finally the reviewer attaches the answered questionnaire to the reviewee’s file.
This process allows some difficult things to be said that would usually be perceived as confrontational or negative, and it gathers opinions from everyone involved. Good ideas can come out of it too: the kind of crazy proposals that engineers usually keep to themselves.
Give it a try, and let us know in the comments how it went!