Abhay Singh
Hello, everyone, and welcome to another episode of our podcast, How Do They Do It?
For those of you tuning in for the first time, this is a place where we try to surface the best tactical advice people leaders need to improve talent density in their organizations, with real practitioners who have repeatedly done it in their jobs.
Now, if I do my job well, this episode should be available to you sometime in early 2024. And while thinking through what content might be most helpful at this time of the year, performance calibrations was an easy winner. It's a practice that, to the best of my knowledge, still widely exists, with many variations in how it's run and the purpose it serves. It gets a bad rep and often involves a lot of friction. So I thought this is the best time for us to declutter it.
With me today is Rachel Kleban, VP of People at OpenPhone; she has worked with VSCO, Airbnb, and Gap, amongst others. While I was scrambling and scouting the internet trying to find the best partner and practitioner with deep expertise in managing calibrations, I came across her LinkedIn blog, and I was certain I had to find a way to get her on my podcast.
She carries a deep passion for performance management, which is something we share, and she believes in tying people strategy to business strategy, as I have learned through her body of work and my interactions with her. And you will notice today that she has a rare trait: being able to operate both at a high level and dig into the details of execution.
Thank you for joining us today, Rachel.
Rachel Kleban
Yeah, thanks for having me.
Abhay Singh
I always like to ask my guests about one aspect of their life that people would not find on their social media profiles or web presence.
Rachel Kleban
Something I'm very into right now is jigsaw puzzles, a silly little hobby. But I've really found it's just about the only thing during which I don't think about my phone or my email or my work.
So a really nice meditation to dig into a good puzzle.
Abhay Singh
Things we do to get rid of our phones.
But yeah, it's meditative for sure.
Rachel Kleban
I know, right?
Abhay Singh
Tell us a bit about your new role at OpenPhone, your journey so far. What propelled you to take this up?
Rachel Kleban
Yeah. So in October I joined OpenPhone, which is a Series B startup in the voice-over-IP phone space. I was a consultant for two and a half years prior to coming back in house. I loved consulting, but I did really miss aspects of being part of
a team working towards a common mission.
And so when I met the folks at OpenPhone, I guess I had a lot of respect for the culture they were building.
Despite being fully remote from inception, there's a really strong sense of connection and collaboration, and a thoughtfulness around the culture, which I was attracted to.
And they're also at my favorite stage: Series B, about 115 folks. That's the moment when you get to say, wow, we've built something really special so far, and unless we're intentional, we're going to lose it. So I got to join right at the point to think about that intentionality of growing a team and scaling what's really special and delightful about it.
Abhay Singh
Coming back to what we have gathered to talk about: performance calibrations.
Maybe it's January, HR leaders have gotten the forms filled in, and they're ready with their spreadsheets. It's a less discussed topic; there's plenty of discussion about performance reviews, but no one really talks about what happens after that.
People sit through day-long meetings doing calibration, yet when we speak to most talent leaders, it's still a necessary process. One thing I pick up right off the bat is the sheer time sink it takes to get it done, almost as long as it takes to get the forms filled in and chase people through that, and they're unsure if they're moving the needle at all. If I'm an HR leader, why do you think it's even relevant to solve this difficult problem today?
Rachel Kleban
Yeah. I mean, look, I totally understand the challenge here. It is time consuming. I think that's on us, right? To make that time feel really valuable, and to make the time together feel like it's not just a box you check for your performance review process, which I do think is important. I think that doing performance reviews but not having any calibration borders on making the performance review itself a little bit less valid, because you haven't looked and said,
did we do it right? Do we agree with each other? But I actually think there's a bit of a mental shift that can happen that makes calibration feel more useful, which is instead of thinking of it as a tool for your performance management system, think about it as a tool for your manager or leadership development strategy. I've seen people sit in that room and learn and build the muscle, individually and as a team, as an organization, of accurately and fairly assessing talent. And I've seen that muscle be flexed at other times during the year, whether that's in conversations about performance or development, or in thinking through how we grow and manage our team. And when you've seen a team go through several cycles of calibration together and get better at it, you realize you're not just checking a box for performance, you're actually developing a really important skill in managers. So when you can shift your thinking to something a little bit broader than that, it feels less like a time sink and more like an investment.
Abhay Singh
Interesting. I clearly picked up, and I think it resonates with my prior experience, Rachel, that transparency, fairness, and consistency of standards are top of mind for most HR teams. Even in organizations with the best of managers, no one really claims today that they have the best managers; it's an ever-evolving space for them. But just so I'm not assuming the obvious, and for any young HR leader, maybe at a seed startup, hearing from you, having done this inside and out: what are the two or three objectives I'm looking to achieve through this process, the must-haves? Of course, I might have some company-specific objectives I want to achieve as well, but what can I look at generally?
Rachel Kleban
Yeah, I mean, I think you hit on most of them. So I think the first is alignment: do we agree on what's expected? Then consistency: are we assessing those agreed-upon expectations the same way? And the last would be about creating space to talk about and check each other's biases. It's human nature; we bring biases into our assessments and into these conversations. And when you create a space where you can be open about that and talk about it, it's actually quite illuminating, and it helps us feel really confident that we've intentionally tried to remove as much of that as we can.
Abhay Singh
I always like to sneak in a controversial question or two. Having run through these processes any number of times, what is the CEO typically asking for from this process? Because I'm guessing at some level, at some stages, they would be sitting through it, the CFOs are involved, people cost is a line item for them. What are they looking for from this process? Are they just passive participants?
Rachel Kleban
It's an expensive meeting, right? There's no doubt about that, particularly when you're calibrating at leadership levels. But I've actually seen CEOs in particular be really enlightened by an effective conversation. They get to see the bar their leadership team is holding. They get to realign on what those expectations are and let that cascade through the organization. So it's an expensive meeting, but it's investment well spent. I did have one executive tell me that he used to dread performance reviews, and after I implemented this process, he actually enjoyed it. I don't know if that makes it worth the money, but that was a feather in my cap for sure.
Abhay Singh
I once ran into a finance executive who was very curious because, over the course of his experience, he had come up the curve and realized the power of talent. In previous years, he had been part of some redundancy exercises, and he said that at one point he didn't even know who the top or bottom performers in his team were. This process opened his eyes, so he started paying more attention.
You wrote that within the meeting itself, we have to manage the conversation in a way that leads to better outcomes.
What quick tips do you have from experience that you can share with the community when they step into their next calibration meeting?
Rachel Kleban
I think the biggest issue I see in calibrations is the expectation that you get to talk about everyone, and ultimately there's just not going to be time to do that. And when you talk about everyone, there isn't disagreement or discussion about everyone, so you end up spending a lot of time saying, oh, I agree, she is great, and here's why, let me tell you more reasons why someone is so great. Actually, I think the goal of calibration is to surface healthy debate and to surface inconsistencies and lack of alignment. So I'm always looking at ways to focus the conversation there. There's a tendency to solve this by saying, okay, we'll look at the top performers and the bottom performers. But I actually worry about that, because it assumes there's total alignment in the middle.
And I think a lot of people get sort of plopped in the middle and not noticed, and those could be high performers or even low performers.
So the way I solve for this is by adopting what I call a flagging system, where, either as pre-work or live in the meeting, we review the list of employees and their ratings and ask participants to flag who they want to discuss. Maybe they don't agree with a rating, maybe they want more information, they're surprised, or they just want feedback on someone, maybe their own direct report. Doing it this way offers full visibility to the group, but you're not talking about everyone; you're only talking about the areas where there may be some disagreement or question or curiosity.
And then there's one other tip that I would offer, and that would be to create a space intentionally where biases can be called out. I think that we don't necessarily always do that intentionally. I start every calibration with a primer about the biases
that can come into a performance assessment, and we do a round-robin where we look at all of those biases and each participant says, hey, I think I'm most prone to recency bias, I'd love for you to call me out on it. So you're explicitly giving each other permission to call each other out, and that can be really helpful.
Abhay Singh
So that was, of course, what I can do during the meeting, but I'm guessing there is a bunch of stuff I can do before the meeting as well in order to have a successful meeting.
Rachel Kleban
Yeah, a couple of things come to mind. One that I think people don't think about is how you communicate with employees about what calibration is. They're not in the room; they just see the process from the outside. In the old days, when we were all sitting in offices, we'd draw the blinds and everyone would shut themselves in a room for hours. Now it probably looks more like blocks on calendars, but ultimately it's a benefit to just talk openly about what calibration is, so there's more organizational understanding and less of this feeling of a secret meeting where we talk about you. It takes the weirdness and the stigma out of it. And then prepping managers, of course, is really important.
It's like, do they know how to be prepared? Do they know what's expected of them in that meeting? And do they know what good looks
like in terms of presenting information? That's going to be helpful in keeping things moving in the meeting. And then the thing I learned later in my career is that it can be really helpful to share the data ahead of time, if you have a secure way to share with the calibration group the ratings and the information you want to talk about.
So I've started to do that where I can, and I find that that helps.
Abhay Singh
Interesting you bring that up. I remember when I was 21 years old, fresh into my first job, and I could see that once assessments were filled in, these eight people would go into a room, and when they came out, they knew what increment and bonus I was getting. I didn't even know the word calibration back then, but I knew that this meeting was really important and that I had to be on my best behavior in the days leading up to it. So this brings back funny memories, I don't know, good or bad.
Rachel Kleban
If I could just add: there's this perception that employees don't think reviews are fair. And if, as an employee, you envision people going into a room and chatting about you, you probably think, well, that doesn't sound very fair. But if you say, hey, the reason we do that is to make sure we're all assessing our teams consistently, that we're talking about our biases and mitigating them, you actually have the opportunity to paint a picture for employees that we are investing time in making this fair. It's not just at the whim of your manager, and that's a really different picture.
Abhay Singh
Interesting, you mentioned that it needs to be communicated to the employee base. I'm just wondering how that would work in the real world, because for entry-level talent it's very difficult to grasp these complex HR concepts of calibrations and ratings and talent development and what we do with them. Even I don't think I would have gotten it until I was 25 or 26 and an HR consultant; I get it now, after 10 to 12 years of working. What has worked well for you in terms of communicating to the end employees who are not managers, and how do we even craft the message for them?
Rachel Kleban
I mean, I guess I disagree a little. I think people can understand it. Depending on where you're working, they generally have a college degree and understand the basics. You don't have to overcomplicate it. Your manager is responsible for assessing your performance, but we don't want them to do it in a vacuum. We want to make sure that all of our managers have the same idea of what's expected so that we can fairly assess you. That's why we get together and talk about what our expectations are and how we're assessing them fairly. We talk about the potential biases that might come in and how we can mitigate them so that the outcome is more fair. I don't think that's so hard to understand. I don't think we need to underestimate people or overestimate the complexity of our systems.
Abhay Singh
Yeah. And I think that's why it remains a complex problem, because sometimes there is no straight answer and there is a lot of context that goes into it. Another aspect I have commonly seen is that there is a fine balance between how complex you make your process and how much value you get from it. After a point, the complexity you add really isn't adding any value back; it's just making the calibration process itself harder to run.
Of course, some people look at it by manager team, or they cut by function, department, location, or pay band. Now people look at diversity and equity components as well, at least the forward-thinking organizations. Have you found incremental, deeper value by going deeper into these cuts for the calibration meeting itself, with managers, with leaders?
Rachel Kleban
I actually generally keep it pretty simple. I think the most important cut, so to speak, is level: are we comparing people to the right expectations for their level? So going level by level, cutting it by level, to me is critical. Anything else, I look at zoomed out at a company level. I do like to look at gender and ethnicity and make sure there are no seams, nothing we need to be digging into. But in the calibration itself, I actually prefer to keep it simple.
I think the one thing that's interesting, when you do get to a company level, when you're at your sort of executive level of
calibration, is looking function by function. And particularly having that conversation in conjunction with conversations about business performance or functional performance, you might have a marketing team which is on the whole rated higher than other teams, but that marketing team didn't hit their OKRs, or we aren't seeing the traffic we need, they're not meeting the goals.
And so is it right that that team is rated higher, that it's being assessed at a higher level than others? To me, that suggests some inconsistencies amongst ratings, and I think that's a really important conversation to have: do the overall ratings of a function match the actual performance of that function?
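To make the function-level check Rachel describes concrete, here is a minimal, purely illustrative sketch in Python: it compares each function's average rating against its goal attainment and flags mismatches worth raising in a leadership-level calibration. All names, thresholds, and numbers are hypothetical assumptions, not taken from the conversation.

```python
# Illustrative sketch only: flag functions whose average rating looks out of
# line with their goal attainment, as a prompt for discussion.
from statistics import mean

# Hypothetical inputs: per-employee ratings (1-5) and per-function OKR attainment (0-1).
ratings_by_function = {
    "Marketing": [5, 4, 5, 4, 5],
    "Engineering": [3, 4, 3, 4, 3, 4],
    "Sales": [4, 3, 4, 3],
}
okr_attainment = {"Marketing": 0.62, "Engineering": 0.95, "Sales": 0.88}

def flag_mismatches(ratings, attainment, rating_bar=4.0, attainment_bar=0.8):
    """Return functions rated high on average but below the attainment bar, or vice versa."""
    flags = []
    for fn, rs in ratings.items():
        avg = mean(rs)
        hit_goals = attainment[fn] >= attainment_bar
        if avg >= rating_bar and not hit_goals:
            flags.append((fn, f"avg rating {avg:.1f} but only {attainment[fn]:.0%} of goals"))
        elif avg < rating_bar and hit_goals:
            flags.append((fn, f"avg rating {avg:.1f} despite {attainment[fn]:.0%} of goals"))
    return flags

for fn, note in flag_mismatches(ratings_by_function, okr_attainment):
    print(f"Discuss {fn}: {note}")
```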
Abhay Singh
Yeah, I'm smiling because you have again brought back a very funny memory. I met with the CEO of a mid-market company about four years back and said, 80% of the ratings are fours and fives, but every function has missed its targets by 30%, so how do these relate to each other? And we were just sitting there staring at each other.
Rachel Kleban
Was that lack of performance at an organizational level a product of individuals not doing their jobs, or were there things outside of their control? Sometimes the answer is yes: this marketing team busted their butts, worked really hard, and exceeded expectations, and some outside factor kept them from meeting their goals. But often it is a conversation of saying, this doesn't match up; maybe we need to humble these ratings a little bit.
Abhay Singh
Absolutely. I like your earlier suggestion that within the meeting itself, you focus on the most critical elements. For the other cuts, maybe the analytics team or someone else can look at them outside the meeting, and if you find something alarming, you elevate that for discussion. But not everything needs to be discussed.
Rachel Kleban
I tend to present that data to the leadership team, whether that's part of their calibration or separate, to get that higher-level view of how we're performing as an organization and how this went. I think it's overly complicated for individual managers and teams.
Abhay Singh
Most calibration meetings, the way I have understood them, are only useful with a framework, unless you're going old style with a stack rank from top to bottom. Some are still doing the 80s Jack Welch forced curve, or they're doing a recommended distribution. Or, what I'm now seeing the most progressive companies doing, they're looking at individual growth, where they track talent velocity over a period of time. How do I select what works for me, and how do I convince my organization to change?
Rachel Kleban
I don't actually have a strong opinion about what organizations should or shouldn't do around sort of forced distributions or versions of that. I do have a strong point of view that you need to be really clear on what you're doing going into the calibration, and I'd say the worst calibration experiences
I've had and observed are the ones where in the leader's mind, they're forcing a distribution. They aren't explicitly saying it to
the team, and so they're pushing, and the team doesn't know why they're pushing. So if you walk into a meeting and say, our goal today is to hit this distribution, that's fine. I may or may not agree with it, but that's a very different outcome and a different process you're looking for, because your goal is to actually move people until you get where you want to go.
If you're coming in with no distribution in mind, or a recommended one, where we think we should have about this percent of the organization here, the goal is more around alignment and around how you're assessing your talent. If 50% of the population exceeds expectations, the conversation is not about who we move, but about what bar we're holding. Are we holding a high enough bar for our expectations if we think everyone's exceeding it?
I don't have a strong opinion about the method, but I have a strong opinion that you should have one and be clear about the outcome you're trying to achieve, because, again, the worst calibrations I've experienced are the ones where managers are scratching their heads, saying, what are we doing here? You said we didn't have a forced distribution, but it feels forced.
Abhay Singh
As I'm thinking about it, there's of course a stereotype, or a belief, that maybe some of the old classical industries you've been in, retail and manufacturing, still adhere to those systems: 10% top performers, 80% in the middle, as you said, and 10% at the bottom, on a PIP or rotated out. Whereas the tech wizards of the world, and you've been at Airbnb and some of the most progressive firms, and now an early-stage startup, are different. In your experience, including your consulting work, did you find this divide between the old and the new world, where some are still focused on the GE methods while progressive or tech companies are focused on newer methods?
Rachel Kleban
I don't see a ton of rank-and-yank, stack-rank, forced-distribution anymore. When I think about the pockets in which I've seen it, it has been more frontline, hourly, customer support, or sales teams, where maybe there's still some element of holding on to that. But for the most part, in my experience, most people have a bit of a distribution in mind. They aren't forcing it; they want to hear the story or the narrative behind why we landed this way or that. And as a result, I think everybody's coming in a little higher than they want to. But for the conversation about whether we're holding the right bar, that sort of recommended distribution serves the purpose well, without it feeling, honestly, just kind of gross to have to say which person should we put in this box or that one.
And sometimes when you do force it, you end up with, well, this person might be louder, so I'll just appease them, put their person in the higher rating, and move this other person lower. I think there's a lot of risk for bias there: who's going to be the loudest, the person who has the most privilege, the person who's most used to advocating for themselves. So I think there's enough ickiness around a forced distribution. I don't see it a whole lot anymore, but it's an interesting question.
I haven't seen it come up that much, but I know it still exists out there.
Abhay Singh
Yeah, it's one of my outstanding requests to my data team: can we look at some trends on this? But I think maybe that's a topic for another time.
We spoke about this briefly in my previous question: I am now increasingly seeing clients approach us wanting to track individual growth, once this person entered the workforce, how much, from one cycle to the next, have we actually helped them grow? How quickly can we get them to peak performance? What is our talent velocity like? They want to move away from the practice, which they consider archaic, I think you used a different word, of comparing people, which doesn't really sit well with them. How hard is it to move to this method of tracking individual growth and do away with comparing people? Is it practical, or is it still one of those theoretical concepts?
Rachel Kleban
Interesting. Well, first of all, I might challenge the assumption a little bit. I think a good calibration doesn't compare two people to each other, but rather assesses how you are comparing two people to a standard expectation. So I do think you can still do a good calibration that doesn't have to feel like, or be built on, comparing people to each other. To answer your question about how you track growth velocity, that's interesting; honestly, it's not something I've dug into a lot.
I guess the question I would ask is what that growth velocity is indicating, and if and where it breaks. Does it make a lot of sense from level one to level three, and then break when you start needing these more senior roles? Does it track? I don't know.
That's an interesting question I'm not going to have a great answer to because it's not something I've put a ton of thought into. But I think that for me, and particularly with the lens of diversity, equity, and inclusion, frankly, the key to performance reviews is about clear and consistent expectations against which you're consistently comparing everyone. And that's where you get the ability to mitigate a lot of bias, to drive a lot of consistency.
Abhay Singh
Hats off on the earlier tip you gave for any HRBP listening in: a lot of times in these meetings, it ends up becoming an A-versus-B sort of conversation. And when you are facilitating that conversation, you're the shepherd, always bringing people back, bringing the conversation back to comparing against the standard.
A golden tip. You might have to do it a thousand times in every calibration process, but you've just got to keep reminding people, because I've seen it turn into this A-versus-B conversation: hey, if A is rated this, then how can you say B is also this? B is clearly not there.
Rachel Kleban
There's some value in that, right? Because that's your clue that you're either holding different expectations or assessing them differently. So to go back and say, well, let's see, let's go back to why we assessed A as exceeding expectations and what were the criteria? Okay, now let's look at B. Where is it different against those? So you are still kind of comparing two people, but you're using that comparison as a way to come back to the expectations.
Abhay Singh
And to your second point, we have seen some early models. These are like two companies in thousands, so they are still early signals, but what they're looking at is goal progress, achievement, and maybe competency development over a period of time. If that is happening, they want to better reward people who are growing quickly. But it's still early days; I think that's something we are studying as well.
Abhay Singh
Which brings me back to my next question, around this model: you do need some sort of framework to guide this conversation. No one's doing a forced distribution, I think we can concur with that, even if some companies in some developing economies are still using some sort of high/medium/low distribution. But what I now mostly see the market moving towards is some variation of the nine-box. I've seen three boxes, nine, I've seen 16 as well, and I'm sure one day I'll see 24. What is the value in using nine-box frameworks? Have you used them, and how has your experience been so far? Because this goes beyond calibration; there's a bunch of stuff you can feed from your talent segmentation into your other talent processes as well.
Rachel Kleban
I'm really hesitant about nine-boxes. I've definitely used them early in my career. The thing I love about performance reviews and performance management is that it's a backward-looking measure: you can say, I observed this behavior against these expectations, and therefore I can more or less fairly assess your performance. When you start adding potential into the mix, it's really hard to be objective, it's really hard to be fair, and it's really hard not to let bias come into it. In the absence of observable data or a clear definition of what I'm looking for, what am I observing to assess potential?
Like our brain takes shortcuts, right? So, for example, I may say, look, I'm a VP of People, and the path I took was through HRBP and then talent management roles. So when I see someone who's on the path of HRBP and then talent, I say, oh, they must have potential. They're on the same path as me. And here I made it to where I am today, right? So I make shortcuts about what potential
looks like based on my own experiences. And that sort of leaves out people who take different paths, right? I think that's one thing.
And then, frankly, we close our eyes, and this is totally normal, and when we think about someone who's met their potential, we picture a white male in a leadership role, right? So I think the DE&I, the diversity and equity implications of potential are really quite serious. In the early days when this was very much the practice, and we did this all the time, at Gap and at other places, I saw people get put in a box and told, you don't have potential, but you're a great performer. And then nobody gave them a second look. I have one person in mind where, years later, somebody said, what are you talking about? Let's give them a chance. And they flourished in their career. So I think it can be really limiting, and I think it can be really dangerous. And I understand we want to look ahead, we want to plan ahead. So where I've seen it done more objectively, I've seen behaviors associated with potential, like growth mindset or curiosity or learning agility or resilience, things we think are the keys that get people to the next level, that we can observe today. So my advice is, if you're going to do it, be really thoughtful about how you're defining potential, be really aligned on what that looks like, and actually observe those behaviors as opposed to making guesses.
Abhay Singh
That's super fascinating, because I'm just correlating conversations in my head. Incidentally, last week I was recording a podcast episode on talent reviews, and this was actually one of my questions that, until now, I hadn't understood: what is the difference between a calibration meeting and sitting down to do a talent review? Now, if I hear you correctly, and I'm a good student, what you're saying is: let the performance review calibration be about performance. Do the talent review separately, maybe three months after; you don't even need the same people to participate in it. Do all your talent planning separately from it. The performance review is for a fixed period of time: how did you perform in that?
Just talking about that.
Rachel Kleban
That is both how you simplify it, we talked about that already, and how you keep it quite fair, and how you don't have twelve-hour meetings. It's all the things we've touched on: the time investment, the complexity, and what you're asking of managers. It's a lot for managers, so: your job is to assess your employees' performance, let's get really focused. And that data point can be used for a lot of things, including as an input into some version of a talent review. But yeah, I do like to keep those conversations separate.
Abhay Singh
The only thing I like about those twelve hour meetings is that the company pays for my lunch and my dinner and I don't have to worry about it. There you go.
I'm going to take you back into the room, and to this very interesting term you used: the bias buster, the role HR can take on in the room. I don't want this to become a complete glossary of biases, because I know there are a lot of them, but what are the top two or three biases that, as an HR professional, I'm looking out for in that calibration meeting?
Rachel Kleban
So, first, there's the group from the laundry list you mentioned, the things you find when you Google bias and performance reviews. There are a couple of gold-star articles out there that I think everyone pulls from: halo/horn, recency, and "similar to me," which I think is one we end up talking about a lot. Those are the biases that kind of everyone has heard about, sort of knows about, that are important to just remind people of. And those are the ones that feel easiest to call out, because they're generally not about a person's identity but more about your own view of somebody's performance.
The second is that it's really important to listen for what would be called gendered language. Research has shown that women get more feedback about their style, their communication, their how, and men get more feedback about their results. Both are important, but when you are only getting feedback about your how and your style and not about your results, there is a meaningful implication for how people think about your performance and how they think about you for promotion. So it's about making sure the conversation is balanced, that everybody's getting feedback on both, that you're talking about both style and substance, and being really thoughtful that we aren't just saying, oh, she's such a great leader, she's such a great... No: what did she deliver? Let's talk about that. So I think that's something that's maybe less obvious but can be really important. Hey, we just talked about John and the project he delivered; I'm hearing that Sally is a great manager who really takes care of her team, but tell me more about what her team delivered. Just asking those questions shifts the conversation towards that more balanced perspective.
So those are the two buckets I'd say I look for. And I think I've mentioned this already, but you don't have to be the only bias buster; there are ways to really empower others. I gave one example already. Another thing I've done, and I wish I still had them in a drawer, is that when we were doing these in person, we printed cards with tips for busting bias, and you saw people physically hold them up, like, hey, bias. So there are some physical cues. Even on Zoom you can do it; there could be an emoji you pick that you drop in chat if you hear it. Making it feel tangible, making it feel real, can help.
Abhay Singh
I'm going to quickly link this to something else you mentioned, because as I'm hearing you, I'm thinking: if I'm a young HRBP, how am I doing this in that room? Being the master facilitator sounds easy, but oftentimes in these big committee meetings, around a roundtable or on a Zoom, there is my CEO, some senior leaders, maybe some scary personalities, some outspoken personalities, all vying for this common kitty. And you said it yourself, everyone thinks their team is the greatest and their people are the best. How am I getting good at this? This sounds like one of those classic skills, influencing without authority. How can I, as an HR professional, get good at this?
Rachel Kleban
Honestly, I think it's observing others do it well, and, as a more senior HR leader, creating those opportunities. These rooms feel really private; people don't want a big audience. But there's a really good trick, which is to say, look, I need to bring my junior HRBP in as the notetaker, I need someone to run the slides, I need someone to track our decisions so I can be fully engaged in facilitating. And that's how I've brought more junior folks in to observe and see what good looks like. That's the best advice I have: watch it go down.
The other thing is that the senior-most leader in the room can play that role of helping you learn and helping the facilitation go really well. So align ahead of time with that person: what role do you want to play? What role should I play? What do you want to do when you disagree with a rating? Do you want to speak first or last? How will we do this? Because that person carries an outsized voice.
So that's another thing: if you're doing this for the first time, take the leader on as a partner and ask, how are we going to facilitate this meeting? That also brings in someone who hopefully has some experience doing this, as a bit of a mentor.
Abhay Singh
Those are really helpful tips. I know there's no magic answer here; you're not going to get good at this immediately, but you have to start somewhere.
And I think I get your point. Like anything, do some homework, be as prepared as you can be, and then hopefully, over time, you can be really great at it.
Now, this is maybe one of my top two pet peeves about calibration. Rating adjustments, feedback, standards, promotion, the common use cases are covered. But a lot of organizations still link compensation directly to the process, so what a lot of managers end up doing is, they know the math behind it, and they run the process to get to the number they want.
How does one deal with that?
Rachel Kleban
If you're in that room and you actually get someone to admit they have a low performer, that's a win, right? Just having a safe space where people can make that hard call is important. But honestly, the best thing I've done in the past is just make some rules, like, okay, you can't be underperforming two cycles in a row. We'll give you one; you have time to improve. If there's no more immediate action and you get to another cycle and you're underperforming again, that needs to be a termination decision. So having some rules or guidelines, like anyone who does not meet expectations goes on a PIP, if you use a PIP. Having some standard rules means you can go down the list and say, as an HR VP, my job is to follow up with every single one of these managers after the review is over and say, here's the next step, because we agreed: if this, then that. Look, I've seen situations where the person is not meeting expectations time and time again and we just keep letting it go. So some rigor, some alignment ahead of time, and being honest with the organization and with managers that this is the follow-up we're going to do after the review, I think, is helpful.
It's like the least fun part of the whole thing, but it's important.
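The "if this, then that" follow-up rules Rachel mentions could be encoded as a simple post-calibration checklist. The sketch below is illustrative only; the rating labels, rule thresholds, and names are hypothetical assumptions, not her actual policy.

```python
# Illustrative sketch only: map each person's rating history to the agreed
# follow-up action, so HR can generate a post-calibration checklist.
from dataclasses import dataclass

@dataclass
class ReviewHistory:
    name: str
    manager: str
    ratings: list  # most recent last, e.g. ["meets", "below", "below"]

def next_step(person: ReviewHistory) -> str:
    """Apply hypothetical 'if this, then that' rules to a rating history."""
    if person.ratings[-2:] == ["below", "below"]:
        return "escalate: second consecutive below-expectations cycle"
    if person.ratings[-1] == "below":
        return "start PIP (or equivalent) and schedule HRBP follow-up"
    return "no action required"

team = [
    ReviewHistory("A. Employee", "Manager X", ["meets", "below", "below"]),
    ReviewHistory("B. Employee", "Manager Y", ["meets", "meets", "below"]),
]
for p in team:
    print(f"{p.name} ({p.manager}): {next_step(p)}")
```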
Abhay Singh
Incidentally, you mentioned the word I was going to use to frame my next question: the PIP. Is it still popular out there?
I think opinions are divided on whether it works, because if you really put someone on a PIP, people don't really buy it anymore. Very few organizations actually put in the effort to bring someone back up to an optimal performance level.
What has your experience taught you?
Rachel Kleban
I had this really funny conversation with someone the other day where they said, I don't believe in PIPs. I just think what you need to do is tell people exactly what's expected of them, what they need to do to improve and what the consequence is if they don't do it. And I said, but that's a PIP, right. So I think it's gotten this sort of like, oh, it's a terrible thing, and everybody just does it to check a box. I think that sucks if that's what your PIP is.
A couple of things I think about with a PIP. One is that people get so exhausted by the time they get to a PIP that they don't want to do the PIP, and they say, but I've given them so much feedback. I think it's about intervening sooner, being willing to go to a PIP as a positive thing, to help somebody improve sooner, rather than as a last-ditch effort. I've been thinking a lot about a conversation with somebody in my network, and she gave me this language around micro versus macro feedback, which I thought was really helpful. As a manager, I'm giving you all this micro feedback: you did this wrong, you need to do this better, you need this little thing, this meeting was bad. But I'm never stepping back and saying, these are the expectations of your job and you're not meeting them; here's the macro view of how you're doing. So the PIP becomes the first time you're having that conversation. If you're having those conversations earlier, I think the PIP becomes much less relevant. The reason we need it is because we never said, here's the big picture and here's the consequence. So to the extent that your coaching process, your feedback process, can zoom out and offer that feedback and help people understand the impact, I could live without PIPs for sure. I just don't think managers are particularly good at that yet.
And then the last thing I'd say about a PIP: we often have this narrative that managers shouldn't have to do a PIP if they've lost faith in the employee, if they don't believe the employee will improve. I think that's a product of waiting too long; you let yourself lose faith. I also think we generally shouldn't put people on PIPs who don't want to do them, who have themselves lost faith, maybe in their manager's ability to coach them, maybe in the company, maybe in their fit for the role. So I'm a proponent of what people call a PIP-or-package choice, where we don't force anyone to do a PIP; we give them the opportunity to opt in or opt out with something meaningful. That's how I feel about PIPs.
I mean, I know they're uncool right now, but if you take a step back and ask, what is the purpose of the PIP, truly, in its essence? To help people understand what good looks like and what will happen if they don't get there. There's value in that, in whatever form it takes.
Abhay Singh
I think those are some very relevant tips. And again, it can feel like a vague answer from our perspective, because there's always context. There can always be corner scenarios, someone going through bad health, or ill health of a family member, or something else happens, and people share, oh, but I put this person on a PIP and they came back. And I'm like, you've got to look at the norm to set any sort of practice; you don't build it around an exception. So, very relevant tips: apply them to your own context, see what your situation is like, look at the people you've put on PIPs. Have they actually improved? What is the delta like? If only two people out of 150 improve, then maybe you need to figure out an alternative. And this seems like a practical suggestion, but what I love is how easily we segue into our next question. You mentioned a couple of points, and I was controlling myself when they came up. We talked about surprising messages in performance reviews, the lack of any heads-up, hey, how can I be rated this? I thought I was great. And we talked about the PIP. I sat down with my people science team and bored them to death asking these questions before I hopped on, and they said that to truly reduce these cases in calibrations, managers need to get better at goal setting and continuous feedback. Fact or fiction? And not just that managers need to get better at it, but that organizations need to have systems that allow for that.
Rachel Kleban
So I'd say I generally agree with that. And I think there's a lot of excitement that continuous feedback is going to solve everything and we're never going to have to do a performance review again. I have hesitancy around that. Everything we've talked about today is that calibration is a place for managers to come together, to align, and to build muscle around fairly and accurately assessing talent. Continuous feedback doesn't create space for that. It would be great if everyone was giving feedback all year round, such that at the end of the year, or the end of the six months or whatever, there was some mechanism to come together and say, what does that all add up to?
I think I may be a little old fashioned that I still think that people come to work to perform a job and that as a company, it's okay for us to say, did they perform that job? Right. And so again, it comes back to that little bit of the macro versus micro feedback.
Yes, I agree, but I have a lot of hesitancy about how that actually happens, how we continue to have frameworks for fairness and equity. I think goal setting is a great anchor for performance reviews. I think that's trickier in startup environments where things change a little bit more frequently. You're saying, hey, you set these goals and we're going to assess if you did it, but then your goals change five times and that's okay because we're an agile startup that needs to change. That doesn't feel very fair to an employee. It feels a little bit outside of their control. So I think unless you are really buttoned up in your goal setting process and there's some consistency there, it's not the anchor I would choose.
I do think that a really useful tool that I happen to lean on heavily is like competency models or leveling frameworks that say
for your level, these are the expectations. And I'm going to bring this in every time we have a feedback conversation. Every time I want to give you feedback, I'm going to go back and say: the expectation of you is that you can own a project end to end, and I really saw you do that today with this project. Or: the expectation is that you bring in cross-functional stakeholders, and you failed to do this on that project. So to me, that starts to anchor continuous feedback in a clear and consistent expectation, such that at some point you can zoom out and say, yeah, they really hit all these points, they are meeting expectations, or they are exceeding expectations. So I think continuous feedback is great, I think goal setting is great, all with some structure and some eye towards a bigger picture that allows us to say, here's a data point for making a comp decision or a promotion decision that we feel confident is fair and calibrated and that we can stand behind.
Abhay Singh
Again, some fantastic tips. One aspect I completely resonate with is that some people use continuous feedback very loosely, but through this series of conversations the manager is having, there's that shared space where the competency model sits: this is what is required to excel in your role, this is where you're at, this is what it will take to get you to the next level, these are the roles you can target. All this talk about self-managing career cycles, we are far away from that. I know I spoke about managers and put them on the spot, but as organizations and HR leaders, that's something we are all working towards, and we as a company are trying to work towards it as well. For now it remains a utopian dream.
I spoke about a couple of pet peeves, and I think this is my biggest one: the linkage to compensation. In so many cases I have seen, people and organizations have this model, managers know how it works, and they're just trying to game it together.
In fact, it's so funny. I do not want to name the company, but we were in Southeast Asia last month with a huge, thousand-employee company, and they said, my managers have figured out the way this works, and they are fighting tooth and nail to get to a certain rating because they know that's what this person will then get paid. Feedback? Out of the window. Are they getting better? Out of the window. Everything else is out of the window; they're trying to game that system. How do I deal with that as an HR professional and maintain the sanctity of the process itself, where feedback and fair assessment come first and compensation is an outcome of that?
Rachel Kleban
I think this is a really tricky one and it's human nature, right. We want our managers to support and advocate for their team and they find ways to do it in every little corner. I think to some extent it's an argument for a really well run calibration, right?
If the conversation is very focused and very clear on whether these ratings are an accurate representation of performance, and there's pushback when there's inflation or when people are gaming the system, then if anything, I think that problem is an argument for a very focused and effective calibration. The message I give managers is: your job is to assess performance; our job is to take that as an input towards paying people fairly. I personally don't offer a ton of manager discretion in compensation decision-making, and that's not always popular. But I prefer an algorithmic approach, because if we believe this is a well-calibrated review and rating, we plug that into a formula, and based on maybe some other factors like market position and comp ratio, it tells us exactly where you should be, and that's what you're going to get. The minute you start having managers tinker with it, it's just bias, right? It should all be captured in the review. So I never bring compensation into the calibration. For that reason, I try to be very algorithmic and very clear that your job is to assess performance. And I do think people still try to game the system; when they've got someone who they really worry is going to leave if they don't make more, they're going to push them into that higher rating, you know. That's why we do our best to compare that person's performance against the expectations and really make the case that it's true.
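As a rough illustration of the algorithmic approach Rachel describes, a calibrated rating plus market position feeding a formula with no manager discretion, here is a minimal sketch. The merit matrix, compa-ratio buckets, and percentages are invented for illustration and are not OpenPhone's actual model.

```python
# Illustrative sketch only: calibrated rating + position in the pay band drive
# the merit increase; no manager override anywhere in the path.
MERIT_MATRIX = {
    # (rating, compa-ratio bucket) -> merit increase (hypothetical values)
    ("exceeds", "below_range"): 0.08,
    ("exceeds", "in_range"): 0.05,
    ("exceeds", "above_range"): 0.03,
    ("meets", "below_range"): 0.05,
    ("meets", "in_range"): 0.03,
    ("meets", "above_range"): 0.01,
    ("below", "below_range"): 0.0,
    ("below", "in_range"): 0.0,
    ("below", "above_range"): 0.0,
}

def compa_bucket(salary: float, band_mid: float) -> str:
    """Bucket the compa-ratio (salary vs. band midpoint)."""
    ratio = salary / band_mid
    if ratio < 0.9:
        return "below_range"
    if ratio > 1.1:
        return "above_range"
    return "in_range"

def merit_increase(salary: float, band_mid: float, rating: str) -> float:
    """New salary computed from the calibrated rating and market position alone."""
    pct = MERIT_MATRIX[(rating, compa_bucket(salary, band_mid))]
    return round(salary * (1 + pct), 2)

# Example: a calibrated "exceeds" rating, currently paid below the band midpoint.
print(merit_increase(salary=90_000, band_mid=105_000, rating="exceeds"))  # 97200.0
```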
Abhay Singh
Awesome. Thank you so much, Rachel, for taking out the time. Thank you to everyone who listened this far and survived my questions. I'll include the Rachel blog post I mentioned in the show notes below. You can find Rachel on LinkedIn and contact her with any question I was unable to ask her. I'll see you in the next episode. Thank you and have a good day.