
Effectively Mitigate Bias in Performance Reviews with Just-in-Time Training

39 minutes
October 28, 2024

(0:07) Hello. All right, Randall, Jennifer, Janine. Hey, everybody.

Welcome to our session today. (0:24) We’re here with Shane Lloyd, who’s the Chief Diversity Officer at Baker Tilly. Shane, (0:30) I’m so delighted that you could join us today.

Welcome. (0:34) Thanks for having me. It’s great to partner with all of you and to share some of the insights that (0:38) we have gathered together.

So before we jump into it, Shane, I love working with you. I think you’re (0:45) an extraordinary leader. I want our audience to get to know you a little bit.

So I’m just going (0:53) to start. You graduated from Brown with a master’s in public health. Is that right? And then you went (1:02) on to work in diversity consulting roles.

You’ve worked at Amazon in diversity. And here you (1:11) are now at Baker Tilly doing amazing, amazing work. Can you tell us, you know, (1:18) a little bit about your experience coming up through DEI, and, you know, what led you to (1:27) become a leader and advocate in diversity research at Baker Tilly? (1:33) Sure.

So oftentimes people will look at my resume and think, wow, he’s lived many lives given some (1:39) of the industries that he’s spanned. And I would say that the connecting thread between all of my (1:45) experiences is actually some of the skills that I’ve developed in my public health degree program. (1:51) So within public health, we tend to apply a systems thinking lens that helps us consider (1:57) what are some of the circumstances or factors that help or hinder people’s ability to obtain (2:02) their optimal health status.

Now, if you substitute optimal health status for academic thriving and (2:10) cultural thriving for students from underrepresented backgrounds at elite institutions, (2:15) it’s the same systemic lens, different goal. If you look at it from the perspective of the work I did as a (2:20) DEI consultant, or even the work I’m doing here now at Baker Tilly, or the work that I did (2:24) at Amazon, it’s really looking at the overall systemic picture of an organization and what (2:29) helps or hinders people’s ability to thrive across a variety of diversity dimensions. So in many (2:35) instances, as I continue to go through different organizations, although I’m not leaving Baker (2:40) Tilly, so I hope my Baker Tilly colleagues don’t hear that.

Next up. No, that’s not what I mean. (2:47) Just across different industries, I’ve been applying that systemic lens, but also trying to figure (2:53) out how we do this in corporate America. Amazon was also a research-heavy organization, (2:59) even though it’s large and corporate.

But I think that in order to do diversity, equity, and (3:05) inclusion work effectively, we cannot just leverage the best and leading practices (3:10) that have been tested out in certain companies or in different environments. We also have to follow (3:16) up with our own internal studies to ask: does this leading practice work effectively within (3:21) my organization’s context, within our population and within the policies, protocols, procedures, (3:27) and culture as we have them? And I think that added layer of research exploration becomes really (3:37) important to ensuring that when we talk about the value add of some of the programs and (3:42) interventions that we deploy under diversity, equity, and inclusion within Baker Tilly, we can (3:48) also back it up with our own research and not just, you know, what’s said by (3:53) McKinsey or Deloitte or EY, all amazing organizations, or Lean In or Catalyst. There’s a lot of really (4:00) good research out there, and we still want to stress test it within our organizational context.

(4:04) So that’s the thread that pulls things together. You know, that’s super important, because (4:11) there was a lot of enthusiasm a couple of years ago for DEI, and I think people just (4:17) threw everything at it. They just, you know, threw spaghetti at the wall, and now we’re going (4:23) through an era where there is a lot of pushback. And there are two approaches that I think (4:32) you’re taking here with us.

We’re partnering, so for those who don’t know, (4:38) Emtrain is partnering with Shane and Baker Tilly to test bias interrupters training in the (4:46) performance management process, and we’ll get to that in a moment. But what’s really important (4:51) here is that we’re taking a very scientific approach. So we’ve got a doctoral candidate doing (5:00) studies and using a very traditional research protocol to test the effectiveness of this type (5:09) of training.

So we will talk about, you know, how we had samples and we’re doing comparisons, and, (5:15) you know, it’s a pretty heavy lift. Now I’m not saying that everybody has to do that, but (5:20) I think your point there, Shane, is that you can’t just take a one-size-fits-all approach; you’re (5:28) probably going to get a lot more pushback, you know, from organizations when you do that. (5:37) Stress testing it and taking the approach where, you know, we’re seeing how managers react, how (5:45) cross-functional leaders react.

I think that that is probably the best, the leading practice, but (5:54) I think it is also hard for organizations to do. What do you think, what are the conditions (6:02) at Baker Tilly that even enable us to come in and do something of this complexity? You know, (6:10) what’s special? What’s different? Oh, sure. I will answer.

There’s a thought that your statement (6:17) just sparked, and I’ll make sure to answer your question as well. So I think one of the additional (6:21) advantages of a research mindset is that it also allows us to better identify what I would call (6:30) the curb-cut phenomenon insights. By curb-cut phenomenon, I mean you can have a targeted (6:38) intervention designed for a particular community, but as you take this, you know, 30,000-foot view (6:44) from a research perspective, you might find some additional benefits, where this targeted (6:51) intervention actually works in a variety of different contexts for a lot of different (6:55) communities, allowing everyone to see the benefit, even if the initial exercise was to either address (7:02) inequity or to target a particular community.

I think one great example is, you know, the common (7:09) one that we talk about: curb cuts in the street. You know, they’re not just designed for people in (7:13) wheelchairs; they also work for people pushing strollers, riding bikes, or dealing with temporary injuries. So (7:17) when we talk about DEI interventions, how do we understand the broader application (7:22) that would actually support everybody thriving in the workplace? And to your question around what (7:27) makes Baker Tilly special, I would say that part of it is that within (7:33) our value system as a company, we have belonging as one of the five values within our company.

So (7:39) it’s very, very high ranking, and we take it very seriously. From entry-level employees all the way up to (7:44) executives, we have to live up to that standard. And then I would say the other part that’s (7:49) particularly important is that one of our other values is collaboration.

So when we think about even this (7:55) project, it is a very cross-functional project spanning different teams within Baker (8:01) Tilly, and all of us are thinking about how to structure it effectively so that we can (8:05) learn the information we need to better unleash and amplify our talent across all levels. (8:13) Okay, awesome. And the curb-cut effect is technically, I think they call it universal (8:19) design for learning.

Yes. UDL, universal design for learning. John Powell, yes.

(8:26) I hope it’s okay to give a shout out to the job you just posted, by the way, (8:32) for anybody who loves DEI and is operationally focused and research-minded: Shane is hiring a senior manager. (8:41) So heads up, look for that. Shane, where can they find it? On LinkedIn, or where is that? (8:46) Yes, it’s on LinkedIn.

It’s on our careers page. If anyone has any questions about where to find it, (8:55) reach out to me on LinkedIn. And then before we dive into the results, just one more thing.

You (8:59) also commissioned or supported this study with Seramount. Do you want to just briefly (9:06) talk about the results of that before we jump in? Because I think there are a lot of folks who (9:11) are afraid to go full force with DEI right now. Oh no, thank you so much for providing the (9:17) opportunity to bring some awareness and attention to that work.

So with the study that Dr. Pereira is (9:24) referring to, Baker Tilly was the lead sponsor on a study with Seramount (Diversity Best (9:29) Practices) looking at diversity, equity, and inclusion backlash and what employees think. (9:35) Part of what made that study particularly enticing is that when we think of the national conversation (9:40) around anti-DEI backlash, there is a heavy emphasis on the perspective of politicians. (9:47) People have definitely heard about the litigation.

There’s also both mainstream media interest and (9:53) social media influencers weighing in and offering their perspective. And this (9:58) particular study centered some of the most important of the many stakeholders that (10:02) matter to an organization: our (10:08) internal employees.

So with a sample size of over 3,000 across different (10:14) industries: how do employees within our organizations think about the diversity, (10:19) equity, and inclusion strategies that are implemented? How do they rate, you know, (10:24) the performance of their leaders today in 2024 compared to 2020, 2021? And where do they see (10:30) the work progressing? And, frankly, despite what’s conveyed in the media, (10:37) most US workers are actually very supportive of diversity, equity, and inclusion, and remain committed (10:43) to fostering inclusion and combating racism. And then there are also some interesting nuances (10:50) in how employees are thinking about inclusion. Is there a standard (10:55) definition within organizations, or a standard set of metrics? Or are employees (11:00) thinking about inclusion in the context of, you know, what’s (11:05) in it for me, which may not be as expansive as how people might’ve been talking about it in 2020, (11:10) 2021, 2023? And then also, when it comes to organizations that are (11:17) trying to balance a commitment to inclusion and belonging with increasing pressures to (11:23) speak up or engage on societal issues, this research offers a set of recommendations (11:31) for companies to figure out how to navigate some of those politically sensitive (11:39) dynamics that also create belonging and inclusion challenges for employees and could challenge a (11:47) business’s ability to operate.

So I highly encourage people to check out that research (11:53) because not only does it say DEI is still important to employees, which is important, (11:57) it also weighs in on some of the more complex and murky situations. So that way, (12:01) organizations can find their footing. Awesome.

Thank you very much. And we’re also releasing (12:07) our culture report today. We have a couple of hundred thousand (12:14) data points across the country, perhaps almost half a million, saying the same thing.

People think (12:19) DEI is really important. So check out the Emtrain culture survey available today. Okay.

Let’s jump (12:26) into the findings, y’all. Nick, show us the findings, please. All right.

Let’s start here. (12:35) Let’s go to the next slide. All right.

All right. So the writing is a little small, (12:44) y’all. So let’s talk about it.

What we wanted to do was use evidence-based (12:51) bias mitigation techniques in the performance review process to increase diversity among partner- (12:57) track-eligible employees. That’s a lot of words. Managers were given just-in-time bias (13:04) interrupters training before the performance review, and we designed it to be industry (13:09) specific.

So that’s the first point. Shane, I think when we went in there and the teams did the (13:19) training, they loved it. They found it very relevant.

Do you want to, can you say a little (13:25) bit about the feedback, the initial feedback we’ve gotten so far? Sure. So first I’ll say that (13:32) we were actually quite surprised by the initial feedback, because when you think of the conversation (13:36) around bias training, or even anti-DEI backlash, there is oftentimes the question of: are these (13:43) particular kinds of training still useful? Are they, you know, widely popular and (13:50) commonly prescribed, but mixed in their results and execution? And we’re also dealing with people who, (13:56) you know, in the realm of professional services, are, you know, strapped for time. So we were (14:00) actually quite delighted and surprised by the enthusiasm. But here’s why the participants were (14:06) particularly enthusiastic.

In our framing of the concept of bias, we actually leveraged the (14:16) descriptors provided by Joan Williams, a legal scholar who does a lot of work around (14:23) creating systemic inclusion within organizations and thinking about the appropriate (14:28) interventions to create equitable outcomes. She boils bias down into some pretty key (14:38) categories that are essentially identity agnostic, but that can help surface when those (14:49) bias dynamics are deployed disproportionately against people of different identity markers. (14:53) As an example, one of the bias categories she discusses is this idea of prove-it-again bias, (14:58) where despite your time, tenure, or proven track record of success, you’re still finding that (15:04) people are either surprised by your accomplishments and achievements, or still asking you to do more.

(15:09) And that can be deployed commonly against women, caregivers, working moms, and specifically people (15:16) who are the first in their families to enter a profession, or first in their families to have (15:20) gone to college, or people of color. So when we used that language of the prove-it-again bias and how (15:25) it could be deployed inconsistently across different demographic groups, (15:32) that really, really resonated with the participants, as did this concept of (15:38) the tightrope: how do you manage being seen as effective and assertive and authoritative (15:44) while working effectively with groups, without being hit with, you know, bossy or other (15:51) unflattering language? And how do you manage some of that professional identity management, (15:56) which could be deployed across a variety of different categories of diversity and difference? (16:02) And then I also think of tug of war being a very powerful phenomenon, whereby you would have (16:09) junior professionals seeking mentorship from more senior professionals and thinking that the (16:17) indicator of connection would be affinity, whether it’s shared gender, shared race, (16:22) shared geography, or, you know, shared socioeconomic background, or (16:28) being a part of a particular religious or faith community.

And that did not always work out. (16:32) They found that it could end up being a source of, you know, competition and challenge. So (16:38) looking at these broad categories, not just, you know, racial or gender biases alone, while (16:44) also looking at those, provided the participants with a degree of language that (16:50) they hadn’t had before.

And in thinking about it that way, they could actually (16:57) think of a number of examples where these concepts came to life. And now that they (17:02) had the language, they could ask: what are the ways I can further inspect to make sure that I am (17:08) noticing, recognizing, addressing, and course correcting within the context of how I run my (17:13) own performance management process? Yeah, let me just double tap on that. So (17:18) Joan describes five different types of bias that, to Shane’s point, (17:27) aren’t political and aren’t tied to your race or anything. Take prove-it-again bias.

Did you ever (17:33) have to prove yourself over and over again to get the same level of recognition as your peers? (17:40) You know, it’s that type of thing. With these five different types of bias, (17:45) we gave people something to really noodle on, and I think they really appreciated that. (17:52) Okay, let’s go to the next slide, please.

And we’re running short on time, so we’re going to have (17:59) to jump through these pretty quickly. This one just covers the cross-functional (18:05) team.

There were the executive sponsors; we had People Solutions trainers; we had, you know, (18:13) a director of the team on our core team; we had a senior manager of performance management (18:19) as a systems advisor; we had a legal advisor; we had our research team, including a doctoral (18:25) student; and then we had Emtrain’s client success team. So it was no small team, even for the pilot. (18:31) And Shane, I guess this was a testament to, and a test of, collaboration. And it was just (18:39) so smooth and easy.

And I don’t know if that would always be the case, but I really enjoyed it. (18:47) So let’s go to the next one, please. And this is a timeline; we don’t want to get into (18:56) the details too much here.

Just to say that we started onboarding in May. (19:03) And then, you know, here we are in October, and we’re getting ready to present the (19:10) final packet of the study. But most of the training took place (19:19) within a very short period of time; it was June and July.

So it doesn’t have to be a ton of time (19:27) to do something. A lot of thought went into cohort development upfront. And again, I don’t (19:36) want to spend too much time there.

I’m sorry, Shane, I’m just going to jump ahead. Okay, awesome. And so we’ll (19:44) go to the next slide.

So the research design was simple. We took the leaders and broke them into (19:50) three conditions.

In the first condition, the leaders got no training at all. (19:58) In the second condition, they got eLearning only. And in the third condition, we did eLearning plus a one- (20:04) hour instructor-led training (ILT).
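To make this three-arm comparison concrete, here is a minimal, hypothetical sketch (not the study's actual analysis code) of how mean ratings across conditions like these could be compared with a one-way ANOVA. All scores, group sizes, and names below are invented for illustration.

```python
# Hypothetical sketch: comparing mean performance ratings across the three
# training conditions described above. All data are invented for
# illustration; this is not the Baker Tilly / Emtrain study's analysis code.
from scipy import stats

# Ratings (e.g., on a 1-5 scale) for direct reports, grouped by the
# training condition their leader was assigned to.
no_training = [3.2, 3.5, 3.1, 3.8, 3.4]
elearning_only = [3.4, 3.6, 3.3, 3.7, 3.5]
elearning_plus_ilt = [3.6, 3.9, 3.5, 3.8, 3.7]

# A one-way ANOVA asks whether the three condition means differ more than
# chance would predict; a small p-value suggests a real condition effect.
f_stat, p_value = stats.f_oneway(no_training, elearning_only, elearning_plus_ilt)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```

With a small pilot like the one described here, a comparison of this kind can easily come out non-significant even when the means differ, which is consistent with how the results are characterized below.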

We did that right before the (20:11) performance review process, I think a week or just a couple of weeks before the reviews were due. (20:19) And then we had them conduct the performance review process as usual. And

And (20:27) then we surveyed their direct reports on their experiences of bias. So Shane, I do want to pause (20:34) here and talk about embedding this training right before performance reviews and how I think it’s (20:43) important for DEI leaders to be in the operations of things. Absolutely.

So I think that oftentimes, when it comes to, (20:51) well, I’ll start here. I think it’s not uncommon for DEI interventions to (20:57) sometimes be deployed in a manner that’s bolted on, where you’re trying to just latch (21:02) something on in a way that, it’s going to sound a little blunt, is intrusive and doesn’t (21:06) fit with the natural process. And that was not the case with this particular deployment, which I think (21:11) is vitally important.

The extent to which we can piggyback off of some of the operational cadences (21:16) of the organization, such that the intervention seems like a natural aspect of what is getting (21:23) done, becomes really important. In this instance, what was really helpful is that whether (21:30) people had been thoughtfully working through the whole (21:37) performance review cycle, really thinking about the evidence they were going to put (21:41) together, or were kind of dashing through the process because they got the HR reminders, we (21:47) were able to catch a broad swath of people. And when they received the intervention, (21:54) the emphasis on just-in-time was particularly important.

It was right around (21:58) when people are thinking, okay, now that we’re almost at the end of finalizing these results, (22:04) how can I do my best to make sure that this is a fruitful process for all parties involved? (22:09) And I would also emphasize the opportunity to talk to our team members about the experience: (22:15) having that survey is really useful, because then we get additional insight around, you know, (22:21) how did people experience it? What kinds of questions do they have? What level (22:26) of confidence might they have to even challenge some of the insights that are shared, to ensure that (22:31) they get a fair shake within the process? Those are all very, very useful, because we know that, (22:37) depending on the size and scale of an organization, the performance (22:40) management process is coming and it’s experienced, but we may not always be consistent (22:45) about finding out how our employees really feel about it or what insights they gained (22:52) from the experience. (22:56) Yeah, this is not for the faint of heart, y’all. It takes a little bit. It’s amazing, because (23:05) it’s not easy to face these responses, and yet to make traction you really do have to (23:15) open up a little bit of a can of worms.

Okay, so direct reports’ performance scores by leader study (23:21) condition. So basically, we trained the leaders to mitigate bias, and we wanted to know (23:27) what the effect was on the direct reports’ scores. What we found was that direct reports received (23:37) slightly higher performance ratings overall when their leaders participated in the bias interrupters (23:44) training, compared to the scores of the direct reports whose leaders were not trained.

But (23:50) the pilot conditions were small, so this was not a statistically significant (23:56) difference. The numbers are still there, though, right? I would still want that slightly higher bump. (24:01) I would want my leader to be in that training because I would want a slightly higher score.

So (24:07) this was, I think, the curb-cut effect that Shane was talking about, where it could potentially (24:13) raise all boats. But the story gets a little bit more interesting. So let’s go over and look (24:20) at the distribution. Now we’re looking at the direct reports’ performance scores by training (24:27) intervention, meaning, you know, the three interventions: (24:32) e-learning plus workshop, e-learning only, and no training. And you can see these bars are pretty (24:39) much the same.

So when you look at it by the intervention, we saw with this very small group (24:47) that there were really no changes in the average performance review score. Now, where did (24:56) we see the biggest difference? Let’s come here. So this is what was really (25:04) interesting to me.

So if you look at these scores, we (25:12) did this by gender here. In the left-hand column, you have the males; in the right-hand (25:17) column, you have the females; and then you look at the conditions. We found that, in general, (25:25) in the bottom two conditions, the e-learning and no training, the men typically had (25:30) higher scores than the women.

But when we added on that workshop, now you have the women having (25:39) higher scores than the men. And there was a statistically significant difference here. And (25:47) to me, that says a number of things.
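For readers who want to see what testing a gender-by-condition pattern like this looks like, here is a minimal, hypothetical sketch using a two-way ANOVA with an interaction term. The data frame, column names, and values are all invented for illustration; this is not the study's actual protocol, variables, or results.

```python
# Hypothetical sketch: testing whether the effect of the training condition
# on ratings differs by gender (the "interaction"). Data are invented;
# column names are illustrative, not the study's real variables.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "rating":    [3.4, 3.2, 3.5, 3.1, 3.9, 3.6, 3.3, 3.0, 3.8, 3.7, 3.2, 3.1],
    "gender":    ["F", "F", "M", "M", "F", "F", "M", "M", "F", "F", "M", "M"],
    "condition": ["none", "none", "none", "none",
                  "elearning", "elearning", "elearning", "elearning",
                  "workshop", "workshop", "workshop", "workshop"],
})

# The C(condition):C(gender) term is the interaction: a significant result
# means the workshop's effect on ratings differs for women versus men.
model = smf.ols("rating ~ C(condition) * C(gender)", data=df).fit()
print(anova_lm(model, typ=2))
```

An interaction test like this is the natural way to formalize "the men scored higher in two conditions, but the women scored higher once the workshop was added."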

But let me pause and let Shane (25:55) comment on this finding. What are your thoughts, Shane? Yeah, so I think that there are a couple of (26:00) things that emerged. One, a greater sensitivity on the part of the managers to double-check and (26:06) inspect their commentary in the performance rating process.

What we also know on the back end, too, (26:13) is that the receivers of performance feedback were also more rigorous, in the sense of: does this (26:21) insight have evidence behind it? Is this actually so succinct, generic, and nice (26:30) that it’s not even helpful or effective? Even if I’m being rated well and told that I’m a (26:36) strong performer, I still want evidence, not just to know that I’m doing well (26:43) directionally, but to get concrete insights that can enhance my performance. So I think of it in a couple of ways: (26:49) for those who need to do some course correcting to make sure that they are not having a biased impact, (26:55) the rigor is being addressed. But for people who thought, oh, I can breeze through this performance (26:59) cycle because they’re a solid performer, they’re doing really well, I’m just going to, you know, put a (27:04) bow tie on this and get it through the finish line.

That was not satisfying to our team members (27:10) either, in the sense that all parties wanted really thoughtful, evidence-backed feedback, (27:17) and they did not want to be caught in, you know, either conflict avoidance or, not to make (27:25) fun of people from the Midwest, because there are a lot of nice people in professional services, (27:28) but the Midwest-nice phenomenon. (27:32) Right, right. Yeah, I think that as we looked at some of the other scores, we noticed that, (27:38) in general, for the leaders who received training, the trend was that (27:47) the variation in the scores wasn’t as great.

(27:55) They were a little more conservative in general, but it was notable that (28:03) in this condition the women were getting higher scores. And again, this was the pilot; (28:11) we’re going to be running the main study next year, and we’d like to validate some of (28:18) these trends there. But that is what we were going for, right? The purpose was to (28:23) increase the performance ratings and the visibility of women on the (28:30) partner track.

So, so far, promising. Question: did the employees know who received training? (28:40) I don’t recall us telling them, Shane. No, because regardless of the (28:46) quote-unquote treatment group, all of the employees got a survey on their experience (28:52) in the performance process.

So no, they were not aware. Okay, so let’s go and look at (29:00) the results of that. In the pilot study, the direct reports of leaders who were trained (29:10) reported experiencing more bias than those whose leaders were not trained.

(29:16) And let me just say that this goes to Shane’s point: this was (29:26) also statistically significant. And I actually don’t have a full explanation; I’m still processing (29:34) this. It’s somewhat new, and it could be, as Shane is saying, that there was more discussion.

(29:39) They are more critical, you know, but we’re definitely going to test that a (29:47) little bit more and perhaps, you know, have some additional findings to round that out (29:53) next time. Okay. Brittany asks: were you able to look at the time spent on the review process (30:00) to potentially get a sense of how much deliberation was happening? Thinking of how biases thrive in (30:05) fast environments, and wondering if the training encouraged folks to move slower.

(30:11) So Brittany, we gave the leaders a worksheet to look through. We gave them (30:20) the five types of biases, the definitions of them, and some pointers on what each might (30:27) look like. And then, in the worksheet that they were supposed to use in conjunction with the (30:34) performance process, they were to look at their feedback and ask: which of the five might (30:40) my feedback reflect? Prove-it- (30:47) again bias, tug-of-war bias, maternal wall bias, and so on and so forth, or none.
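As a minimal illustration of the kind of self-check worksheet being described, here is a hypothetical sketch in code form. The five category names follow the Joan Williams bias types discussed earlier, but the data structure, function name, and example feedback are invented, not part of the actual worksheet.

```python
# Hypothetical sketch of the self-monitoring worksheet described above:
# for each line of draft feedback, the reviewer tags which (if any) of the
# five bias patterns it might reflect. Names and examples are invented;
# the fifth category label is an assumption based on Williams's framework.
BIAS_TYPES = [
    "prove-it-again",     # achievements repeatedly questioned or re-verified
    "tightrope",          # assertiveness read as "bossy" / unflattering labels
    "tug-of-war",         # assumed affinity turning into competition
    "maternal-wall",      # assumptions tied to caregiving or parenthood
    "racial-stereotypes",
]

def review_feedback(feedback: str, tags: list[str]) -> dict:
    """Record a draft feedback line with the bias types it might reflect."""
    unknown = [t for t in tags if t not in BIAS_TYPES]
    if unknown:
        raise ValueError(f"Not one of the five categories: {unknown}")
    return {"feedback": feedback, "possible_bias": tags or ["none"]}

# Example usage: the reviewer pauses and self-tags before finalizing.
entry = review_feedback(
    "Needs to keep demonstrating she can lead large engagements.",
    tags=["prove-it-again"],
)
print(entry)
```

The point of a structure like this is simply to force a pause: each piece of feedback gets checked against the category list before it goes into the review.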

And so there we (30:55) were trying to slow them down and have them reflect on their feedback. Now, what we didn’t do was (31:02) bounce it back to them. So we didn’t look at their feedback and then send it back.

If it reflected bias, we did not, (31:08) none of that; but we did ask them to self-monitor. (31:19) And what I’ll also say, too, because I can imagine people thinking, wait a minute, (31:24) is bias really happening? What we also have to understand is that while people might (31:30) be suspicious of bias taking place, that doesn’t necessarily mean bias was in fact taking place. (31:37) But I think there was a heightened awareness of, wait a minute, let’s really think back about it.

(31:42) And now that I have the opportunity to talk about it, because someone’s surveying my thoughts about (31:46) the experience, it also gives us insight. If we ask our team members about (31:53) it and they raise this concern, how do we make sure that people understand, across the board, (31:58) what managers at the individual level and the firm at a systemic level (32:02) are doing to address those concerns, whether they’re real or perceived? (32:09) Yeah. And again, I mean, you can’t do anything about something (32:15) that is not visible, and ignoring it is not the right answer.

So, the good news in summary: (32:23) just-in-time training on how to identify bias can mitigate bias in performance ratings. (32:31) And, contrary to what we would have thought, they were very eager to learn. They asked to (32:36) learn more.

So it really did indicate a desire to learn more. And these folks, (32:42) I want to add, are high-level billable partners in a professional services firm. So if (32:49) they can take the time, chances are most anybody can take the time.

Calibration training. So we (32:57) also got a few future directions that we may act on. We may move to calibration (33:03) training before the partners do the calibration.

We’re also talking about looking (33:11) a little more closely at their rating system and trying to break that down a little bit more. (33:18) But unfortunately, I think we’re out of time. Shane, 30 minutes is just not enough.

(33:24) You know, there’s so much to talk about. Do you have any thoughts that you’d like to leave folks (33:31) with in closing? Absolutely. I would definitely encourage anybody on this call to really think (33:36) about the opportunity to do some of these experiments to find out, (33:42) you know, are these trainings having the impact that you imagined? And what is it that you can (33:47) learn? I think one of the most powerful things that we learned from this exercise is that our team (33:53) members are very curious to have their experience of the overall performance management (33:58) process solicited.

So not just, maybe, with their, you know, HR business partner, or (34:03) with their manager in a one-on-one conversation; but if there are some intervals where, as cohorts, (34:09) people have the opportunity to provide insight into their experience, that could also help us (34:15) understand where we might have really thoughtful, equitable processes that are not perceived as (34:21) equitable for whatever reason. And then, how do we close that perception gap? All right. And yeah, (34:27) thank you so much, Shane.

Everybody go boldly into the world of DEI and test. It seems like, (34:34) you know, the climate is there, in spite of what folks may believe from the hype. And we look (34:40) forward to seeing you again soon.

Thank you so much, Shane. Thank you, everyone. All right.


Bias can be a sneaky beast in performance reviews. Ever feel like decisions are based more on assumptions than actual merit? You’re not alone. Many organizations struggle to create fair evaluation processes, leading to disengagement and mistrust. Drawing on a compelling case study demonstrating the effectiveness of bias-reduction training, this session with Shane Lloyd, Chief Diversity Officer at Baker Tilly, highlights how targeted programs help managers promote fairness by fostering more intentional and systematic evaluations.

Recent headlines have spotlighted Amazon’s employee evaluation practices after a former employee’s lawsuit raised concerns about bias in the company’s performance reviews and promotion decisions. This story echoes an issue across industries—how unconscious biases can silently influence decisions, affecting career paths and organizational morale.

Imagine a manager sitting down to evaluate their team, but unconscious biases cloud their judgment. This oversight can mean overlooking talented individuals simply because of who they are or where they come from. It’s not just unfair; it’s detrimental to business. When employees sense that evaluations are swayed by factors other than performance, it can lead to disillusionment and even loss of talent.

The Solution? Targeted, Just-In-Time Bias-Reduction Training

Addressing bias effectively calls for just-in-time training, delivered right before performance reviews or critical evaluation periods. By implementing systematic and intentional evaluations, organizations promote equity and encourage managers to focus on merit. A “just-in-time” approach helps managers recognize and challenge their biases, particularly when training is tailored to their industry’s specific challenges.

Combined with eLearning modules and brief facilitated sessions, these programs empower managers to foster fairer evaluations and build trust within their teams. This approach promotes long-term behavioral change by reinforcing bias-awareness practices in real time.

The Business Case for Mitigating Bias

By proactively mitigating bias in performance reviews, organizations can protect their reputation, promote employee loyalty, and reduce turnover. Investing in bias-reduction training isn’t just a matter of ethics; it’s a business imperative. When employees trust that their contributions will be assessed fairly, they’re more likely to be engaged, productive, and loyal.

As organizations look to close out the year and reflect on what’s ahead, now is an ideal time to make meaningful changes. Just-in-time bias-reduction training can be a powerful tool to align performance evaluations with merit and create a more inclusive workplace where everyone has the opportunity to succeed.
