What Shape is Talent?

I’ve been thinking about talent in the U.S. military. Talent management is impossible without talent evaluation, and for most organizations that means performance evaluation. Much as I would like to point to the science of performance management, my research in recent years suggests there is more art than science at this stage, certainly in the private sector. Although General Electric is famous as a leadership factory that successfully uses forced ranking to identify its top (and bottom) people, that model has proven disastrous at Microsoft and possibly at Yahoo! In sharp contrast, the new fad is for firms to eschew performance evaluations entirely, a model Adobe announced to great fanfare last year. With such wild variance in practice, it is impossible to recommend “best practices” to the Pentagon or any of the services. It is more than possible, however, to recommend pragmatic, well-grounded principles.

The very best performance evaluation system may, in fact, be deployed by the U.S. military right now. Because the services are free to operate their own performance evaluation (PE) models, there are four (or five, if we count the Coast Guard) different models in operation. Based on what I’ve seen, two are world class.

Let’s focus here on what the real world tells us about the shape of the distribution of talent. A forced-ranking method assumes that units have a “normal,” or bell-shaped, talent distribution. To be effective, a forced-ranking system should give raters buckets sized to match the shape of their team’s actual performance distribution. For example, if a commander has 10% of his people clearly in the bottom but is required to identify 20% in the bottom, everyone loses: the team, the boss, and the larger organization. If those numbers are reversed (20% are genuinely weak, but the bucket holds only 10%), the outcome is also destructive to the higher goal of managing people effectively. The moral of this story is that an effective PE system should have the flexibility to let the rater fit his evaluations to the shape of his talent.
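To make the arithmetic of that mismatch concrete, here is a toy sketch in Python; the unit size and percentages are illustrative assumptions, not data from any real unit.

```python
# Toy illustration of the bucket-mismatch problem described above.
# Assumptions: a 50-person unit in which 10% truly belong in the bottom
# bucket, while the forced-ranking quota demands that 20% be placed there.
unit_size = 50
truly_bottom = round(0.10 * unit_size)   # 5 people genuinely underperforming
forced_bottom = round(0.20 * unit_size)  # 10 people the quota demands

mislabeled = max(forced_bottom - truly_bottom, 0)
print(f"{mislabeled} solid performers get mislabeled as bottom performers")
```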

Imagine that talent is not normally distributed. A provocative academic paper published in 2012 (O’Boyle and Aguinis)1 found that talent follows something closer to a Paretian, i.e. pyramid-shaped, distribution than a normal one. They identified a consistent “superstar” phenomenon in five different fields, from professional basketball to academic research. This is an important and basically valid point, yet a careful examination of the data reveals that the O’Boyle-Aguinis case is overstated.
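To see what the “superstar” pattern looks like next to a bell curve, here is a small simulation sketch; the distributions and parameters are illustrative assumptions, not the data from the O’Boyle-Aguinis study.

```python
import numpy as np

# Illustrative simulation only: compare how much of total output the top 10%
# of performers account for under a normal versus a heavy-tailed (Paretian)
# distribution.  All parameters are assumptions, not the paper's data.
rng = np.random.default_rng(seed=42)

normal_scores = rng.normal(loc=100, scale=15, size=1000)
normal_scores = np.clip(normal_scores, 0, None)          # no negative output
pareto_scores = (rng.pareto(a=1.5, size=1000) + 1) * 10  # long right tail

for label, scores in (("normal", normal_scores), ("Paretian", pareto_scores)):
    top_decile = np.sort(scores)[-100:]
    share = top_decile.sum() / scores.sum()
    print(f"{label:>8}: top 10% of performers produce {share:.0%} of output")
```

Under the heavy-tailed draw, a handful of “superstars” account for a far larger share of total output than the bell curve would ever allow.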

Talent in the NBA

Consider, by way of explanation, the performance of pro basketball players. As a random case study, I pulled all player statistics for the 1981-82 NBA season, a total of 373 player records including Dr. J and Moses Malone. Larry Bird and Magic Johnson were in their early prime, and neither was among the top 10 scorers. A histogram of total points scored by each player reveals a distinct Paretian distribution (Figure 1).

[Figure 1: Histogram of total points scored per player, 1981-82 NBA season]

Three players were in a class by themselves that season. George Gervin set the maximum at 2551, followed closely by Moses Malone and Adrian Dantley. Breaking total points into 20 bins, each spanning 5% of the 0-2551 range, you can see that only two other players fell within the top-quartile bins. However, the problem is that total points is not actually how coaches assess basketball players, right? We know that other factors matter just as much, even if we focus only on offense: assists, rebounds. But it turns out that every quantifiable aspect of talent also follows a Paretian distribution. Consider total rebounds (Figure 2).

[Figure 2: Histogram of total rebounds per player, 1981-82 NBA season]

Most NBA players grab no more than about a third of the rebounds that the superstars do, and a full quarter of players grab roughly a tenth of the max or less. That’s Paretian. A slam-dunk for the O’Boyle-Aguinis thesis, if you will.
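For readers who want to see the bin breakdown concretely, here is a minimal sketch of the 20-bin procedure used for the two histograms above; the season totals listed are placeholders, not the actual 1981-82 records.

```python
# Minimal sketch of the 20-bin breakdown (each bin spans 5% of the 0-to-max
# range) applied to season totals such as points or rebounds.
# The values below are placeholders, not the actual 1981-82 data.
season_totals = [2551, 2400, 2300, 1900, 1100, 900, 450, 300, 120, 40, 5]

max_total = max(season_totals)
bin_width = max_total / 20

counts = [0] * 20
for total in season_totals:
    index = min(int(total / bin_width), 19)  # clamp the league leader into the last bin
    counts[index] += 1

for i, count in enumerate(counts):
    low, high = i * bin_width, (i + 1) * bin_width
    print(f"{low:6.0f} - {high:6.0f}: {count} player(s)")
```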

However, this is not how coaches evaluate talent. Rather, coaches consider how efficient players are, in terms of shooting percentage, and they are also sensitive to how much performance is achieved per minute or per game. Is it fair to label someone a bottom talent if he only gets to play one or two games in an 82-game season? No, and if we convert the above data into output per minute, we see a very different distribution: a normal one (Figure 3).

[Figure 3: Histogram of points per minute, 1981-82 NBA season]

That should give HR executives some relief that their models are not completely miscalibrated. But a look at rebounds per minute should still concern us, because it is not normal at all.

[Figure 4: Histogram of rebounds per minute, 1981-82 NBA season]
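If it helps, here is a minimal sketch of the per-minute conversion, assuming each record holds a player’s season totals and minutes played; the values are placeholders, not the real box scores.

```python
# Sketch of the per-minute conversion discussed above.  Each record holds a
# player's season totals; the values are placeholders, not real 1981-82 data.
players = [
    {"name": "Player A", "points": 2551, "rebounds": 458, "minutes": 2817},
    {"name": "Player B", "points": 1221, "rebounds": 610, "minutes": 2100},
    {"name": "Player C", "points": 90,   "rebounds": 35,  "minutes": 160},
]

for p in players:
    p["pts_per_min"] = p["points"] / p["minutes"]
    p["reb_per_min"] = p["rebounds"] / p["minutes"]
    print(f'{p["name"]}: {p["pts_per_min"]:.2f} pts/min, '
          f'{p["reb_per_min"]:.2f} reb/min')
```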

A performance evaluation system that uses any fixed set of bins will not fairly assess NBA rebounders. Keep in mind that this is over a fairly large population of players. Most organizational units look more like teams, with maybe a dozen members. There’s no way a typical team will be able to (1) identify the right talent to measure, then (2) measure that talent correctly, and finally (3) rate people fairly according to any neat distribution.

Take the U.S. Army, for example, which uses a three-bin rating: Above Center of Mass, Center of Mass, and Below Center of Mass. De facto, only the top two bins are used, with no more than 50% of rated individuals in a unit allowed to receive the Above Center of Mass (“ACOM”) rating. How is that fair if the talent distribution is normal?

My final step in considering NBA talent merged three performance metrics into a single score, which I label the PlayerScore. It works like this: each metric is transformed into a 0-1 score. For example, the top points-per-minute figure becomes a 1.0, and other scores cascade below it, from 0.97 to 0.83 to 0.00, with many ties. I included rebounds per minute and assists per minute as well, then took the average of the three. The theoretical maximum is 1.0 for a player who scores a 1.0 in all three areas. In the 1981-82 NBA season, the top player had a 0.643; he played for the Los Angeles Lakers, and his name was Earvin Johnson. Right behind him was a guy named Larry Bird with a 0.621. But that’s just three measures, and you could easily weight them differently or use a half dozen other measures, such as steals, shooting percentage, foul shots taken and made, and so on.

[Figure 5: Composite PlayerScore, 1981-82 NBA season]
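Here is a minimal sketch of the PlayerScore calculation as described, assuming the per-minute rates have already been computed; the player records are placeholders, not the actual 1981-82 statistics.

```python
# Sketch of the composite PlayerScore: rescale each per-minute metric so the
# league leader gets 1.0, then average the three rescaled values.
# The records below are placeholders, not the actual 1981-82 statistics.
players = [
    {"name": "Player A", "pts_pm": 0.82, "reb_pm": 0.22, "ast_pm": 0.31},
    {"name": "Player B", "pts_pm": 0.71, "reb_pm": 0.41, "ast_pm": 0.09},
    {"name": "Player C", "pts_pm": 0.33, "reb_pm": 0.12, "ast_pm": 0.05},
]

metrics = ("pts_pm", "reb_pm", "ast_pm")
league_max = {m: max(p[m] for p in players) for m in metrics}

for p in players:
    normalized = [p[m] / league_max[m] for m in metrics]  # each on a 0-1 scale
    p["player_score"] = sum(normalized) / len(normalized)

for p in sorted(players, key=lambda x: x["player_score"], reverse=True):
    print(f'{p["name"]}: {p["player_score"]:.3f}')
```

Different weights, or additional metrics, would simply add terms to the average; the shape of the resulting distribution is what matters for the argument here.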

The point is that talent is approximately normal only under extremely favorable assumptions, on average, over large populations. In the real world, team talent is abnormally distributed, and it will change shape from one season to the next. All these observations about the real world affirm the lesson:

An effective PE system should have the flexibility to let the rater fit his evaluations to the shape of his talent.

—–

  1. Ernest O’Boyle Jr. and Herman Aguinis, “The Best and the Rest: Revisiting the Norm of Normality of Individual Performance,” Personnel Psychology, Vol. 65, No. 1 (Spring 2012), pp. 79–119.
