On Criteria

The first in a three-part series on how we think about the NBA Awards


I am not going to write my annual NBA awards column this year. I have too many other obligations, and I have already voiced way more of my opinion than anyone needs to hear on my podcast, The Restricted Area, and on The 94 Feet Report’s The Swingmen.

Now that the joint self-deprecation/self-plug is over, I will inform you that I do still have some stuff to say on the NBA Awards. A lot of stuff. My choice to forgo an awards column is not one of volume, but of efficiency.

Every year when I delve into my unofficial ballot, I end up being frustrated by the same questions. What does Most Valuable actually mean? How do we discern whether a coach, a GM, or a team’s players deserve the most credit for a surprisingly successful season? At what threshold should an injury cause an award to go to a less-dominant-but-healthier candidate?

Rather than navigating through these questions haphazardly while presenting my winners, I am going to focus on the questions themselves. The idea is to come to a clearer understanding of how we should choose winners each year, or if that is too arrogant, how I choose winners each year.

In part one of this three-part series, I am going to look at criteria. The focus will be on the seven major awards: Most Valuable Player, Defensive Player of the Year, Rookie of the Year, Sixth Man of the Year, Most Improved Player, Coach of the Year, and Executive of the Year.

The Universal Truths

Before getting into the specifics of each award, there are some generalities that can be applied to all seven of them.

  1. The NBA Awards are a time capsule. That means they hold value: the more caught up in the moment we get, the more we forget what the award is about, and the more we distort the game's history. Insignificant things like narrative, recency bias, injuries to teammates, and arbitrary statistical frameworks need to take a backseat to significant things like who had the better season.
  2. The goal of basketball is to win. For all of these awards, the central focus must be on an individual’s contribution to winning above all else.
  3. And the ultimate goal is to win titles. Building off of №2, there must always be an emphasis placed on eliteness. Making a team merely good is exponentially easier than making a team great, especially in a league where merely good has become less desirable than ever.
  4. Prior seasons do matter. Not a lot, but a little. If a player has been a runner-up for an award multiple times, that should serve as a tiebreaker in a close race. It is also okay to use knowledge garnered during a previous season as supporting evidence for or against a seemingly incongruous metric. This is more true in some categories than it is in others, but it is a factor in every category (outside of Rookie of the Year, of course).

The Big One: MVP

Throughout most of NBA history, the MVP award was based on three things: team success, statistical dominance, and standing on one's own team. That's it. If you had a good record, put up good numbers, and were your team's best player, you were an MVP candidate.

In recent years, with the light that advanced metrics have shed on individual impact, the definition of “value” has been contested.

This is not an inherently bad thing. Broadening our understanding of the game and expanding our concept of value is important. If properly applied, this can inform better MVP decisions.

The problem occurs when we use metrics to argue the theoretical over the material. Like when Russell Westbrook becomes a better MVP candidate than Kawhi Leonard because of how bad his supporting cast is, even though his team lost 14 more games than Leonard’s. Or when Anthony Davis becomes a threat to James Harden because of what he is doing without DeMarcus Cousins, even though Harden has the Rockets in position to win a championship and Davis is fighting for a playoff spot.

What we are really arguing when we make these cases, it seems, is that guys like Westbrook and Davis would be leading their teams to just as much success as Leonard and Harden, if only they had better and healthier teammates.

There are two huge flaws with this approach. The first is that impact on wins does not translate in a linear fashion. Just because Westbrook or Davis can elevate a 30-win supporting cast to 47 wins does not mean that they would elevate a 47-win supporting cast to 64 wins. Remember, making a bad team good is very different from making a good team great, and we should always credit the player who is actually making a good team great above the one who maybe could do so in theory.

The second flaw is that we are ignoring the team element of basketball. Guys like Harden, Stephen Curry, and LeBron James have not just been MVP-level players because they elevate their rosters; they are MVPs because they are easy to build around. Good players want to play with them. Many types of players fit next to them, making it easier for their GMs to find them the right teammates. To turn around and hold this against their MVP candidacy — stating that their rosters are “too good” — is nonsensical.

That is why, despite the stigma against traditionalism, the best way to choose between MVP candidates is the way we always have:

  1. Whose team has a better record?
  2. Who is more statistically dominant?
  3. Who is more crucial to their team’s roster?

We can still use modern stats when navigating through the traditional criteria. We can look at net rating and clutch stats to get a better understanding of how good a player’s team is. We can base statistical dominance on true-shooting percentage and RPM as much as we base it on points, rebounds, and assists. We can assess indispensability through on/off differential.

When we prioritize the theoretical over the material, however, we end up making regrettable decisions that will not age well. You should be able to look back in 25 years and remember why a guy won the MVP, without having to look up his team’s net rating with him off the court. These awards are a time capsule.

Defensive Player of the Year

We run into a lot of the same problems here as we do with MVP, where we consider how bad a team is without a player before we ask how good they are with that player. How bad a team is when a player sits can be used as a tiebreaker between two close candidates, but it should not cause us to say things like “Nikola Jokic is a better defensive player than Steven Adams.”

Of course, there aren't many (if any) defensive stats that are much better than on/off differential. It's just that the emphasis needs to be on the "on" more than the "off." The lack of quality defensive stats also means that the eye test needs to factor in heavily here. It is one thing not to trust everything you see, but it is just as dangerous to trust nothing you see and, by extension, put blind faith in numbers you know are imperfect.

We saw the Spurs post a better defensive rating with Kawhi Leonard sitting last season than they did with him playing. This was generally dismissed as a statistical quirk, with the eye test cited as evidence. But why do we stop at Leonard? Because his great defense is so easy to observe that it is an uncontroversial opinion?

The reality is that it is okay to trust your eyes, if you know what to look for. Yes, some people will be convinced that Karl-Anthony Towns is a great defender because he blocks shots, but the solution to this should not be to stop using our eyes: it should be to watch more closely.

Therefore, the criteria for DPOY should be as follows:

  1. How good is a player’s team defensively when they are on the court?
  2. How often is said player on the court? A strong net rating is more impressive and more valuable in 35 minutes than it is in 25.
  3. Eye test
  4. Auxiliary stats (how bad the team is when the player sits, opponent FG%, steal rate, etc.)

Rookie of the Year

This one is relatively straightforward and, as a result, generally the least problematic. We ran into the issue of injury vs. productivity last year, but that will be addressed later on in this series. For now, we will get right into the criteria:

  1. Which rookie has contributed the most to winning basketball?
  2. Which rookie's performance is most easily translatable to All-Star-level success?

That first criterion should not be confused with the first MVP criterion. Rookies are rookies, and are not expected to make teams title contenders or even playoff contenders. The fact that bad teams get top draft picks compounds this.

The question here is not one of record, but rather impact on record. O.G. Anunoby does not receive points above Frank Ntilikina (both of whom are positive wing contributors for their teams) because Toronto is better than New York. Both, however, receive points above Malik Monk, who torpedoed the Hornets nearly every time he stepped on the court.

As for the second criterion, this is simply a way of distinguishing between the Anunobys and the Donovan Mitchells. While Anunoby may have had a similar rookie season to Mitchell in terms of efficiency and on/off impact, it is clear that Mitchell's success is more about his own ability, whereas Anunoby's is more about his situation.

Like every other award, ROTY is a time capsule. The winner won’t always become a superstar, but if there is a clear indication that one dude is a better basketball player than the other, it is okay to table your statistical apprehension and factor that in. Otherwise, we end up with Tyreke Evans beating Stephen Curry and Michael Carter-Williams beating Victor Oladipo.

Sixth Man of the Year

This one should be even more straightforward than Rookie of the Year, but alas, it is not. Really, there is just one criterion:

  1. Which bench player has contributed the most to winning basketball?

Unfortunately, most voters seem to have a single criterion that looks much more like this:

  1. Which bench player scored the most points per game?

Scoring off the bench is valuable. Most teams strive to achieve balance in their starting lineup, playing a few low-usage guys who can help their team without the ball in their hands. The downside of this is that there are fewer places to turn when the offense bogs down. Thus, having an instant-offense bench player is quite valuable. Said player can also act as the go-to guy on second units, a skill that should not be discounted.

The fact that Andre Iguodala — a Finals MVP and All-NBA defender who has come off the bench for four straight years — has never come close to winning this award is telling, though. So, too, is the fact that 10 of the last 11 winners have been guards, most of whom (J.R. Smith, Eric Gordon, Lou Williams, Jason Terry, Jamal Crawford x3) do not contribute in any facet of the game outside of scoring, and very few of them even score efficiently.

If putting the ball in the bucket is the most important bench skill, it is by a slim margin. As such, it should be used only as a tiebreaker when bench players have similar overall value.

Most Improved Player

While every other award has ambiguity in its criteria, there is some basic consensus as to what we are voting on. With Most Improved Player, there is not. Quantifying “best” is massively easier than quantifying “most improved”, because every player is being graded on a completely unique curve.

What kind of improvement are we valuing? Does a leap from awful to decent matter as much as a leap from good to very good? Does predictable age-related improvement mean as much as out-of-the-blue growth? Do we compare a player’s performance to their more recent campaigns, or to their career highs?

Given that this is the award most in need of criteria, you might think I have the most to say about it. You'd be wrong. Without a base-level consensus on the award's identity, there is no common approach for me to improve upon.

All I can say, really, is what I think the award should be.

  1. Which player has played the farthest above their previous top level? This is measured against a player's previous peak, not strictly last season's level, which disqualifies a Tyreke Evans or Jrue Holiday.
  2. How long has a player been at said previous level? It is more impressive for a player to make a leap in Year 5 (2015–16 Kemba Walker) than it is for them to do so in Year 2 (2016–17 Nikola Jokic).
  3. How much did a player’s growth have to do with circumstantial change? I dock points for things like increased opportunity (shot attempts, minutes), improved teammates or change of system — anything that makes it harder to isolate whether or not the player has actually gotten better.

Coach of the Year

The most problematic of all the awards. Coach performance is difficult to assess, but we can do a much better job of eliminating extraneous information and bias.

The biggest flaw here, by far, is in the emphasis we place on preseason expectations. When a team outperforms its over/under by a large margin, we immediately throw its coach into award consideration.

Doing this unfairly rewards unproven coaches.

Erik Spoelstra consistently has the Miami Heat winning more games than the talent on their roster indicates they should. We are so used to it, in fact, that a team with Goran Dragic, Hassan Whiteside, and Dion Waiters as its best three players was given an over/under of 47.5 by Vegas entering the 2017–18 season.

Spoelstra's COTY candidacy, therefore, was going to be predicated on how far above that number he could get. The injustice here is that Spoelstra himself is by far the biggest reason that number was set so high in the first place. As a result, he is a non-candidate as his Heat sit at 43–37 and fight for the №6 seed. Meanwhile, a less established Quin Snyder is considered a frontrunner for leading an equally talented, if not more talented, Utah Jazz roster to 46 wins.

The public will soon catch on to Snyder’s expertise, and Vegas will start baking his contribution into Utah’s over/under. Once this happens, his candidacy will go the way of Spoelstra’s, Gregg Popovich’s, and Steve Kerr’s. It isn’t that these guys cannot win — just that they can only win in years when they exceed their own lofty expectations (Kerr winning 73 games, for example).

The other major flaw is the emphasis placed on roster hardships. We treat injured players and bad personnel as central criteria, much like we do with MVP, when we should really be prioritizing success and using these hurdles as tiebreakers.

Coaching a depleted, banged-up team is just one kind of coaching. It is hard, yes, but in some ways, it is easy. A roster plagued by departed superstars, crowded injury reports, and a hodgepodge of no-name replacements is also a roster devoid of expectations and ego. Getting players to play hard and buy in when thrust into larger roles than they expected is easier than getting talented guys to accept and thrive in smaller roles. Coaching during what has been written off as a lost season allows for much more creativity and risk-taking than doing so when your team is expected to succeed, and there is nowhere to point the finger when you fail.

Given that, Coach of the Year voting should be based on the following:

  1. Team record
  2. Quality of roster (players must be imagined away from their current coaches to do this accurately. Imagine Harden before D’Antoni, or Kyle Lowry before Dwane Casey)
  3. Ability to handle adversity (injuries, personalities, hot seat status, etc)
  4. Buy-in/chemistry fostered (This is only a tiebreaker. Winning means more than how you win. That said, a team that wins 49 games while playing for one another points more to good coaching than one that wins 49 games while playing selfish basketball and relying on talent).

Many will balk at seeing "team record" as the №1 criterion here. If you are among the hesitant, consider that an elite coach will generally elevate a team as much as an elite player (10–15 wins).

And again, leading a team from bad to good is easier and less valuable than leading a team from good to great. Steve Kerr is a better coach than Mark Jackson, even though both improved the Warriors by similar win margins. Brad Stevens is having an easier time lifting this year’s decimated Celtics team into the 50s than he did lifting last year’s healthy and whole Celtics team into the 60s.

Elite status is the most coveted and competitive mantle to reach, so a premium must be placed on it.

Executive of the Year

We close with the second-most ambiguous award. We understand Executive of the Year better than we do Most Improved Player, but not by much.

The biggest area of arbitrariness is how much we are to weigh process vs. result. Does Kevin Pritchard get rewarded for the Victor Oladipo trade working out, even though it was a bad deal by just about every account at the time? Does Sam Presti get docked for Paul George and Carmelo Anthony failing to elevate OKC above last year’s level, even though we were all blown away by his ability to swing these trades at the price he did?

The difference between this question and the ones surrounding MIP is that I believe there is a right answer here: process and result should be weighted 50/50. Placing more emphasis on result punishes or rewards a GM for something that is ultimately out of their control, while placing more emphasis on process gives no credit to unpopular moves that pan out.

The other issue here is that of time. Building a roster is not a one-year process, and any attempt to isolate it as such removes vital information. Let’s say Sam Hinkie was still the Sixers’ GM. He didn’t win Executive of the Year for drafting Joel Embiid, but does he deserve credit now for Embiid becoming a superstar? Daryl Morey didn’t win the award the year he traded for Harden (crazy, I know), but should he now that Harden has become the league’s MVP?

The answer to these questions is yes, of course — especially if the GM was not rewarded for these moves at the time. The lifetime achievement factor that I laid out at the beginning of this article is most applicable here.

That said, past moves do not mean as much as more recent ones. The more time passes, the more variables can affect decisions. Therefore, the criteria should be two-fold:

  1. Quality of moves made by an executive since last year’s draft (with 50 percent weight placed on retrospect and 50 percent based on how strong the move appeared at the time)
  2. Quality of moves made in previous years, as they pertain to the team's current status.

In other words, we should not still factor in Danny Ainge trading for KG, but we should still reward him for trading KG away.

Check back tomorrow for Part II, which will focus on how we talk about Games Played when choosing our award winners and All-NBA teams.