For Orioles fans, an exhilarating opening-weekend sweep of the reigning AL champion Tampa Bay Rays was punctuated by the highly anticipated debut of Zach Britton. The rookie left-hander worked six innings on Sunday, allowing one run on three hits and three walks while striking out six to win his first Major League game.
The start was something of an event for many evaluators who have tracked Britton’s progression through the minors, and several national analysts generated some interesting discussion on Twitter during the game. I couldn’t help but pause, however, when I noticed Keith Law’s assertion that “working against Britton’s numbers this year [is] Baltimore’s defense.”
Law’s assessment deserves some attention. Just how efficient should we expect Baltimore’s defenders, about a third of whom are new additions, to be this year, and just what kind of impact might that have on the numbers of Mr. Britton, the rookie ground-ball machine?
Measuring defensive efficiency has been considered notoriously tricky since long before Bill James nearly wrote off the task in the seventies. Still, through extensive work and fortuitous technological advances, some very smart people have devised several ‘advanced’ defensive metrics over the past decade. It is debatable whether the payoff from these metrics is proportional to the effort required to produce them, but statistics like UZR clearly seem to be, at the very least, a vast improvement over concepts like fielding percentage.
As I see it, there are two issues central to all defensive metrics. The first is descriptive: just how accurate are the metrics? Run prevention is an intricate tango between pitchers and their defenders, and separating the performance of one from the other has been the damning charge on defensive measurement since its first crude attempts. This issue has been and will continue to be tackled elsewhere, but it falls outside of the scope of this column. Suffice it to say that I consider Ultimate Zone Rating, Total Zone, and Dewan’s DRS to be the three most accurate assessments of defensive performance to date. Conveniently, they each purport to do about the same thing (assess how many runs a player prevents compared to the average defender, adjusted for position), so I’ll use them in this analysis.
The second issue with defensive metrics is inferential: even if we can be sure about the accuracy of our measurements, how useful are they in predicting future performance? Not very, as it turns out. Numbers for each of the aforementioned measurement systems frequently see wild swings from season to season. Part of this may be due to the fact that defensive data derived from a sample size of a season or less are not terribly reliable. It’s best to look at aggregates consisting of three seasons or more, as we do see some mean regression over the long term. Part of the problem, though, is that defensive performance itself seems to be highly volatile. Pick any sort of defensive measurement you can find and you’ll probably see more variance from season to season than you would with most offensive metrics. It may simply be the case that pure defensive talent doesn’t correlate as strongly with runs prevented as offensive talent does with runs added. It might not sound pleasant, but defensive production seems to be heavily contingent on opportunity and circumstance. The best method for inference is to attempt to find a player’s ‘mean’ production, consider his age and position, and assume a high standard deviation.
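The inference method described above can be sketched in a few lines. The numbers below are hypothetical, chosen only to illustrate how volatile single-season defensive figures can look next to a multi-year mean and a high standard deviation:

```python
# Illustration with hypothetical numbers: single-season defensive metrics
# swing widely, so a multi-year mean paired with a large standard deviation
# is a more honest summary of a fielder's likely production than any one season.
from statistics import mean, stdev

# Hypothetical single-season UZR figures for one player, 2008-2010
uzr_by_season = [12.0, -3.5, 6.0]

print(f"three-year mean: {mean(uzr_by_season):+.1f} runs")
print(f"season-to-season std dev: {stdev(uzr_by_season):.1f} runs")
```

Here the standard deviation dwarfs the mean, which is exactly the situation the column describes: the ‘true’ production level is a fuzzy estimate, not a point projection.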
Let’s take a look at three-year aggregates* of Baltimore’s starting eight:
| Player | Position | UZR Average | TZ Average | DRS Average | Aggregate Average |
|--------|----------|-------------|------------|-------------|-------------------|
* I should make a few notes here. The numbers posted, with the exception of those of Matt Wieters and Luke Scott, are averages from 2008-2010. Wieters only has two seasons of data from which to pull and Scott has only had scarce playing time in the field over the last two seasons. Wieters’ averages are from 2009-2010 and Scott’s are from 2006-2008.
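The table’s “Aggregate Average” column can be read as a simple blend of the three systems. Since UZR, TZ, and DRS all purport to measure roughly the same thing (runs saved versus the average defender at the position), a plain mean of the three gives one figure per player. The numbers below are hypothetical placeholders, not the table’s actual data:

```python
# Sketch of the "Aggregate Average" column: average the three metric systems
# per player. All figures here are hypothetical, for illustration only.
from statistics import mean

player_metrics = {
    # player: (UZR average, TZ average, DRS average) over the multi-year window
    "Player A": (5.0, 3.0, 7.0),
    "Player B": (-4.0, -2.0, -6.0),
}

for player, metrics in player_metrics.items():
    print(f"{player}: aggregate average {mean(metrics):+.1f} runs")
```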
These numbers are descriptions of past performance, not projections. That is an important distinction to make. Still, the eight starting defenders on the Orioles’ current roster have combined for a positive net gain in run prevention over the average team. Of course, Brian Roberts has nagging back issues, Luke Scott has been a DH for the past two years, and both join Derrek Lee in the wrong-side-of-thirty club, so we shouldn’t expect any of them to be as good as they have been in the past. At the same time, Adam Jones, Nick Markakis, Mark Reynolds, J.J. Hardy and Matt Wieters are in their athletic primes. While Roberts and Lee have seen negative trends in advanced metrics, Mark Reynolds (labeled previously by Law as a “brutal defensive player”) is trending upward.
Given the uncertainties and volatility involved in defense and its metrics, drawing inferences from these numbers is messy. Nonetheless, it’s hard to find compelling evidence that Baltimore’s defensive performance should be significantly below average in 2011. Britton, if healthy and on the roster for the entirety of the season, should pitch about one-eighth to one-ninth of the team’s innings. Assuming that the team is somewhere between 30 runs below and 30 runs above average, its defense shouldn’t cost Britton more than a few runs relative to an average defense. He might gain a few. At the end of the day, it doesn’t seem likely that Baltimore’s defensive efficiency will have much of an impact on Britton’s success in 2011.
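The back-of-the-envelope arithmetic in that last paragraph can be made explicit. Assuming the team’s defensive impact is spread evenly across the pitching staff, Britton’s share of a ±30-run team defense works out to only a few runs either way (the innings figure and share are assumptions for illustration):

```python
# Rough sketch of the estimate above. All figures are illustrative
# assumptions, not projections.
team_innings = 1458          # ~162 games x 9 innings
britton_share = 1 / 8.5      # between one-eighth and one-ninth of team innings

for team_runs_vs_avg in (-30, 0, 30):
    # Runs attributable to defense during Britton's innings, assuming the
    # defensive impact is distributed evenly across the staff.
    britton_impact = team_runs_vs_avg * britton_share
    print(f"team defense {team_runs_vs_avg:+d} runs -> "
          f"~{britton_impact:+.1f} runs over Britton's innings")
```

Even at the extremes of the assumed range, the swing attributable to the defense is roughly three and a half runs in either direction, which supports the column’s conclusion.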