Sunday, February 28, 2010

3 Things I Would Change About The NFL If I Were NFL Commissioner

This past season, we saw a great football game between the New Orleans Saints and the Minnesota Vikings in the NFC Championship game.  In overtime, the Saints defeated the Vikings and, as a result, headed to Super Bowl XLIV, where they eventually defeated the Indianapolis Colts.  It was a storybook ending to their season.


For Minnesota Vikings fans it was a bitter conclusion to what they were hoping would also be a storybook ending.  There they were, driving down the field in the waning moments of the fourth quarter, when Brett Favre threw a fateful interception.  The Vikings never touched the ball after that.  At the start of overtime, the Vikings lost the coin toss, and the Saints took the ball, drove down the field, and little-known Garrett Hartley kicked the winning field goal.


This made me reconsider the merits of the overtime rules as they stand, along with some other musings I've had about the game.


In this post, I will discuss 3 things that I would change about the game that would (in my opinion) make the game better.  Different, but better.

  1. Point After Touchdown
  2. Field Goals
  3. Overtime

Point After Touchdown


The point after attempt in the NFL is worth one point if the team chooses to kick and the ball goes through the uprights.  The ball is snapped from the 2-yard line, in effect making the kick a 19-yard field goal.  The team also has the option of either passing the ball into the end zone or running it in, and, if successful, the point after attempt is then worth two points.  Here are the success rates of the one-point point-after attempts in the NFL in each of the last three seasons:


2007  -  1,165/1,177 = 98.98%
2008  -  1,170/1,176 = 99.49%
2009  -  1,165/1,185 = 98.31%


2007 - 2009  -  3,500/3,538 = 98.93%


But that's not all.  I mentioned earlier that a point after attempt (PAT) is the same as kicking a 19-yard field goal.  Over the past three seasons, the success rate on field goals of 0-19 yards is 100% (35/35).  As a matter of fact, the last kicker to miss a field goal of less than 20 yards was Kris Brown, then with the Houston Texans, all the way back in 2002.


My Proposed Change(s)


Eliminate the point after kick, as well as any field goal attempt less than 20 yards.


I offer two alternatives in its place:


First, after each touchdown, place the ball on the 2-yard line, or maybe even the 3-yard line, and the team must either run it in or throw it into the end zone.  A successful attempt would be worth 2 points.  I have read that the success rate today on a two-point conversion is somewhere between 50% and 55%.  If the ball were placed on the 3-yard line, presumably this would decrease to about 45%.


Second, the team could choose instead to place the ball on the 17-yard line and kick the point-after attempt.  This would be equivalent to kicking a 34-yard field goal.  A successful conversion would be worth 1 point.  Here are the success rates of field goals from 30-39 yards for each of the past three seasons.


2007  -  253/279 = 90.68%
2008  -  286/321 = 89.10%
2009  -  240/287 = 83.62%


2007 - 2009  -  779/887 = 87.82%


Both of these scenarios would have about the same expected point value - roughly 0.9 points (2 points × 45% = 0.90; 1 point × 87.82% ≈ 0.88).
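
To make the arithmetic explicit, here is a minimal sketch of that expected-value comparison.  The 45% conversion rate is the assumption from the text, and the 87.82% kick rate is the observed 2007-2009 rate on 30-39 yard kicks; neither is an official figure.

```python
# Expected points for the two proposed point-after alternatives.
p_conversion = 0.45   # assumed two-point try success rate from the 3-yard line
p_kick = 0.8782       # observed 30-39 yard FG rate, 2007-2009 (34-yard kick)

ev_conversion = 2 * p_conversion  # conversion is worth 2 points on success
ev_kick = 1 * p_kick              # kick is worth 1 point on success

print(f"conversion: {ev_conversion:.2f} expected points")  # 0.90
print(f"kick:       {ev_kick:.2f} expected points")        # 0.88
```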


One of the biggest reasons sports are exciting is the uncertainty of the outcome.  Replacing a basically meaningless play (the current PAT) with a strategic decision that carries real uncertainty would create tremendous excitement.  In my opinion, it would also reduce the likelihood of a tie game at the end of regulation.


Field Goals


Two rules that changed the game of basketball for the better and made it more exciting for the fans are the introduction of the shot clock in 1954 and the introduction of the 3-point shot in the 1979-1980 season.  It is hard to argue that the game is less exciting today than it was before these innovations.  A given team always has the chance of "going for the three".  The success rate is lower, but the reward is greater.  The basic premise of the 3-point shot is that the further you are from the basket, the tougher the shot, and hence the higher the point value.  There is a single line, basically 23 feet 9 inches from the basket (the reason it's not exactly 23 feet 9 inches, and other facts about the three-point shot, are discussed here).


The following table shows field goal accuracy in the NFL, by distance, over each of the past three seasons.



Makes perfect sense, doesn't it?  The further out you are, the more difficult the kick.  Also, the further out you are in attempting a kick, the more likely it is to get blocked.



My Proposed Change(s)


Vary the value of the field goal depending on the distance.  Remember, I said earlier that no field goal of less than 20 yards should be allowed.  I also placed the point-after attempt on the 17-yard line and made it worth 1 point.  Here's how I would break down the values assigned to field goals.


0-19 yards = Cannot Attempt; must go for the touchdown
20-39 yards = 1 point
40-49 yards = 2 points
50 or more yards = 3 points


I know what many of you are probably thinking: this would decrease the number of points scored in a game.  I disagree.  That notion is based on the presumption that coaches will call plays the same way they do today.  They won't.  I think that two things will happen.


First, inside the 2-yard line, since the team has to go for it, the expected outcome should be greater than 3 points: 55% × 6 points for the touchdown, plus, whenever the touchdown succeeds, roughly another 0.9 expected points on the conversion (see the PAT discussion above), for about 3.8 expected points in total.  In addition, if the team fails to make it, the other team will be in a fairly bad spot, and there's a significant chance that the team that failed to score the touchdown gets the ball back with great field position.
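
As a rough sketch of that claim, using the text's assumed 55% touchdown rate and the ~0.9 expected points on the ensuing conversion from the earlier sketch:

```python
# Expected points when a team must go for the touchdown from inside the
# 2-yard line under the proposed rules.  The conversion attempt only
# happens if the touchdown succeeds, so its value is conditional.
p_touchdown = 0.55     # assumed success rate from inside the 2
ev_conversion = 0.90   # expected points on the try after a touchdown

ev_go_for_it = p_touchdown * (6 + ev_conversion)
print(f"{ev_go_for_it:.2f} expected points")  # ~3.80, versus 3.00 for
                                              # today's near-automatic short kick
```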


Second, when the team is further out, say on the 10-yard line, I think that more teams will "go for it" on fourth down when they're faced with the possibility of only getting 1 point if they kick the field goal.  In other words, the lure of the extra 6 (or 7) points would be worth the risk.  Today, the lure of an extra 3 points (versus a 3-point field goal) isn't worth the risk in coaches' minds.  David Romer, in an excellent paper, suggested that coaches today are making a big mistake by not going for it on fourth down more often than they do (he was referring to the general scenario, not specifically to being inside the opponent's 17-yard line).  If Romer is right (and I think that he is), then the new scenario ought to give coaches that extra incentive.  The fans would prefer it, and it would make the game more exciting.


Overtime


There have been numerous blog posts since the Vikings-Saints game discussing the overtime rules, with suggestions on how to change them so that the outcomes are "fairer".  Even the NFL is contemplating changing the current format.  The basic premise is that the winner of the coin toss (who invariably elects to receive the ball) wins approximately 60% of the time, and that in roughly 2 out of 3 of those wins, the loser of the coin toss doesn't even get a chance to touch the ball on offense.


Brian Burke at Advanced NFL Stats in particular has done an analysis that suggests that today's overtime rules are tremendously unfair to the loser of the coin toss.  Instead of rehashing his analysis (you can read it here), I'll merely point out that I agree with his assessment.  In his article, he (and his commenters) offer some suggestions.  Although these suggested improvements are quite valid, I believe that they are mostly theoretical and appeal only to those with an intellectual bent.  These alternatives are not practical, and therefore, I believe, unlikely to be implemented by the NFL.


The NFL's proposed changes make it slightly better for the loser of the coin toss, but it still isn't a 50/50 proposition.  As such, I do not agree with the proposed changes by the NFL.


My Proposed Change(s)


First, no overtime games during the regular season.  What's wrong with a tie?  Especially if the current overtime rules determine the winner in a basically unfair way.  Might as well flip a coin (actually, given today's rules, it would be better to flip a coin).


Second, the winner in overtime must score at least 7 points in the overtime and win by at least 4 points; and, if the team receiving the ball first scores on its opening possession, the other team must be given a chance to respond.


Many have suggested that receiving the ball second is advantageous, since that team knows what it needs to do to win the game.  But since the requirements are that a team score at least 7 and win by at least 4, the second team has to score at least a touchdown to win on its first possession.  The win-by-4 requirement makes it fairer than it is today for both teams.  If the team that receives possession first scores a field goal, it cannot win by that score alone.  The proposed change is a little bit like the tie-breaker rule in tennis, where one player gets one serve to start the tie-breaker, but then each player alternates two serves at a time, with the requirement that the winner score at least 7 points and win by at least 2.
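
To make the rule concrete, here is a minimal sketch of when overtime could end under these requirements.  The function name and the single possession flag are my own simplifications, not part of the proposal.

```python
def overtime_decided(team_a_pts, team_b_pts, trailing_team_had_possession):
    """True if overtime can end under the proposed rule: the leader has
    scored at least 7 points, leads by at least 4, and the trailing team
    has had its guaranteed chance to respond."""
    if not trailing_team_had_possession:
        return False  # the coin-toss loser must get at least one possession
    lead = abs(team_a_pts - team_b_pts)
    return max(team_a_pts, team_b_pts) >= 7 and lead >= 4

# A lone field goal (3-0) no longer ends the game; a touchdown can.
assert not overtime_decided(3, 0, True)   # only 3 points: play on
assert overtime_decided(7, 3, True)       # 7 points, up by 4: game over
assert overtime_decided(8, 3, True)       # TD plus conversion: game over
```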


Conclusions


If all three of my proposed changes are adopted, the game would be a lot different than the game played today.  There wouldn't be any meaningless extra points kicked after a touchdown; teams would "go for it" more often than they do today; more points would be scored, with each scoring decision becoming strategic; ties at the end of regulation would be less likely; regular-season games that did end regulation tied would stay tied; and the playoff overtime rules would be fairer to the team that loses the coin toss.

Sunday, January 31, 2010

Who's Better, Who's Best - Manning, Favre and Brees

Last Sunday we were all treated to some good football.  The AFC Championship featured the surprising NY Jets against the Peyton Manning-led Indianapolis Colts, and the NFC Championship showcased the Drew Brees-led New Orleans Saints, who played host to Brett Favre and his Minnesota Vikings.


Not only were these outstanding football teams, but three of the four featured three of the game's best quarterbacks in Manning, Favre and Brees.  I thought it would be illustrative to take a look at each of the three quarterbacks individually, and then compare them side by side to get a sense of who may be better, and perhaps even who may be best.  In evaluating their relative merits, I will only consider their ability to throw the football - not their ability to win games, how well they run, whether they are clutch performers, or how many Super Bowl victories they have.


Here are the career NFL Passer Ratings for these quarterbacks:


Manning is clearly the best.  Well, at least that's what looking at the career NFL Passer Rating will tell you.  These figures differ from what you would find at the Pro Football Hall of Fame (as of January 23rd, 2010).  The reason is that the figures listed on the Pro Football Hall of Fame site were as of the beginning of the 2009 season, while mine are as of the end of the 2009 season just completed.



For reasons that I have chronicled in the past (see here, here, and here), the NFL Passer Rating formula is, in and of itself, of little use in determining who among the three might be best.  The three quarterbacks very well might end up being ranked this way when it is all said and done, but that doesn't justify the rating system.

In order to complete our analysis, we have to make several adjustments to the NFL's passer rating system.  

The first is that we need to take each player's passer rating and relate it to some baseline, so that we may account for players playing in different eras (yeah, I know, it's a little bit odd to be talking about different eras when comparing three current quarterbacks, but Brett Favre has been throwing since 1992, and Drew Brees only since 2002).  And since the average NFL passer rating has been increasing over time (see here), we need to make an adjustment for that.  The method that I have chosen is to relate a given quarterback's performance to the mean for that year, using the standard deviation in that year as the measuring stick.  In other words, we take the difference between the quarterback's actual performance and the mean, expressed in units of the standard deviation.  As I described here, we then convert each of these resulting measures, also called z-values, into percentile ranks, using the standard normal distribution.
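
A minimal sketch of that conversion follows.  The 83.0 league mean matches the 2009 average passer rating mentioned in an earlier post below; the standard deviation of 12 is purely illustrative.

```python
from math import erf, sqrt

def percentile_rank(rating, league_mean, league_std):
    """Convert a season rating into a percentile rank (0-100) via the
    z-value and the standard normal CDF, per the method described above."""
    z = (rating - league_mean) / league_std
    return 100 * 0.5 * (1 + erf(z / sqrt(2)))  # Phi(z), scaled to 0-100

# A rating one standard deviation above the mean lands at roughly the
# 84th percentile:
print(round(percentile_rank(95.0, 83.0, 12.0), 1))  # 84.1
```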



The second is that we need to account for the fact that they've each played a different number of seasons.  While I used a rather simplistic approach last year, I will present a more sophisticated approach starting with this post.  This method will need further refinement over time, since I am early in my thought process on this particular topic.  However, it is an improvement over last year's simplistic approach.  Actually, the method used last year (average deviation) really doesn't take into account the number of years at all.  When I evaluated Brett Favre's career, I simply compared him to a group of quarterbacks who had played a like number of years, and ignored any comparisons to players who played far fewer seasons.  That method won't work in this case, as we need to compare three quarterbacks with vastly different career lengths.  All else being equal (and they rarely, if ever, are), a player who has a 10-year career should get a higher rating than a player who has a 7-year career (even if, and especially if, they both had the exact same ratings in every year they played).  But the question becomes, "how much higher?"

The third is that we need to somehow account for the fact that each of the three quarterbacks has played not only a different number of seasons, but seasons representing different parts of their careers.  In some respects, since the last 8 seasons have been common to all three, we could simply look at the last 8 seasons.  But that would mean comparing Brett Favre in his 11th through 18th seasons, Peyton Manning in his 5th through 12th seasons, and Drew Brees in his first 8 seasons.  As I have shown in the past, the early years for a quarterback tend to be the "learning years", and hence this comparison would be inappropriate without the proper adjustments.  Brett Favre is also 40 years old, and we have seen declining performances as quarterbacks age (actually, their performances increase at first as they gain experience, and then subsequently decline as the physical toll takes effect).  One method is to look at each player's first 8 years, since in that case the comparisons would be over identical portions of their careers.  And perhaps we could compare Favre and Manning over their first 12 years.


Last, we need not assume that the NFL Passer Rating is the measure to be used to evaluate who might be the better passer.  The NFL Passer Rating system has a lot of flaws.  I will use two measurement methods.  The first is the concept of Adjusted Passer Rating.  As I have previously mentioned, the average NFL Passer Rating has been steadily increasing every year.  The Adjusted Passer Rating normalizes the passer rating so that the average for each year is 66.7.  This is very helpful in evaluating players who played in different years (as is the case here).  Last year, I introduced a simple measure of passing efficiency - CMI.  I now call this CMTI (or, simplistically, Completions Minus Thrice Interceptions).  It is not only simple, but it has an 86% correlation with the NFL Passer Rating.  And, because it is so powerful in explaining variances in the NFL Passer Rating, and because it is so easy to compute, use and explain, I will use it here.
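
As a sketch, CMTI reduces to a one-line calculation (the stat line below is hypothetical, chosen only to show the arithmetic):

```python
def cmti(completions, attempts, interceptions):
    """Completions Minus Thrice Interceptions: completion percentage
    minus three times interception percentage."""
    return 100.0 * (completions - 3 * interceptions) / attempts

# A hypothetical 350-for-520 season with 10 interceptions:
print(round(cmti(350, 520, 10), 1))  # 67.3 - 3 * 1.9 = 61.5
```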


So, to recap, we will make the following adjustments:


  1. Relate performance in a given year to the mean, and then to the standard deviation for that year - then convert to a percentile rank
  2. Use a 'better' measure than NFL Passer Rating.  We will show that Adjusted NFL Passer Rating is 'better', and CMTI is simpler
  3. Take into account differing number of seasons played by each quarterback
  4. Take into account quarterbacks who are at different stages of their career


Here are the respective #s for each quarterback by season:


Brett Favre




Drew Brees




Peyton Manning




What do these performance measures say?  Well, there's a lot of information here, so let's take these one at a time.


The first is that the career averages shown (actually, all of the averages shown) are simple arithmetic averages of each of the years, not true weighted averages.  You can see that these simple averages are close enough to the true averages by looking at the career average NFL passer ratings shown in these three exhibits and comparing them to the career averages for each of the quarterbacks shown earlier.


The second is that regardless of which measure we use - NFL Passer Rating, Adjusted NFL Passer Rating, CMTI, first 8 years, last 12 years, or what not - it should be very clear that Peyton Manning has performed at a truly superior level compared to the other 2 players (or anyone else, for that matter).  Just look at CMTI over the past 8 years, for example.  He has averaged the 90th percentile over the past 8 years.  With roughly 32 qualifiers a year, that suggests he is, on average, in the Top 3 every year.  Truly remarkable.  In Peyton Manning, we are watching one of the all-time best.  A once-in-a-generation quarterback.  In case you didn't notice, the rows highlighted in green indicate a performance in the 85th percentile or higher.  The areas in blue are 95th percentile or higher.


Ok,  Manning is best.  That was easy.  Now comes the difficult part.  Who's better, Favre or Brees?


The case for Favre:


On an Adjusted Passer Rating basis, Brett Favre's first 8 years are comparable to, if not better than, Drew Brees' first 8 years (70th percentile versus 68th percentile).  We can't directly compare their two careers, since one has had an eighteen-year career while the other has had only eight.  It is very evident that, but for the last year, Favre's performance has been quite pedestrian over the past 10 years.  Well, we can't necessarily penalize him for that, because we don't know how Brees might do in his next 10 years.  We don't even know if he is likely to play another 10 years.  Given history, however, it is fairly safe to assume that he won't.  In my database, going back to 1932, and including both the AAFC in the 1940's and the AFL in the 1960's, I have a total of 416 quarterbacks who qualified in at least 1 year.  206 of these quarterbacks qualified in only 1 or 2 years.  The average is 4.125 years.  There are only 6 quarterbacks who qualified in 15 or more years.  Y.A. Tittle qualified in fifteen different seasons.  John Elway, Dan Marino and Johnny Unitas each qualified in sixteen different years.  And then there are the ultimate marathon men, Fran Tarkenton and Brett Favre, who both qualified on 18 different occasions.


Michael J. Schell, in his book Baseball's All-Time Best Hitters, makes an adjustment to player batting averages, called a late-career adjustment.  In essence, he recognizes that a player's performance declines as he ages, and so he makes an adjustment for it.  Schell basically caps the at-bats of a player at 8,000.  All at-bats beyond 8,000 are ignored.  This way, he can compare each player through their first 8,000 at-bats.  Doing that type of adjustment would be extremely difficult in the case of football.  We know that the average number of passes attempted by a player per game has increased over time (see here and here), so limiting to a fixed number of attempts would be difficult and meaningless at best.


We could limit the player performances to a given number of seasons.  Look at the three graphs below.




The first is the average percentile rank of CMTI for all 416 quarterbacks by age.


And the second is the same as the first, except with the x-axis denoting the number of seasons in which they qualified instead of their age.


It appears that there is a learning curve at first (and an elimination process, as presumably the worst players end up having shorter careers), followed by a decline after about 12 years.  Keep in mind that this first graph shows the career percentile rank.  What we're really interested in is whether performance declines as a player ages.  So what we really want to see is the average percentile rank for a given year of a player's career, no matter how long they have played.


This is what the third graph shows.  It's actually a bit surprising.  There doesn't appear to be a significant drop-off.  As a matter of fact, out in the 15th-18th years, their performance seems to have gotten better!  Ok, so in the outer years we're not talking about a lot of players (for years 17 and 18 it's just 2 players - Tarkenton and Favre), so anomalies should be expected.  So what do we do?


I mentioned earlier that the average number of years in which a quarterback qualifies is 4.125 for all 416 quarterbacks.  The table below shows the data in more detail.




There are a total of 77 quarterbacks who have played (qualified) in 8 or more years.  That's 77/416 = 18.5% of the eligible list.  The average number of years for this group is 10.8, and the median is 11.  It would seem reasonable, then, to use a minimum of 8 years as a cut-off.  But how do we recognize players who have played a lot longer?  The method that I have chosen is to look at a given player's best 8 years.  This means several things.  First, the obvious: it eliminates from consideration any player who has qualified in fewer than 8 years.  Second, it rewards rather than penalizes longevity; in fact, the less a player has played, the greater the "penalty".  Third, given the small sample of players playing beyond 14 years, it is quite possible that one or more of them had good years in those later years, so why exclude those years?
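
Here is a sketch of the best-n-years idea.  The percentile ranks below are hypothetical; n defaults to 8 per the cut-off above, and varying n covers the best-n twist used later in this post.

```python
def best_n_average(season_percentiles, n=8):
    """Average of a player's best n qualifying seasons, ranked by
    percentile.  Players with fewer than n qualifying seasons are
    excluded from the comparison."""
    if len(season_percentiles) < n:
        return None  # fewer than n qualifying years: not comparable
    return sum(sorted(season_percentiles, reverse=True)[:n]) / n

# A hypothetical 10-season career: only the 8 best years count, so the
# two weakest seasons (40 and 31) drop out.
career = [88, 72, 65, 91, 55, 79, 31, 84, 40, 70]
print(best_n_average(career))  # 75.5
```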


So, going back to the tables where I showed each of the three quarterbacks and their individual seasons, you can see that Favre's best 8 years are better than Brees' best 8 years.  This gives credit to Favre's lengthy career.  Remember, fewer than 20% of the qualified quarterbacks even get to 8 years total, let alone are fortunate enough to have 8 good years.  Perhaps over time Brees might very well exceed Favre.  And, if and when he does, we'll give him credit for it at that time.


The case for Brees:


Although Brees is comparable to Favre over his first 8 years based on Adjusted Passer Rating, using CMTI gives Brees the nod.  Also notice that Brees has had four years in which he was better than the 85th percentile, while Favre had only one.  Brees' last four years in New Orleans have been especially strong.  In each of years 5-8, Brees outperforms Favre, while in the first four, Favre outdoes Brees in 3 of the four years.  Favre, it appears, built his legend largely on the success of his first year, and perhaps even starting with his first game.  Check out the following chart.




It shows the average percentile rank for all quarterbacks who qualified in 8 or more years, for each year they played.  Actually, it's not quite the number of years they played so much as the number of years in which they threw enough passes to qualify for a rating.  It's the blue line, and it includes data from the previously mentioned group of 77 quarterbacks.  The blue line in this graph is different from the line shown in the third graph earlier, in that it only includes data for those quarterbacks qualifying in 8 or more years.  The red line is the data for Drew Brees.  The green line is the data for Favre.  Clearly, outside of his first year and his 3 MVP years (years 3-5 for him), Favre hugged the average for this group of quarterbacks.  Some years he did better, and some he did worse.


We're almost done.  One more graph.




This one puts a slight twist on the notion of the best years.  Instead of just looking at the best 8 years, why don't we look at the best n years, for each number of years n that the quarterback has played?  This way, we can look at the best year for each quarterback, the best two years, the best three years, and so on (the best-n sketch above covers this by varying n).  For good measure, I also threw in Manning's #s.


So what do we conclude from all of this?


Again, if there was any doubt before, this should put it to rest.  Manning is clearly the best.


Between Favre and Brees, an argument could be made for either.  In terms of a total career, Favre gets the nod, simply because of the number of years he's played.  Brees has put up some impressive numbers so far.  If he continues this for a few more years, he will most likely finish ahead of Favre, but that remains to be seen.  Based on the evidence so far, however, he is doing at least as well as, if not better than, Favre did in his first 8 years.  Perhaps we'll check in again in a few years when Brees retires.  And, speaking of retirement, in my next post I'll take a look at Kurt Warner's career.



Sunday, January 17, 2010

The Adjusted NFL Passer Rating Revisited

This posting is an update and an enhancement to a previous posting that discussed the concept of the Adjusted NFL Passer Rating.  Back in 1971, when Don Smith, an executive with the Pro Football Hall of Fame, and his team developed the current NFL Passer Rating formula, the intent was that an 'average' Passer Rating would be equal to 66.7.  In essence, he decided that an average performance on each of the four components that make up the passer rating should score 1.0, with an exceptional performance scoring greater than 2.0 and a poor performance scoring 0.  Since the formula adds the scores for the four components, divides by 6, and multiplies by 100, an average performance works out to 4.0 / 6 × 100 = 66.7.

Since we have seen that the average NFL passer rating has increased over time (see my previous post on the topic here), I was curious as to what the average in 2009 looked like, and, more importantly, what would the actual NFL passer ratings have looked like if we adjusted each of the components such that the averages for 2009 were in fact 1.0.

Recall that in the formula, based on data from the 1960's, adjustments are made to each of the four components such that the average for each component works out to be 1.0.  The adjustments, call them k1, k2, k3 and k4, are:

  • k1 = 30
  • k2 = 3
  • k3 = 1
  • k4 = 1

Again, these factors were arrived at by looking at data from the 1960's.  And these same factors are applied to the formula regardless of when a quarterback actually played.  This has the benefit, of course, of creating a single formula that measures all quarterbacks against a fixed standard.  The problem, of course, arises when the overall standard changes over time, and this is what has happened over the past 80 years or so.

Well, we don't have to use data from the 1960's.  We will use each year as a stand-alone year.  So, for 2009, the factors that make each component equal to 1.0 for an average performance turn out to be:

  • 2009k1 = 40.90
  • 2009k2 = 2.98
  • 2009k3 = 1.20
  • 2009k4 = 1.78
Likewise, the factors for 2008 were:

  • 2008k1 = 41.00
  • 2008k2 = 2.94
  • 2008k3 = 1.28
  • 2008k4 = 1.95
And, for good measure, the actual factors for the NFL for 1969 were:
  • 1969k1 = 32.64
  • 1969k2 = 2.99
  • 1969k3 = 0.99
  • 1969k4 = 1.12
These 1969 factors are very close to the k1, k2, k3 and k4 values used in the formula.
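
Putting this together, here is my sketch of how the k factors enter the standard formula.  The exact placement of k3 and k4 as multipliers is my own inference from the values above, not a published specification; with the default k values, the function reproduces the official NFL formula.

```python
def adjusted_passer_rating(comp, att, yds, td, ints,
                           k1=30.0, k2=3.0, k3=1.0, k4=1.0):
    """NFL passer rating with adjustable baselines.  Defaults reproduce
    the official formula; substituting a given year's k1-k4 yields that
    year's Adjusted Passer Rating (league average forced to 66.7)."""
    clamp = lambda x: max(0.0, min(2.375, x))
    a = clamp((100.0 * comp / att - k1) / 20.0)         # completion %
    b = clamp((yds / att - k2) / 4.0)                   # yards per attempt
    c = clamp(k3 * (100.0 * td / att) / 5.0)            # touchdown %
    d = clamp(2.375 - k4 * (100.0 * ints / att) / 4.0)  # interception %
    return 100.0 * (a + b + c + d) / 6.0

# Sanity check with the official constants on a hypothetical stat line:
print(round(adjusted_passer_rating(332, 544, 4000, 27, 14), 1))  # 89.4
```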

The table below shows the 2009 Adjusted Passer Ratings for the 32 quarterbacks who threw enough passes this past season to qualify.  The table also shows their actual 2009 NFL Passer Ratings.  You can see that the Adjusted Passer Ratings are clearly lower (as a group) than the actual NFL Passer Ratings.  That's because the actual average NFL passer rating in 2009 was 83.0, whereas the average Adjusted Passer Rating for 2009 was, by construction, 66.7 (we forced it to be so by adjusting the formula such that the average value of each component equals 1).



Although the Adjusted Passer Ratings themselves are a lot lower as a group than the actual NFL Passer Ratings, you'll notice that the rankings for each are essentially the same.  Why is that?  Very simply, since both calculations use the same essential formula with only the constants changed, all this really does is 'normalize' the actual ratings around a value of 66.7 as opposed to 83.0.  And yes, since the relative weight of each of the four components does change, there is some movement in the rankings.  Interestingly, I also included my CMTI calculations (and their respective ranks), and they correlate highly with both the Adjusted Passer Rating rankings and the NFL Passer Rating rankings.

Just so that you can see for yourself, I've included the table (shown below) that shows the average NFL passer rating by year from 1932 to 2009 using the current NFL passer rating formula (where k1 = 30, k2 = 3 and k3=k4=1 for all years), as well as the adjusted k1-k4 factors for each year.  There is no need to show the Adjusted Passer Rating for the NFL in each year, since it is the same in every year - 66.7.  If you look at the bottom of the table, for years 1960-1969, you'll see where Don Smith derived his k values from.



Looking at this table, you should be able to understand why the current NFL passer rating system is of little to no use when you need to compare quarterbacks from one era to another.  It can truly only be used to compare one passer against another in a given year, not across years.  It stands to reason, then, that the current NFL passer rating formula isn't much use when evaluating a quarterback's career, especially a long one (you can see the career NFL passer rating leaders here).  In future posts, I will discuss this in more detail.

Sunday, January 10, 2010

2009 NFL Season - Passer Ratings

We now have data for the entire 2009 season, and I can calculate the best and worst performers for the season using my standard measure, CMI - or, more appropriately, standard deviations from mean CMI.  CMI is easily calculated, as it is simply [pass completion percentage] - 3 * [interception percentage].  I can calculate that at any point during the season.  In order to calculate the standard deviations from the mean CMI, I like to wait until the season is over (actually, even this can be calculated at any point during the season, but it's a lot simpler for me to just do it at the end of the season - perhaps next year I can do this after every week during the regular NFL season).  For those of you familiar with statistics, the measure of standard deviations from the mean is also called a z-score (or z-value).  Furthermore, a z-score can be very easily converted to a percentile rank (assuming that the population is normally distributed; in a later post, I will show that z-scores for CMTI and NFL passer rating over the years are distributed normally).  The beautiful thing about a percentile rank is that it is always a number between 0 and 100.  What's not to like about that!
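
Here is a minimal end-to-end sketch of that workflow on a handful of hypothetical qualifiers, tying together CMI, z-scores, and percentile ranks:

```python
from math import erf, sqrt
from statistics import mean, pstdev

# Hypothetical CMI values for four qualifiers (completion percentage
# minus three times interception percentage).
cmi_by_qb = {"QB A": 61.5, "QB B": 55.0, "QB C": 48.2, "QB D": 58.7}

mu = mean(cmi_by_qb.values())
sigma = pstdev(cmi_by_qb.values())
for name, value in cmi_by_qb.items():
    z = (value - mu) / sigma                  # standard deviations from the mean
    pct = 100 * 0.5 * (1 + erf(z / sqrt(2)))  # percentile rank, 0-100
    print(f"{name}: z = {z:+.2f}, percentile rank = {pct:.0f}")
```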

A quick couple of notes here regarding CMI, and many of the calculations that I will be using from here on out. While I will go through the exercise of creating arduous, complex and sometimes arbitrary (and even unnecessary) calculations, I will strive for simplicity whenever possible when I present my ideas, as I believe they are more likely to be accepted.   The use of percentile ranks is an example of this.

Last year, when I first developed CMI, and especially as I looked back in time, I had to figure out how many players to use each year - the number of qualifiers.  I went through some elaborate mechanisms to calculate the number of passes attempted during the season (per team per game) and derive a qualifying standard from that.  As you may be aware, the NFL uses 14 passes per game as the standard.  And while it is simple to use, I am not a big fan of that standard, since the average # of passes attempted per game has increased over time, as you can see from the table below.



The standard today should be closer to 16.

Last year, I arbitrarily assigned a factor close to 50% (with adjustments for the earlier years) to the total # of passes attempted by each team per game to determine the # of qualifiers.  I realized this year that if I simply use the # of teams in the league as the cutoff for determining the # of qualifiers, I get fairly close to my original method.  And because it is simple, easy to explain, and does a better job (in my opinion, anyway) than the current NFL standard, I will use that as the cutoff.  The table below shows the # of qualifiers by season using the NFL standard, my original standard, and my current standard.




As you can see, the # of qualifiers is fairly close under each method in each year, the exceptions being the earliest years, where the NFL method simply excludes too many passers (in my opinion, anyway).  Also note that the AAFC is excluded by the NFL altogether, while the AFL and NFL have been combined by the NFL in the years 1960-1969.  This doesn't make sense to me, since each league played a different # of games (and had a different # of teams).  Regardless, if you combine the AFL and NFL results from my method, you get close to the #s used by the NFL.  While I think that my method works, I also recognize that this standard may not be the most appropriate for future NFL seasons.
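
One natural implementation of the cutoff rule just described (a sketch; the data shape is illustrative): rank the season's passers by attempts and keep the top N, where N is the number of teams in the league that year.

```python
def qualifiers(attempts_by_passer, num_teams):
    """Top-N passers by pass attempts, where N = # of teams that season."""
    ranked = sorted(attempts_by_passer.items(),
                    key=lambda item: item[1], reverse=True)
    return [name for name, _ in ranked[:num_teams]]

# e.g., a 32-team season keeps the 32 passers with the most attempts:
# qualifiers(attempts_2009, 32)
```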

OK, back to the passer ratings.

Because I will discuss other passer rating systems and calculations related to CMI in later posts, I will take this moment to re-classify CMI as CMTI (the acronym stands for Completions Minus Three-times Interceptions).  The definition itself hasn't changed - just the acronym.

The table below shows each qualifying quarterback with their 2009 season statistics; their 2009 NFL passer rating, its percentile rank (in other words, the converted z-score), and their 2009 rank in terms of NFL passer rating; the same three figures for 2008; their 2009 CMTI and its percentile rank; their 2008 CMTI, its percentile rank, and their 2008 CMTI ranking; and finally, their average 2-year CMTI percentile rank (again, a # between 0 and 100), together with that ranking.

I don't think there's any question as to who the elite quarterbacks are.  Also notice the struggles of first-year quarterbacks in each year.


Happy New Year!

Hi Everyone.

Happy New Year to you all.

It's been a while since I've posted.  The reasons are many, so I shall not go into them.  Regardless, I am back posting.

I have been able to do some additional research in the meantime.  I have added data from the 1932-1939 NFL seasons to my database.  I have also added the 1946-1949 AAFC seasons as well as the 1960-1969 AFL seasons.  And, of course, I have updated everything for the just-concluded 2009 NFL season.

I think that adding the data from the AAFC in the late 1940's as well as the AFL during the 1960's makes for a much richer database.  There are those however, who do not believe that including that data makes sense.  I disagree.

Having spent some time thinking about my pursuit here (creating a new passer rating system for the NFL), I've created new statistical measures in addition to those I created last year that I will discuss in upcoming posts.  I've also improved upon my original concept of CMI.

Anyhow, I'm glad to be back posting and look forward to a terrific 2010.

Cheers,
Kiran