Zenor's College Football Power Ratings

THEORY

The point of any set of power ratings is to explain the outcome of games played. In short,

	Margin of Victory = Team A Rating - Team B Rating

or,	m(a,b) = r(a) - r(b)

So, if Enormous State beats Watsamatta U. by 37, ESU should have a power rating 37 points above WU. Simple enough, until we note the pesky problem of unexplained error. What are we to do if Watsamatta beats Siwash by 10, and Siwash recovers to beat ESU by 3? There's no set of power ratings that can simultaneously account for the outcome of all three games. Heaving a sigh and bowing to reality, we stick an error term into the equation:

	m(a,b) = r(a) - r(b) + error(a,b)
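To see why some error is unavoidable here, note that the three games above would demand

	r(ESU) - r(WU) = 37
	r(WU) - r(Siwash) = 10
	r(Siwash) - r(ESU) = 3

The left-hand sides sum to zero, while the right-hand sides sum to 50, so no single set of ratings can satisfy all three equations exactly.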

The trick now is to find a set of power ratings that minimizes the unavoidable error. As luck would have it, there is a unique set of power ratings that minimizes the total (squared) error. If you have a bit of grounding in vector calculus, the next section provides a simple proof.

PROOF

Since every game has its own unique and separate equation, what we now have is a simultaneous equation model. The number of equations equals the number of games, and the number of unknowns equals the number of teams (one power rating for each team). So long as the number of games played exceeds the number of teams, the system is overdetermined and a (least-squares) solution exists. Rather than write down all those equations, it's simpler to express them in matrix-vector notation:

	M = XR + E

where M is a column vector containing the observed victory margin of each game. X is a matrix indicating who won and who lost. Each row in X represents a game; each column represents a unique team. For each game (row) the winner's column gets a "1", the loser's a "-1", and all others get a "0". R is a column vector of theoretical power ratings for each team. E is a column vector containing errors.
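As a concrete sketch (not the author's original code), here is how M and X might be assembled in Python for the three hypothetical games from the THEORY section; the variable names and layout are my own:

import numpy as np

# Hypothetical game results from the THEORY section: (winner, loser, margin)
games = [
    ("Enormous State", "Watsamatta U.", 37),
    ("Watsamatta U.", "Siwash", 10),
    ("Siwash", "Enormous State", 3),
]

teams = sorted({t for g in games for t in g[:2]})
col = {team: j for j, team in enumerate(teams)}

# M: observed victory margins, one entry per game
M = np.array([margin for _, _, margin in games], dtype=float)

# X: one row per game, one column per team; +1 for the winner, -1 for the loser
X = np.zeros((len(games), len(teams)))
for i, (winner, loser, _) in enumerate(games):
    X[i, col[winner]] = 1.0
    X[i, col[loser]] = -1.0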

Here's where the vector calculus comes in: first, the total amount of (squared) error is

	E'E = (M - XR)'(M - XR)

We can take the vector derivative of total error with respect to R:

	d(E'E)/dR = -2X'M + 2X'XR

Setting the derivative to 0 gives the first-order condition for a minimum; solving for R, we get

	R* = (X'X)^-1 X'M

The second-order condition for a minimum requires that the Hessian matrix 2X'X be positive semi-definite, which it is: for any vector v, v'(X'X)v = (Xv)'(Xv) >= 0. A more formal treatment can be found in most mathematical statistics books.

Ah, the beauty of mathematics. To wit: this proves that the vector R* now contains the absolute, without argument, dead-certain-best power ratings for explaining all game outcomes, Q.E.D.
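Continuing the earlier sketch (again only as an illustration, not the code behind the published ratings): strictly speaking X'X is singular here, because each row of X sums to zero and the ratings are only identified up to an additive constant. So the snippet below uses a least-squares solve and then centers the result to a zero mean, matching the "average team is at zero" scaling described in the RESULTS section.

# Continues the earlier sketch; X, M, and teams are as built there.
R, *_ = np.linalg.lstsq(X, M, rcond=None)   # least-squares solution to XR ~= M
R -= R.mean()                               # pin down the additive constant

for team, rating in sorted(zip(teams, R), key=lambda t: -t[1]):
    print(f"{team:20s} {rating:+7.1f}")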

RESULTS

Two different sets of power ratings are available, based on two different ways of treating winning margins. The first method tries to explain the entire winning margin in all games played. It favors teams that win and win big. Think of it as the "Bowden-Osborne Memorial Blowout Index" (BOMB Index). The ratings are scaled so that the average team is at zero.

The second set of ratings severely discounts blow-out scores. In fact, any win is treated as if it were a one-point win. That is, FSU can run up the score all day on Central Florida and it won't change its rating. Think of it as the "Just-Win-Baby Index" (JWB Index). I like the implicit fairness of this method because it favors teams who win against teams who win against teams who win, and so on. Again, the ratings are scaled so that the average team is at zero.
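The text doesn't spell out the JWB transformation beyond treating any win as a one-point win, so a minimal reading (an assumption on my part) is to replace every observed margin with 1 and re-estimate, reusing X from the earlier sketch:

# JWB-style sketch: every win counts as a one-point win (assumed reading).
M_jwb = np.ones(X.shape[0])                  # each margin becomes 1
R_jwb, *_ = np.linalg.lstsq(X, M_jwb, rcond=None)
R_jwb -= R_jwb.mean()                        # average team at zero, as before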

1996 Update

I have traditionally focused on accounting for the outcomes of all the games played in the current season, the philosophy being to resolve the yearly "Who's #1" dispute. In the interest of fairness I previously treated every game, whether in September or December, as equally meaningful, and ignored the previous season's results. My goal then was to provide a dispassionate way of assessing the best teams over the course of the entire season. This also meant that ratings couldn't be computed until about the 3rd or 4th week of the season.

I noticed, however, that most of the e-mail I received revolved around the predictions rather than the ratings themselves. Where I was interested in accounting for the past, readers were interested in predicting the (short-term) future. To accommodate this, I experimented with a "fading memory" coefficient based on historical data. I found that applying a slightly higher weight to more recent games produces better forward predictions, even though it reduces the R-squared in terms of accounting for the past. I'm now using the fading memory coefficient in my estimation. This also allows me to incorporate last season's games, but they are essentially "forgotten" by the 5th or 6th week of the season.
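The value and functional form of the fading-memory coefficient aren't given, so the sketch below simply assumes an exponential decay on game age with a made-up weekly factor, applied as a weighted least-squares fit (reusing X and M from the earlier sketches):

# Fading-memory sketch; the decay form and numbers below are hypothetical.
lam = 0.9                                      # assumed weekly decay factor
weeks_ago = np.array([2.0, 1.0, 0.0])          # age of each game, made up
w = lam ** weeks_ago                           # recent games weigh more

# Weighted least squares: scale each game's row and margin by sqrt(weight).
sw = np.sqrt(w)
R_fade, *_ = np.linalg.lstsq(X * sw[:, None], M * sw, rcond=None)
R_fade -= R_fade.mean()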

The home-field coefficient is also new and improves the predictions. I didn't use it in the past because my previous data source did not indicate the home team in game results. I have also added ratings for all Division I-AA teams for completeness.
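How the home-field coefficient enters the model isn't described either; one common encoding (an assumption here) is an extra column that is +1 when the winner was at home, -1 when the loser was at home, and 0 for a neutral site, so its fitted coefficient estimates the home-field edge in points:

# Home-field sketch; the indicator values below are made up for the three games.
home = np.array([1.0, -1.0, 0.0])            # +1 winner at home, -1 loser at home
X_h = np.column_stack([X, home])
coef, *_ = np.linalg.lstsq(X_h, M, rcond=None)
ratings, home_edge = coef[:-1], coef[-1]
ratings -= ratings.mean()                    # team ratings still average to zero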


Michael Zenor / zenor@trajecta.com