**xHPI Technical Notes**

I’m sure that some people who read these rankings will be curious about how they are determined. If I want xHPI to be taken seriously, I have to open it up to scrutiny. This paper is an attempt to describe as clearly as possible the basis for the rankings.

While I’ve read my fair share of technical journals in which complex equations are described, I’m certain that some readers of this paper will be far more mathematically inclined than I am. I am going to try my best to explain my calculations in equation form, because for those who are comfortable with such equations, they may be the clearest way to explain my calculations. However, those who are less familiar with relatively complex equations may be no less curious about what goes into xHPI, so I’m going to try to explain it in lay person’s terms, as well. With any luck, I’ll be able to satisfy the curiosity of both the mathematician and lay person, though it’s entirely possible that I will end up confusing both.

The outcome of each game gives three basic pieces of information that we can use to evaluate the “goodness” or “badness” of a win or loss. First, obviously, is whether a team won or lost the game. All things being equal, obviously, a win is good, while a loss is bad. But most of us consider some wins to be better than others, and some losses to be worse than others. The other two pieces of information help us determine the quality of a win or a loss. One of those is the quality of the opponent: a win over a highly-ranked team is considered to be a better win than a victory over a team with a poor record. The final piece of information is the scoring margin. Blowouts are generally deemed better wins (or worse losses) than close games. Scoring margin is a controversial factor in ratings systems, which I will not address here. I have arguments for the importance of scoring margin (and adjustments to address some of the objections for use of scoring margin) on the “Commentary” page of my blog at xhpi.wordpress.com.

__Step One: Adjusting the Game Score__

Ratings are based on all of the results of each game each team has played. The first step in calculating the xHPI score for a team is to adjust the score into a more usable form. I substitute a measure I will call “Average Quarterly Share” (AQS) for the actual score. The AQS makes two adjustments: it substitutes a team’s share of the score (with a modification, explained below) for the actual score, and it incorporates a measure of the team’s relative score throughout the game, not just the final score.

A team’s average quarterly share for any game can be calculated with the following equation:

$$AQS_t = \frac{1}{10}\sum_{n=1}^{4}\left[n \times \left(\frac{S_{t,n} + 10}{S_{c,n} + 20}\right)\right]$$

where $S_{t,n}$ is the score for team *t* at the end of the *n*th quarter, and $S_{c,n}$ is the combined score for both teams at the end of the *n*th quarter.

The average quarterly share accounts for half of the team’s score for the game. The other half comes from the team’s adjusted share of the final score. This adjustment is similar to the adjustment made to the quarter scores: ten points are added to each team’s score, and the share of adjusted points for each team is calculated. The sum of those two factors, divided by two, yields that team’s game score, $G_t$. This calculation is represented in the following equation:

$$G_t = 100 \times \frac{AQS_t + \dfrac{F_t + 10}{F_c + 20}}{2}$$

where $F_t$ is the final score for team *t* and $F_c$ is the combined final score of the two teams.

Note that each quarter’s score, as well as the final score, is adjusted by adding ten to each team’s score. I do this for two reasons. First, it keeps any team’s score from being zero. A score of zero plays havoc with later calculations because multiplication is involved, meaning that the product will always be zero. A zero as any factor in a game’s result (whether it is the degree of win or loss, or the quality of the opponent) negates the other factors and skews the results. Second, zeroes can also completely remove margin of victory or defeat from the result. Without such an adjustment, there would be no difference between a 3-0 victory and a 70-0 victory. This seems counterintuitive, since most people would judge the latter score to be a much more dominant victory.
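As a quick numeric illustration of my own (the function name is just for this sketch), here is how the ten-point adjustment separates those two shutout scores:

```python
def adjusted_share(team_points, opponent_points, pad=10):
    """A team's share of the score after adding `pad` points to each side."""
    return (team_points + pad) / (team_points + opponent_points + 2 * pad)

# Without the adjustment, both shutout winners would have a share of 1.0.
# With it, the 70-0 winner earns a visibly larger share than the 3-0 winner.
print(round(adjusted_share(3, 0), 3))   # 13/23 -> 0.565
print(round(adjusted_share(70, 0), 3))  # 80/90 -> 0.889
```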

Following this process will yield scores for the two teams that will always add up to 100.

To illustrate this adjustment process, I will use the scores from the 2011 SEC Championship Game played by Georgia and LSU. The line score of the game was:

Georgia 10 0 0 0 - 10

LSU 0 7 21 14 - 42

Instead of using the points scored in each quarter, I use the cumulative score at the end of each quarter, so the line score is modified as follows:

Georgia 10 10 10 10 - 10

LSU 0 7 28 42 - 42

To calculate the AQS for each team, ten points are added to each quarterly score and to the final score, as follows:

Georgia 20 20 20 20 - 20

LSU 10 17 38 52 - 52

Completing the calculations within each set of parentheses in the AQS equation, above, each team’s share of the modified quarterly scores is calculated:

Georgia .667 .541 .345 .278 - .278

LSU .333 .459 .655 .722 - .722

The next step in the AQS equation is to complete the operations within each set of brackets. Essentially, this multiplies each quarterly share of the score by the number of the quarter (first quarter × 1, second quarter × 2, etc.). This weighting incorporates the idea that later scores are more important than early scores, so the value of the score at the end of each quarter grows progressively larger throughout the game. Carrying out those operations transforms each quarter’s share, and the four quarterly products are then added together, as follows (the final score is not included below, because it does not figure into this step of the calculation):

Georgia .667 + 1.081 + 1.034 + 1.111 = 3.893

LSU .333 + .919 + 1.966 + 2.889 = 6.107

Finally, dividing each result by ten yields each team’s AQS.
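The AQS steps above can be sketched in a few lines of Python (my own illustrative sketch; the function and variable names are not part of the system itself):

```python
def average_quarterly_share(team_cumulative, opponent_cumulative, pad=10):
    """AQS from the cumulative scores at the end of each of the four quarters.

    Each quarterly share (after the ten-point adjustment to each side) is
    weighted by its quarter number (1-4), and the weighted sum is divided
    by 10, the sum of the weights."""
    total = 0.0
    for n, (t, o) in enumerate(zip(team_cumulative, opponent_cumulative), start=1):
        total += n * (t + pad) / (t + o + 2 * pad)
    return total / 10

# 2011 SEC Championship: cumulative scores at the end of each quarter
georgia = [10, 10, 10, 10]
lsu = [0, 7, 28, 42]
print(round(average_quarterly_share(georgia, lsu), 4))  # 0.3893
print(round(average_quarterly_share(lsu, georgia), 4))  # 0.6107
```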

The game score is simply the average of the AQS and the final score, as modified in the same way that each quarter’s score was modified (add ten to each team’s score and then calculate the percentage of the total adjusted points scored).[1]

In our example, that average looks as follows when we substitute the AQS and the modified final scores into the equation:

For Georgia: $100 \times \dfrac{.3893 + \dfrac{20}{72}}{2} = 100 \times \dfrac{.3893 + .2778}{2} = 33.36$

For LSU: $100 \times \dfrac{.6107 + \dfrac{52}{72}}{2} = 100 \times \dfrac{.6107 + .7222}{2} = 66.64$

I make one last adjustment. To increase the likelihood that the winning team ends up with a greater share of the final score, two points are subtracted from the losing team’s game score and added to the winning team’s game score. The adjustment doesn’t eliminate all instances of a losing team having a higher game score; in the 2011 season, slightly less than one game per week had a losing team with a higher game score. However, the adjustment does reduce the number of such instances by about two-thirds.
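Putting the game score and the two-point winner adjustment together, a sketch of Step One’s final output might look like this (again my own illustration, using the AQS values from the example above):

```python
def game_scores(winner_aqs, loser_aqs, winner_final, loser_final, pad=10):
    """Game scores for the winner and loser, including the two-point transfer.

    Each raw score averages the team's AQS with its adjusted share of the
    final score, scaled so the two raw scores sum to 100; two points are
    then shifted from the loser to the winner."""
    total = winner_final + loser_final + 2 * pad
    w = 100 * (winner_aqs + (winner_final + pad) / total) / 2
    l = 100 * (loser_aqs + (loser_final + pad) / total) / 2
    return w + 2, l - 2

# LSU 42, Georgia 10 (AQS values computed in Step One)
lsu_score, georgia_score = game_scores(0.6107, 0.3893, 42, 10)
print(round(lsu_score, 2), round(georgia_score, 2))  # 68.65 31.35
```

Note that the two scores still sum to 100 after the transfer; the adjustment only shifts points between the teams.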

__Step Two: Converting Scores to Ratings__

As discussed above, each game outcome contains three pieces of information about the quality of the win or loss. The process of computing the game score basically incorporates two of those items: whether a team won or lost, and the margin of victory or defeat. Converting the game scores to ratings requires introducing the third piece of information—opponent quality—into the calculation. xHPI takes the game score and multiplies it by a measure of opponent quality to come up with an outcome for each game. The final xHPI rating for a team is the average of those outcomes for every game the team has played in a season.

The simplest measure of a team’s quality is its win-loss record. All things being equal, teams with higher win percentages are better than teams with lower win percentages. Of course, all things are rarely equal, which is why there are so many ranking and rating systems available.

In the absence of an alternative, xHPI uses win percentage as a starting point. However, simple win percentage brings with it the multiplication-by-zero problem discussed earlier. To avoid this problem, xHPI simply adds .5 to each team’s winning percentage. Each team’s adjusted win percentage (AWP) therefore can range from .5 to 1.5.[2]

The equation for making the adjustment looks like this:

$$AWP = \left(\frac{W}{W + L}\right) + .5$$

where W = total wins by a team, and L = total losses by a team.

Once the adjusted winning percentage is determined, it can be multiplied by the game score calculated above. Those results for each of a team’s games can then be averaged to yield an initial quality rating (IQR).

In equation form, that calculation is:

$$IQR_t = \frac{\sum_{i=1}^{N} \left(G_{ti} \times AWP_{oi}\right)}{N}$$

where $G_{ti}$ = the game score for team *t* in game *i*, $AWP_{oi}$ = the adjusted winning percentage of the opponent in game *i*, and $N$ = the number of games team *t* has played.
In other words, for each game, the game score is multiplied by the adjusted winning percentage of the opponent. That calculation is repeated for each of the games the team has played, and those products are added together. That sum is divided by the number of games the team has played, yielding an initial measure of the team’s quality.
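The IQR steps just described can be sketched as follows (a hypothetical illustration of mine; the game scores and opponent records are made up):

```python
def initial_quality_rating(game_results):
    """IQR: average of (game score x opponent's adjusted win percentage).

    `game_results` is a list of (game_score, opp_wins, opp_losses) tuples."""
    total = 0.0
    for score, opp_wins, opp_losses in game_results:
        awp = opp_wins / (opp_wins + opp_losses) + 0.5  # ranges from .5 to 1.5
        total += score * awp
    return total / len(game_results)

# Hypothetical three-game season: game scores against opponents
# with 8-4, 3-9, and 6-6 records
season = [(62.0, 8, 4), (71.5, 3, 9), (48.2, 6, 6)]
print(round(initial_quality_rating(season), 2))  # 58.05
```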

This initial measure is an improvement over winning percentage as a measure of a team’s quality, because it incorporates more information. However, winning percentage remains a significant contributor to the final result. To reduce the effect of winning percentage on the final rating, the calculation that is used to determine the initial quality rating is repeated, but with the opponent’s IQR substituted for the AWP.
In equation form, that final rating—its xHPI rating—for team *t* is:

$$xHPI_t = \frac{\sum_{i=1}^{N} \left(G_{ti} \times IQR_{oi}\right)}{N}$$

where $IQR_{oi}$ = the initial quality rating of the opponent in game *i*.
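To round out the sketch, the final substitution can be illustrated in the same hypothetical Python (the opponent IQR values here are made up; note that any consistent scaling of the final number leaves the ordering of teams unchanged):

```python
def xhpi_rating(game_results):
    """Final xHPI rating: the IQR calculation repeated with each opponent's
    IQR in place of its adjusted winning percentage.

    `game_results` is a list of (game_score, opponent_iqr) tuples."""
    return sum(score * opp_iqr for score, opp_iqr in game_results) / len(game_results)

# Hypothetical three-game season with made-up opponent IQRs
season = [(62.0, 57.3), (71.5, 44.1), (48.2, 51.8)]
print(round(xhpi_rating(season), 2))  # 3067.5
```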

[1] Had this game gone into overtime, the overtime score would have been reflected in the final score, but not any of the quarterly scores. The number of overtimes is immaterial.

[2] This adjustment has some similarities to a Laplace method that I have seen others employ. I have been experimenting with using a true Laplace adjustment. Substituting a Laplace adjustment changes the rankings slightly, but has only a small effect on the predictive power of the ranking system.