The All-Knowing Derbytron is now on derbytron.com

This site will no longer be updated. Go to derbytron.com for all the fresh Derbytron content.

4.30.2009

Methodology

The basis of the Derbytron rankings is to order teams by their relative quality. Every part of the equation serves that goal: #1 should beat #2, #34 should beat #35, and so on (assuming these games are played on a neutral floor). This isn't a system that will determine who is having the best season or who has the best history. So, how does it work?

My goal with this equation is to rate average teams at .5, with better teams moving closer to 1 and worse teams moving closer to 0.

50% - What happened in the game? That's important, right?
This is determined using the Pythagorean expectation. Essentially, it's an equation that uses the score of a game to predict a team's winning percentage, producing a number between 0 and 1. A close game will give a number close to .5 to both teams. A blowout will give a number approaching 1 to the winning team and a number approaching 0 to the losing team. This is what I call the vacuum game rating, since the number only takes into account the score of the game (it doesn't matter where it's played or how good or bad either team is; it's as if the game is being played in a vacuum).
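As a sketch, the vacuum rating could look like the function below. The post doesn't say which exponent the Pythagorean formula uses, so the 2 here is an assumption, not the site's actual value:

```python
def vacuum_rating(points_for, points_against, exponent=2):
    """Pythagorean expectation for a single game.

    Returns a number between 0 and 1: close to .5 for a close game,
    approaching 1 for the winner of a blowout and 0 for the loser.
    The exponent is an assumption; the post doesn't specify one.
    """
    pf = points_for ** exponent
    pa = points_against ** exponent
    return pf / (pf + pa)
```

For example, a 100-95 squeaker gives the winner about .53, while a 200-50 blowout gives the winner about .94 and the loser about .06.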

25% - Who are you playing and what does their vacuum look like?
The vacuum rating is then combined with the opponent's average vacuum rating. This number gives a good indication of the quality of an opponent, but it isn't everything. A team could be very good, but if they've only played the top four teams, they may have a very low vacuum rating. That is why this part is only worth 25%.

25% - Who has your opponent played? Kind of important.
How has the opponent fared when compared to their competition? This number is calculated by combining the opponent's game ratings with their opponents' ratings, giving a more accurate picture of the opponent's quality.
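Putting the three pieces together, a single game's rating could be sketched as a simple weighted average with the 50/25/25 split described above. The exact way the site combines the components isn't spelled out, so treat this as an illustration of the weights rather than the actual formula:

```python
def game_rating(vacuum, opp_avg_vacuum, opp_schedule_rating):
    """Combine the three components with the 50/25/25 weights:
    50% what happened in the game (the vacuum rating),
    25% the opponent's average vacuum rating,
    25% how the opponent fared against their own competition."""
    return (0.50 * vacuum
            + 0.25 * opp_avg_vacuum
            + 0.25 * opp_schedule_rating)
```

Note that a .6 vacuum game against a perfectly average opponent (.5 on both opponent components) works out to .55, which matches the intent: a solid win over an average team nudges you above average, but not dramatically.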


Other Variables

Where was it played?
A percentage is added to all away games and subtracted from all home games. There is an advantage to playing at home, so that needs to be reflected in the ratings. Neutral-site games are unaffected by this variable.
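In code, the venue adjustment might look like this. The size of the adjustment (.02 here) is hypothetical; the post only says "a percentage" without giving a number:

```python
def adjust_for_venue(rating, venue, home_advantage=0.02):
    """Credit away teams and debit home teams by a fixed amount;
    neutral games pass through unchanged. The .02 default is a
    placeholder, not the site's actual value."""
    if venue == "away":
        return rating + home_advantage
    if venue == "home":
        return rating - home_advantage
    return rating  # neutral floor: no adjustment
```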

When was it played?
Games from this season are far more important than games from last season; teams can change dramatically from one year to the next. So, only games from the last six months of the previous year are counted. If a team played more than five games in those six months, only the last five are counted (unless multiple games were played on the same weekend, with nationals and regionals being the prime example). These prior-season games carry much less weight than games from the current year.
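The prior-season filter could be sketched like this. The 183-day window is an approximation of "six months," and the same-weekend exception is omitted for brevity:

```python
from datetime import date, timedelta

def prior_season_games(games, season_start, max_games=5):
    """Keep only prior-season games from roughly the six months
    before `season_start`, capped at the five most recent.
    `games` is a list of (game_date, rating) tuples. The
    same-weekend exception from the post is not handled here."""
    window_start = season_start - timedelta(days=183)  # ~six months
    in_window = sorted(g for g in games
                       if window_start <= g[0] < season_start)
    return in_window[-max_games:]
```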

Blowouts shouldn't hurt you.
One of the problems I ran into with this system was highly ranked teams blowing out lowly ranked teams and having their overall rating lowered. For example, if a team rated .7 plays a team rated .2, even if the .7 beats the .2 by 400 points, the .7's overall average would come down after being combined with the other team's low numbers. I made up for this by inserting a blowout contingency plan: a blowout cannot hurt you. I have defined a blowout as a win by more than 80 points, so if a team beats another team by 87 points but that game rating is lower than their final rating, they are awarded their final rating for that game. A team cannot move down in the rankings after blowing out another team. It's a sliding scale up to 80, so there isn't a huge difference between beating a team by 79 points and beating them by 80.
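One way to sketch the blowout rule is below. The post doesn't describe the exact shape of the sliding scale, so the linear blend below the threshold is an assumption; only the "a blowout cannot hurt you" floor at 80+ points is stated directly:

```python
def blowout_floor(game_rating, final_rating, margin, threshold=80):
    """Above the threshold, a win never rates below the winner's
    overall (final) rating; below it, blend linearly toward that
    floor so a 79-point win isn't treated much differently from an
    80-point win. The linear blend is an assumption; the post only
    says 'sliding scale'."""
    if final_rating <= game_rating or margin <= 0:
        return game_rating  # the rule only ever helps the winner
    if margin >= threshold:
        return final_rating  # full protection: can't hurt you
    blend = margin / threshold
    return game_rating + blend * (final_rating - game_rating)
```

So a .7 team that posts a weak .4 game rating while winning by 87 keeps its .7 for that game, while the same .4 game rating on a 40-point win would land halfway between, at .55.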

Teams that haven't played anyone else suck.
Teams playing their first game are not counted. All of the averages I've mentioned are determined without the current game as a factor. But the averages of teams that have only played two games wouldn't be averages at all; they'd just be the rating of the one other game they've played. So, for teams that have only played two games, both games are calculated into their average ratings.

What games are being used?
Currently, data is only being taken from WFTDA-sanctioned games, which means only WFTDA teams are being counted in the ratings. Eventually, I would like to include every interleague team in the country, including secondary travel teams and non-WFTDA teams. That is not possible at this time. I'm working on it, though.

Hopefully all of this makes sense. Any questions you have should be posted in the comments (don't be shy) and I'll reply and update the method description with more accurate information.

3 comments:

  1. I created a ratings method, too:

    http://thepivotline.com/ratings/

    The more, the merrier!

  2. This comment has been removed by the author.

  3. What is the methodology behind "schedule strength"?
