Questions

This is everything you need to know about the BCS. Probably more. Please check here first before sending me e-mail with questions about the ratings. It will save us both time.

Last updated 11/27/10

Answers

What is the BCS?
The Bowl Championship Series, or BCS, replaced what was known as the Bowl Alliance. It is the latest attempt to create a National Championship without having an actual playoff. The BCS is administered by the conference commissioners and the Notre Dame AD. They have created a rating system to determine who should play in the National Championship game at the end of the season. The top two teams in the ratings at the end of the regular season will meet in the title game.

There are four bowls involved in the BCS: Rose, Sugar, Orange and Fiesta.

Starting with the 2006 season, a fifth BCS game has been added to serve as the championship game. This is not the "plus one" system. This simply means that there will now be ten teams playing in BCS games instead of eight. The fifth game is played at the site of one of the other four. The four bowls will rotate as hosts. The Fiesta hosts the championship game this year, followed by the Sugar, Orange and Rose in the rotation.

Which teams are eligible for BCS bowls?
The automatic qualification standards are not the same for all teams. The teams are divided into four groups: automatic qualifying (AQ) conferences (ACC, Big Ten, Big 12, Big East, Pac Ten, SEC), non-automatic qualifying conferences (C-USA, MAC, Mtn West, Sun Belt, WAC), Notre Dame, and other independents.

There are ten spots in the five BCS bowl games (Rose, Fiesta, Orange, Sugar and the title game). No conference may place more than two teams in the BCS games, with one exception. If an AQ conference has the top two teams in the standings, but neither is the champion, then those two teams play for the BCS title as at-large teams, and the champion participates as well. This interpretation was changed in the 2008 season. The old interpretation was that the champion would not participate, to keep the two-team limit in place. The automatic berths go to the following teams:

  1. The top two teams in the rankings. Those teams are assigned to the title game.
  2. AQ conference champions, regardless of ranking.
  3. The highest-rated champion of a non-AQ conference, if it is either ranked in the top 12, or ranked in the top 16 and ahead of at least one AQ conference champion.
  4. Notre Dame, if it finishes in the top eight.
  5. The #3 team, if it is a member of an AQ conference and there is still an open spot.
  6. The #4 team, if it is a member of an AQ conference and there is still an open spot and no team qualifies under rule 5.

If there are still open spots after all that, then any team can be selected by a BCS bowl if it:

  • Has 9 wins against I-A opponents (teams may count one I-AA win toward that total), and is rated in the top 14 of the BCS standings, or
  • Is a non-AQ conference champion and meets the qualification standard in #3, but was not the highest-rated team to do so.

    If, because of conflicts with the two-team-per-conference limit, there are not enough teams in the at-large pool to fill the bowl openings, the pool will be expanded down the BCS standings by four teams. Teams must still have nine wins to be considered. If that still fails, the process will be repeated until the bowls are filled.

    Note that for independents not named Notre Dame, the only way to automatically qualify is to finish #1 or #2.

    Also, the #3 provision only applies to the champions of the non-AQ leagues. That means, for example, if TCU were to finish 11-1 with its only loss coming to Utah, and Utah finished 8-4 overall but 8-0 in Mtn West play, then Utah, not TCU, would be the conference champion, and TCU could not automatically qualify under rule #3 no matter how highly it is ranked.
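
    As a rough illustration, here is a minimal sketch of the rule #3 test. It assumes rank is a non-AQ champion's BCS rank and aq_champ_ranks holds the BCS ranks of the six AQ conference champions; per the note above, it is only ever applied to a conference champion.

        def non_aq_champ_qualifies(rank, aq_champ_ranks):
            # Automatic berth under rule 3: ranked in the top 12, or ranked in
            # the top 16 and ahead of at least one AQ conference champion.
            # Only the highest-rated non-AQ champion passing this test gets
            # the automatic spot.
            return rank <= 12 or (rank <= 16 and rank < max(aq_champ_ranks))

        # Hypothetical standings: a non-AQ champion ranked 15th qualifies
        # because two AQ champions are ranked 18th and 22nd.
        print(non_aq_champ_qualifies(15, [3, 7, 9, 11, 18, 22]))   # True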

    OK, we've picked the teams, now which bowl do they play in?
    At this point, we have two groups of teams: those that have automatically qualified and a pool of at-large candidates. Here is how they get assigned to the bowls.

    1. The top two teams in the BCS rankings play in the national title game.
    2. The BCS bowls get their conference champion tie-ins. The tie-ins are: Big Ten and Pac 10 champions to the Rose Bowl, SEC champion to the Sugar Bowl, Big 12 champion to the Fiesta Bowl, and the Orange gets the ACC champion. The Big East champion is not tied to a specific BCS game.
    3. If the #1 team is the champion of a conference tied to one of the bowl games, that bowl gets to choose a replacement for that team.
    4. If the #2 team is the champion of a conference tied to one of the bowl games, that bowl gets to choose a replacement for that team.

    A bowl choosing a replacement for a tie-in lost to the national title game is not required to choose another team from its tie-in conference. Also, if both the #1 and #2 teams need to be replaced in tie-in bowl games, the bowl choosing a replacement for the #1 team may not select a team from the same conference as the #2 team without the permission of the tie-in bowl for that conference.

    For the four-year period beginning in 2010, the first time the Rose Bowl loses one of its anchor teams to the BCS title game, and a non-AQ team automatically qualifies for selection, and that team is not in the title game itself, the Rose Bowl is required to take that team.

    At this point, seven of the ten spots will be assigned. The selection order for the other three is the following.

    1. The bowl that will be played fourth that year.
    2. The bowl that will be played third that year.
    3. The bowl that will be played second that year (right after the Rose Bowl).

    For the 2009 season, the order is Orange, Fiesta, Sugar.

    Once all the games are set, the BCS folks can still do some shifting around to try to get more desirable matchups. For example, they may try to avoid regular season rematches or rematches of last year's bowl games. Also, TV may have something to say here. Note that this sort of flexibility has always been an option, but it has never been used.

    What is the formula?
    The current version uses three basic factors:

  • Point total in the Harris Interactive poll.
  • Point total in the coaches' poll.
  • Ranking in the six selected computer ranking systems, after throwing out the best and worst ranking for each team.

    In the polls, a team's score is its point total divided by the best possible point total for that poll. In 2008, there were 114 Harris voters and 61 coaches voting, which means the best possible score in the Harris poll was 2850 (114 voters x 25 points for a first-place vote) and the best possible score in the coaches' poll was 1525.

    The four remaining computer rankings for each team are treated like voters in a mini-poll. That means the team ranked #1 in a computer ranking gets 25 points, the #2 team gets 24, and so on, down to one point for the #25 team. Each team's four computer scores (after tossing the best and worst) are added and divided by 100 (the best possible score) to give the computer average.

    Then, the three numbers are averaged for the total BCS score, with higher being better.

    Here is an example of how to calculate the BCS ratings:

    • In the Harris poll, Purdue has 1556 points. That was good enough for the #10 ranking, but that doesn't matter. Their score for the Harris poll part of the formula is 1556/2850, or .5460.
    • In the coaches' poll, the Boilers have 664 points, which is then divided by 1525 to give a score of .4354.
    • Purdue has computer rankings of 3, 4, 4, 6, 9 and 10. When you throw out the best and the worst, you are left with 4, 4, 6 and 9. Those rankings are worth 22, 22, 20 and 17 points respectively in the mini-poll, which adds up to 81 points. That is then divided by 100 for a total of .81.
    • The Boilermakers' final BCS score then is the average of those three numbers, or (.5460 + .4354 + .81) / 3 = .5971.
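
    Here is a minimal sketch of that calculation in Python, using the Purdue numbers above. The voter counts (114 Harris, 61 coaches) are the 2008 figures quoted earlier; swap in the current ones as needed.

        def poll_pct(points, voters):
            # Poll component: point total divided by the best possible total
            # (number of voters x 25 points for a first-place vote).
            return points / (voters * 25)

        def computer_pct(ranks):
            # Computer component: drop the best and worst ranking, convert the
            # rest to mini-poll points (#1 = 25 ... #25 = 1; a team outside a
            # computer's top 25 would get zero here), then divide by the best
            # possible total of 100.
            trimmed = sorted(ranks)[1:-1]
            return sum(max(26 - r, 0) for r in trimmed) / 100

        def bcs_score(harris_pts, harris_voters, usa_pts, usa_voters, comp_ranks):
            # Average of the three components; higher is better.
            return (poll_pct(harris_pts, harris_voters)
                    + poll_pct(usa_pts, usa_voters)
                    + computer_pct(comp_ranks)) / 3

        # Purdue: Harris 1556 of 2850, coaches 664 of 1525, computers 3, 4, 4, 6, 9, 10.
        print(round(bcs_score(1556, 114, 664, 61, [3, 4, 4, 6, 9, 10]), 4))   # 0.5971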

    Note that every game counts fully in every part of the BCS formula. One of the bigger myths is that championship games don't count, or duplicate opponents don't count, but that is not true.

    When is the first official release?
    In 2009, the first official release was October 18th.

    What happens if there is a tie for second?
    The tiebreaker is as follows.

  • Head-to-head.
  • Result against highest-ranked common opponent in the BCS standings.
  • BCS rating using all six computer rankings.
  • Flip the cosmic coin.
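
    The procedure is a simple fall-through: each rule is tried in order, and the first one that can separate the teams decides. A minimal sketch, with hypothetical rule functions standing in for the real data:

        import random

        def break_tie(team_a, team_b, rules):
            # Apply each tiebreaker in order; the first rule that can separate
            # the teams decides. If none can, flip the "cosmic coin."
            for rule in rules:
                winner = rule(team_a, team_b)
                if winner is not None:
                    return winner
            return random.choice([team_a, team_b])

        # Hypothetical case: the teams never met, but Team A beat the
        # highest-ranked common opponent while Team B lost to it.
        rules = [
            lambda a, b: None,   # head-to-head: the teams did not play
            lambda a, b: a,      # common-opponent result favors Team A
            lambda a, b: None,   # six-computer BCS rating (never reached)
        ]
        print(break_tie("Team A", "Team B", rules))   # Team A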

    Which computer rankings are being used?
    • Jeff Anderson-Chris Hester
    • Richard Billingsley
    • Wes Colley
    • Kenneth Massey
    • Jeff Sagarin
    • Peter Wolfe
    What do you know about the different computer rankings?
    Not a whole lot. Most of the formulas are proprietary. Some are more forthcoming about what goes in than others. All of the systems use the same basic set of data (except where noted): Date of game, location of game, who played and who won. What distinguishes them is what they do with the data, how much they weigh certain factors, and what set of teams they rank.

    None of the rating systems consider margin of victory.

    Unless otherwise noted, all publish ratings from the beginning of the season and therefore have some prior season bias at least early on. In those systems, at some point, the prior season data is no longer relevant and each season stands on its own.

    • Jeff Anderson-Chris Hester
      Rates D1A teams only. Strength of schedule factors in conference strength of opponents, which is based on how conference teams do in non-conference play. It also appears to give weight to how a team performs against better opposition. Does consider game location, but not the date. Does not publish until after the 5th week.

    • Richard Billingsley
      Rates D1A teams only. Carries a team's rank over from previous year and values early part of season more highly. Also gives slight emphasis to recent performance. If a team does not play, its raw rating (as opposed to ranking) does not change that week. If a team wins, it goes up and if it loses, with rare exception, it goes down. Has a detailed explanation on his site, although he does not provide his formula. Game location given "very minor" consideration.

    • Wes Colley
      Rates D1A teams only, plus provisional 1A teams (like Troy St in 2001). His ratings only consider games between I-A opponents, although in 2007, he made a change to account for games against I-AA foes by grouping I-AA teams and treating the groups like a I-A team. Guidelines for forming those groups are listed on his site, but he doesn't always follow them precisely. Publishes his formula on his website, but you need to be a math geek to understand it. Publishes ratings at the beginning of the season, but uses no prior season data. Everyone starts at 0.5. Game location and date are not considered.
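
      For the curious, here is a minimal sketch of the Colley Matrix method as it is published. This is the general method, not Colley's exact BCS code, and it skips the I-AA grouping described above.

          import numpy as np

          def colley_ratings(teams, games):
              # teams: list of team names; games: list of (winner, loser) pairs.
              # Everyone starts at 0.5; ratings come from solving C r = b, where
              # C[i][i] = 2 + games played, C[i][j] = -(games between i and j),
              # and b[i] = 1 + (wins - losses) / 2.
              idx = {t: i for i, t in enumerate(teams)}
              n = len(teams)
              C = 2.0 * np.eye(n)
              b = np.ones(n)
              for winner, loser in games:
                  w, l = idx[winner], idx[loser]
                  C[w, w] += 1
                  C[l, l] += 1
                  C[w, l] -= 1
                  C[l, w] -= 1
                  b[w] += 0.5
                  b[l] -= 0.5
              return dict(zip(teams, np.linalg.solve(C, b)))

          print(colley_ratings(["A", "B", "C"], [("A", "B"), ("A", "C"), ("B", "C")]))
          # approximately {'A': 0.7, 'B': 0.5, 'C': 0.3}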

    • Kenneth Massey
      Rates all NCAA and NAIA teams. Starts everyone at zero and starts publishing at the beginning of the season. The formula does not consider home-field advantage or game date. Massey provides a description of his MOV-based system on his site, but we can't be sure how much, if any, of that description applies to his BCS ratings.

    • Jeff Sagarin
      Rates all Division I teams, both I-A and I-AA. The BCS will not be using the ratings Sagarin is famous for, but rather a rating system he calls "Elo Chess," which does not include MOV. Presumably, he named it "Elo Chess" because it is based on the rating system used for chess players developed by Arpad Elo. Home field advantage is considered.
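
      Since Elo Chess itself is proprietary, all we can show is the standard Elo update it is named after. The K-factor and home-field bump below are illustrative guesses, not Sagarin's values.

          def elo_update(home_rating, away_rating, home_won, k=32, home_edge=65):
              # Expected score for the home team from the logistic Elo curve,
              # shifted by an assumed home-field edge; only win/loss matters,
              # never the margin of victory.
              expected = 1 / (1 + 10 ** ((away_rating - (home_rating + home_edge)) / 400))
              delta = k * ((1.0 if home_won else 0.0) - expected)
              return home_rating + delta, away_rating - delta

          print(elo_update(1500, 1500, home_won=True))   # home team gains about 13 points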

    • Peter Wolfe
      Rates all NCAA and NAIA teams. Does not publish rankings until the week of the first release. Rankings based on actual outcome vs probability of that outcome occurring. Game location is a factor.
    How does this thing really work?
    The BCS is a tough system to beat because of its dependence on the polls. Tulane went 11-0 in 1998, but because they were not highly regarded before the season started, and because they played in what was considered a weak conference, they had no shot at playing in the title game. If a team is not in the top 4 in each poll (Harris and Coaches) at the end of the season, there is no realistic chance to finish in the top 2 in the BCS.

    However, the polls are not the biggest factor. Losing is. That is because a loss will negatively impact the polls for sure and likely also the computer rankings.

    The computer rankings are less of a factor than the polls, but they can still overrule the polls if the point totals are close.

    The computer ratings are also more of a mystery because many are proprietary, so exactly how they are computed is unknown. Each poll is a consensus of the opinion of its voters. That is 113 Harris voters and 62 coaches, so it can be said that the polls are a consensus of 175 opinions. Each computer rating, though, is basically the individual programmer's opinion. That means that the computer ratings piece of the BCS is a consensus of six opinions. And the "best" and "worst" opinions do not count. So it can be said that Richard Billingsley's opinion, for example, is more important than any one writer's or any one coach's. Is that a good idea? I think it is fine, but draw your own conclusion.

    The BCS is between a rock and a hard place. The good thing about giving this much power to the pollsters is that it increases the chance that the fans will get what they want each year, which is the #1 and #2 teams in the polls playing each other for the title.

    Note that I did not say the two best or most deserving teams playing each other, which brings me to the bad part about giving this much power to the voters. The polls do a terrible job of measuring that. Just about all the voters care about is who has the longest winning streak, and they are biased by preseason expectations.

    There is also an ethical problem with poll voters being given this much power. The coaches clearly have a conflict of interest, since it is their programs that will benefit from the distribution of all this money. What's to stop, for example, the voting coaches from the non-AQ schools from voting up the standard-bearer among their group for any given season as high as possible in an effort to get them into the $15M game?

    The coaches have shown that they are not above messing around at the top of their poll. In 2001, after the Big XII title game, Colorado was still right behind Nebraska in the coaches' poll. However, after a week of hand-wringing in the media over the possibility of the Huskers playing for the national title without winning their conference, the coaches switched their votes the following week in an effort to affect the outcome. It almost worked. Note that after the bowls, in which both NU and CU got creamed, Nebraska was back ahead of the Buffs.

    In 1997, we have the legendary example of the coaches switching their #1 votes from undefeated Michigan to undefeated Nebraska after the bowls so as to provide a lovely parting gift to Cornhusker coach Tom Osborne, who had announced his retirement.

    The writers had ethical concerns as well. The strictest standard of journalism ethics states that as a reporter, you report news, not make it. The reason the BCS formula doesn't simply use the polls and nothing else is that the AP objected to being given that much power because of those very concerns. With every vote counting now, it was possible that one writer could be the person who decides which school gets the $15M bonus in a close enough race. In fact, that became very evident at the end of the 2004 season in the race between Cal and Texas for the fourth spot in the BCS rankings and an automatic at-large bid. Voters were deluged with e-mail accusing them of biases and other nasty things. That's why the AP dropped out after 2004.

    The polls have always been accused of having a geographic bias also, though I have not studied it personally.

    When will we get a playoff?
    The BCS has a TV deal through 2009 and the Rose Bowl has one through 2013, so we will probably have to wait until at least then. I do not think we will see one until the non-BCS bowls start to dry up and go away. I am not talking about bowls like the New Orleans Bowl or the Las Vegas Bowl, but more like the Capital One (formerly Citrus) or Outback bowls. Right now, there is no incentive for the big schools to create one. They get just about all their bowl-eligible teams into postseason football. There is not a playoff format that will put eight SEC teams (for example) into the postseason.

    Also, right now, they have this big pile of money from the BCS that they share among themselves. If there is a playoff, the pile of money might be bigger, but the wealth would get shared more evenly. Why would the big schools go for that?

    There are several economic and political challenges to creating a playoff besides the one I just mentioned. The biggest political roadblock is that university presidents do not want one. Since the presidents run the NCAA, it is going to be a pretty hard sell to get one going. There has been no indication that the presidents are softening on this issue.

    I think it is more likely that we will go back to the old system, where the Big Ten and Pac 10 champions played in the Rose Bowl no matter what, than see the creation of any sort of true playoff.

    Do you want a playoff?
    I have mixed emotions about it. The purist in me wants to see all the conference champions (at least) get a shot to settle it on the field. The Purdue fan in me realizes that my team will not make the playoffs too often, so that takes away some of the interest for me, personally. I kind of like the idea of my team getting to play some postseason football at 6-5.
    What do you think of my playoff proposal?
    Late in 2001, I was getting so many playoff proposals e-mailed to me that I decided I was going to stop commenting on them. That is, until someone sends me one that has an answer for all the political, economic and logistical roadblocks that prevent us from having one right now. Feel free to keep sending them to me, but do not expect a response.
    How do I read the BCS Ratings page?
  • Rank - BCS Ranking.
  • School - School name.
  • W-L - Team record.

    BCS Data

  • BCS - BCS Score, which is the average of the other three factors.
  • HARPct - Harris poll point total divided by best possible score.
  • USAPct - Coaches' poll point total divided by best possible score.
  • CRPct - Computer ranking score (see example for calculation details).

    Polls

  • HAR (Rank) - Harris Interactive poll point total, with ranking in parentheses.
  • USA (Rank) - USA Today coaches' poll point total, with ranking in parentheses.

    Computer Rankings
    All rankings are relative to other Division I-A teams.

  • Avg (Rank) - Average of the computer rankings after removing the best and worst ranking for each team. Since not all of the rating systems begin publishing at the beginning of the season, if fewer than five rankings are available, no rankings are removed before averaging. The ranking among computer averages is in parentheses.
  • AH - Anderson-Hester.
  • BIL - Richard Billingsley.
  • COL - Wes Colley.
  • MAS - Kenneth Massey.
  • SAG - Jeff Sagarin.
  • WLF - Peter Wolfe.

    The column heading for each ranking is a link to that ranking's site.

    Once at least five computers are publishing data, the best ranking for each team is in blue and the worst is in red.
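
    As a small sketch of how the Avg column handles early-season data (illustrative only, not the official code):

        def computer_average(ranks):
            # Average the available computer rankings; drop the best and worst
            # only once at least five computers have published.
            ranks = sorted(ranks)
            if len(ranks) >= 5:
                ranks = ranks[1:-1]
            return sum(ranks) / len(ranks)

        print(computer_average([3, 4, 4, 6, 9, 10]))   # 5.75 (best and worst dropped)
        print(computer_average([3, 4, 6]))             # 4.33... (too few to trim)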