
Frequently Asked Questions

How do these results compare with other ranking systems?

The rankings for the top ten countries (the test nations) compare well with other ranking systems, such as the official ICC ODI rankings and those from AQB, Rediff, Amul and Herman, some of which include nations that have played games in the World Cup or other official ODIs.

Why are they only updated quarterly?

This is mainly a time constraint on my part. It is easier to enter results into a database in a block. It also takes some effort to track down results for games between some of the smaller teams.

Which games and teams do you include in the ratings?

Games should be between teams that are effectively the strongest teams available to play for that country (or group of countries), so games involving 'A' teams, provincial teams or teams with 'guest' internationals are not included.

What is the greatest number of points a team has gained in one game?

302 - In 2003, Sri Lanka rolled England for 88, then scored the runs in 13.5 overs. England had contributed 429 points to the game pool and Sri Lanka 312, but the Lankans walked away with 614 points to England's 127.

Other stats include:

Most comprehensive victory - In the 2004 Asian Championships, Nepal rolled Iran for 29, then scored the runs in 11 balls, allowing them to claim 98% of the points on offer - although with only 34 points in the game pool, rounding meant Nepal's 33 points was technically 97%.

Greatest percentage increase in points - Again in the 2004 Asian Championships, Bahrain entered their first game with a nominal ranking of 100 points against the more seasoned Hong Kong team with 774 points. Bahrain won the game by 24 runs allowing them to increase their rating by 44 points (or 44%).

Highest stakes games - In February and March 2005, Australia and New Zealand played a five-match series, with Australia entering on 5016 points and New Zealand on 4238. In theory every game was played for the same number of points, although rounding meant some games had 926 points on offer while others had 925. Australia won all five games and finished with 5498 points to New Zealand's 3756.
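The stats above all follow from the same zero-sum pool mechanics: both teams pay points into a pool before the game, and the pool is divided between them afterwards. The actual XODI stake and split formulas are not given in this FAQ, so the sketch below is purely illustrative: `settle_pool` is a hypothetical helper, with the winner's share supplied as an input, and the figures are taken directly from the 2003 Sri Lanka v England example.

```python
# Illustrative sketch of the zero-sum pool mechanics described above.
# settle_pool and the share_a fraction are assumptions for illustration;
# the stakes and payouts come from the Sri Lanka v England example.

def settle_pool(stake_a, stake_b, share_a):
    """Both teams pay their stake into a pool; each takes back a share.

    share_a is team A's fraction of the pool (here, supplied by hand)."""
    pool = stake_a + stake_b
    payout_a = round(pool * share_a)
    payout_b = pool - payout_a          # points are conserved
    return payout_a, payout_b

# 2003: Sri Lanka staked 312 and England 429 (figures from the FAQ).
sl_payout, eng_payout = settle_pool(312, 429, share_a=614 / 741)

print(sl_payout, eng_payout)            # 614 and 127
print(sl_payout - 312)                  # Sri Lanka's net gain: 302
```

Because payouts always sum back to the pool, a team's gain is exactly its opponent's loss, which is why the Trans-Tasman series figures above (5016 + 4238 before, 5498 + 3756 after) balance.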

Will you be doing a test ranking system?

Not initially. The XODI ratings only work because tournaments such as the World Cup and Champions Trophy pit non-test teams against test teams on a semi-regular basis. At present, there is almost no crossover between test teams and non-test teams when it comes to four-innings cricket, so there is nothing meaningful to base the rankings on. If that situation changes, then a "test" ranking system would be considered.

How were the initial rankings determined?

Prior to the rankings starting, a number of World Cup and ICC Trophy tournaments were analysed using an iterative process. Every team was given the same number of points at the start of the tournament, the results were plugged in to see how they affected the rankings, and the modified rankings were then fed back in as the starting numbers. This process was repeated until there was little variation between the starting numbers and the final numbers.
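The loop just described is a fixed-point iteration, and can be sketched as below. The real XODI update rule is not published in this FAQ, so the sketch substitutes a simple Elo-style exchange (the `k` factor and logistic expected-score formula are assumptions) purely to show the shape of the process: seed every team level, replay the results, feed the finals back in as seeds, and stop when the numbers barely move.

```python
# Sketch of the iterative seeding process. The Elo-style update in
# replay() is an assumed stand-in for the unpublished XODI formula;
# only the outer fixed-point loop mirrors the process in the FAQ.

def replay(seeds, results, k=32):
    """Replay a list of (winner, loser) results from the given seeds."""
    ratings = dict(seeds)
    for winner, loser in results:
        # Winner's expected score under a logistic model (assumption).
        expected = 1 / (1 + 10 ** ((ratings[loser] - ratings[winner]) / 400))
        delta = k * (1 - expected)
        ratings[winner] += delta
        ratings[loser] -= delta
    return ratings

def iterate_seeds(teams, results, start=1000.0, rounds=50):
    seeds = {t: start for t in teams}   # every team starts level
    for _ in range(rounds):
        finals = replay(seeds, results)
        if all(abs(finals[t] - seeds[t]) < 0.5 for t in teams):
            break                       # little variation: done
        seeds = finals                  # feed finals back in as seeds
    return seeds

# Hypothetical mini-tournament, for illustration only.
results = [("Australia", "England"), ("Australia", "Kenya"),
           ("England", "Kenya"), ("Australia", "England")]
seeds = iterate_seeds(["Australia", "England", "Kenya"], results)
```

Whatever the underlying update rule, the output ordering reflects the results: in the toy data above, Australia ends up seeded above England, who ends up above Kenya.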

By comparing the results of teams that played in both the ICC Trophy and World Cup, the two sets of numbers were calibrated roughly to give a relative scale. These modified numbers were then graphed against each team's percentage of games won in the World Cup and ICC Trophy and the resulting lines of best fit used to give initial seeds for each team.

While this worked well for the top teams, it became clear the ratings were inconsistent for new teams, who were given a starting score of 200 points if they were an Associate member of the ICC or 100 if they were an Affiliate member. In September 2006, the system was modified so that once a team has played at least five games, its initial rating is recalculated using an iterative process similar to the one used to calculate the original initial rankings. This recalculation is made at the next update after the team meets the five-game criterion, and from then on its initial rating is fixed at that recalculated amount. The effect is to accelerate teams towards their 'true' rating rather than having a lot of teams bunched around the 100-point mark, as was previously the case.

If you have any questions or comments about the rankings, send us an email.


Copyright © 2005 Yahoo! Inc. All rights reserved.