One of the unsung heroes of the Katowice Major’s updated format is the player-selected seeding system, a setup in which the teams rank each other by perceived strength and the aggregated votes determine the seeds. So far, it seems like a perfectly viable alternative to other popular seeding options – it’s accurate, and it gives us great insight into how the contenders themselves view their competition.
CS:GO is a surprisingly tough game to predict – just ask the desk analysts. It makes sense why they treat their guesses as a bit of a game: boiling down the complex machinery of a Counter-Strike match into a binary yes-or-no decision misses the point somewhat. This is perhaps why it’s all the more interesting to glean this sort of insight from the very players who duke it out on the server, especially when their skin is truly in the game: the Katowice Major’s format changes featured a player-selected seeding system, an ESL innovation that premiered at IEM Chicago. As per the official blog post,
“All sixteen teams in this particular stage will be asked to rank their fifteen opponents based on skill. Individual team votes that are outside of the expected range of the spread will be discarded in order to eliminate anomalies (e.g. a team giving a team #15 when most other teams rated them as #1) as a precaution to rule out collusion.
“Logical alternatives are entirely algorithmic or a list derived from third-party expert opinion. The latter clearly would suffer from legitimacy issues and would be tough to universally implement in an open circuit. Meanwhile, the former is also not without faults: for instance, HLTV’s stats can be quite oversensitive to changes at the lower ranking levels (where the low weight given to their regularly attended events can be really skewed by one successful qualifying run). As an example, Vitality rose seven spots for their successful Major run (19th to 12th) and are now rated higher than Fnatic.”
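ESL has not published the exact aggregation algorithm, but the rules quoted above – each team ranks its fifteen opponents, anomalous votes are discarded, and the remaining votes determine the seeds – can be sketched roughly as follows. Everything here (the median-based outlier filter, the tolerance value, the function and variable names) is an illustrative assumption, not ESL’s actual implementation.

```python
# Hypothetical sketch of the vote-aggregation step described in ESL's blog
# post. The "expected range of the spread" rule is modelled here as dropping
# any vote that strays from a team's median received rank by more than a
# chosen tolerance, then averaging what remains.
from statistics import mean, median

def aggregate_seeding(votes: dict[str, dict[str, int]], tolerance: int = 5) -> list[str]:
    """votes[voter][opponent] = rank (1 = strongest) that `voter` gave `opponent`.

    Returns the teams ordered by their filtered average received rank.
    """
    teams = set(votes)
    averages = {}
    for team in teams:
        # Collect every rank this team received (teams never rank themselves)
        received = [ballot[team] for voter, ballot in votes.items()
                    if voter != team and team in ballot]
        centre = median(received)
        # Discard anomalous votes (e.g. a lone #15 where everyone else says #1)
        kept = [r for r in received if abs(r - centre) <= tolerance]
        averages[team] = mean(kept if kept else received)
    return sorted(teams, key=lambda t: averages[t])
```

A median-based filter is one natural choice here because the median itself is barely moved by the very outliers the rule is meant to catch.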
The player-selected seeding can also bring out interesting storylines in the hands of the right tournament organiser. For instance, Na’Vi somehow pegged North as the second-strongest team at IEM Chicago when no one else even had them in their top five, perhaps motivated by their upset win at Stockholm two months prior. This is also arguably the best solution for teams with recent lineup changes and little to no competitive data to show: their rivals can infer their strength from what they’ve shown in online scrims, information no other party has any real access to.
The accuracy of the predictions is astonishing: seven of the eight teams that made it out of the New Challengers Stage were in the top half of the final rankings, and even the outlier, Renegades, came in at ninth. In the New Legends Stage, the teams got six of the eight playoff qualifiers right (including ENCE at eighth!), with the Aussies once again occupying the ninth spot. The one real outlier here is NiP: the Ninjas made it to the Spodek as the eleventh seed, while NRG and BIG were the sides that failed to make the cut despite their rivals’ expectations.
There’s also another data point to consider: IEM Chicago’s 2018 edition, the first event to feature this kind of seeding system. There, the participants correctly identified four of the six teams that would make the playoffs.
It seems like the system does a surprisingly good job of highlighting potential dark horses – beyond ENCE and Renegades at the Major, three teams pegged Cloud9 as the strongest side in the qualifying stage, good for a third-place overall ranking – but, perhaps understandably, it fails to predict the underperformance of top sides.
The real surprise of the New Legends Stage predictions has to be BIG’s seventh-place ranking despite their well-publicised roster turmoil and discouraging performances since XANTARES’ arrival; similarly, the voters failed to forecast the early eliminations of Na’Vi and MiBR in Chicago.
Current HLTV rankings next to the WESG groups to show how atrociously imbalanced these groups are.
Average seed next to group name calculated based on a generous average ranking of #100 for unranked teams, except #50 for Russia and Wardell & Co.
How can this happen in 2019? pic.twitter.com/ZYE6spLTiq
— Tomi (@tomi) February 25, 2019
You could argue that it’s a bit of a self-fulfilling prophecy off the back of the seeding system – pitting the teams identified as strong against the clear minnows – but the effectiveness of such pairings is well-established in other sports. Besides, a look at the upcoming WESG finals (and some of the more farcical Majors of the past) clearly shows that the alternative is infinitely worse.
While there are legitimate arguments to be made about a purely mathematical approach or even third-party expert seeding, the current track record of the player-selected system coupled with its added storyline potential makes it perhaps the best system we’ve seen so far in CS:GO.