
Monday, July 9, 2012

The Method to the Madness that is Predicting the CrossFit Games

Well, we're now less than 96 hours from the start of the 2012 CrossFit Games. As my old high school football coach would say, "The hay is in the barn." Sure, HQ decided to release about half of the workouts early, but for all intents and purposes, there's not much left the athletes can do to prepare. And the same goes for me: there's really nothing left to learn about these athletes between now and Friday morning, so I suppose that means it's prediction time. To quote the legendary Ronnie Coleman, "Ain't nothin' to it but to do it!"

So far, this blog has been a labor of love, but for these final predictions, I really had to put the emphasis on the labor. Despite the fact that I had done plenty of analysis on this year's Open and Regional results, there was really no way to get a good feel for how to predict the CrossFit Games without delving further back into the past. That meant getting my hands on the 2011 data, and by getting my hands on it, I mean doing tons of copying and pasting from the old Games2011 web site. Certainly, I could have used multiple years of data, but things would have quickly become more complicated: prior to 2011, the Regionals were not standardized (and there was no Open), which would have made any sort of prediction awfully difficult.

My initial thought was that a few different statistics about each athlete leading into the Games might do a decent job of modeling the actual results. They were: Regional results, Open results, prior Games experience and age. All of these were available online, and I thought each could help out in predicting how an athlete would perform in Carson. Since I already had a template in place, I went ahead and made some adjustments to the 2011 Regional results to account for some athletes competing weeks after other athletes. I also could have tried to adjust the data for weather conditions (I seem to recall NorCal having some bad storms), but that was too difficult for the time I had available and probably not worth the effort. No adjustments were made to the Open results. For prior Games experience, I counted any experience in the Games after 2007 (the level of competition was simply too low and the atmosphere too different back then for the experience to really affect someone in 2011). Age was taken from the Games2011 web site. I'll go ahead and state now that Rob Orlando, Mikko Salo, Deborah Cordner and Helga Tofadottir were all excluded from the analysis entirely because they did not continue past the first event (all due in one way or another to the swim event).

[Quick note here: I am NOT taking into account the events that have been released already for 2012. First of all, half of the events are still unknown, and second, I think it is extremely difficult to say with any certainty how ALL of the athletes will do on each event. Will Dan Bailey do well on a 300m shuttle sprint? I assume yes because I happen to know he was a collegiate 400m runner. Will Austin Stack do well? I know virtually nothing about him other than his Regional and Open results (which include no running), so for me to even guess is pretty silly. And frankly, doing that much research on each athlete is beyond the scope of what I'm going for here.]

As I discussed in my last post, the Games use a different scoring system than Regionals or the Open. For now, I won't go into my opinions on the system itself, but this was something I needed to account for in my analysis. To do this, I ranked all the Games athletes in each event of the Open and Regionals, then converted those rankings to a score based on the new scoring system. The actual Games results were also scored this way, but there was a catch: because of the cuts, we did not have complete scores for athletes who didn't finish in the top 12. I felt it would be inappropriate to simply give these athletes 0 points for each of the events they missed. If I did, the 13th-place athlete for men (Patrick Burke) would have finished with 406 points, compared to 625 for 12th-place Zach Forrest. Did Forrest really outperform Burke by more than 50%? No.
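For the curious, here's roughly what that rank-to-points conversion looks like in code. This is just a minimal Python sketch for illustration, and the point values in it are placeholders, not the actual Games scoring table:

    # Minimal sketch: convert per-event finishes to points under a
    # descending points table. The values here are placeholders only,
    # not the real Games table.
    POINTS_TABLE = {1: 100, 2: 95, 3: 90, 4: 85, 5: 80}  # ...and so on
    FLOOR_POINTS = 10  # placeholder value for ranks below the table

    def event_points(rank):
        """Points awarded for a single-event finish."""
        return POINTS_TABLE.get(rank, FLOOR_POINTS)

    def total_points(event_ranks):
        """Sum an athlete's points across all of his/her events."""
        return sum(event_points(r) for r in event_ranks)

    # Example: an athlete who finished 1st, 4th and 12th across three events
    print(total_points([1, 4, 12]))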

MATH ALERT! (skip ahead if you don't care how I solved this issue and just want to see the damn predictions already) My solution was this: First, for each event after a cut, I took the ratio of the average points scored on that event by the athletes who did complete it to their average score on the prior events. The reason is that after the cut, the athletes were guaranteed to finish higher (on average) than before, so in order to assign points to the athletes who were cut, I needed to account for this. For each athlete who was cut, I multiplied that ratio by his/her average score in the events he/she did finish. I re-calculated this after every cut. So for instance, 47th-place Danie Du Preez received an average of about 16 points across the final events, despite the fact that 47th place in each event is only worth 10 points.
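If the code is clearer than the words, here's a minimal Python sketch of that adjustment. The data structure is hypothetical: each athlete maps to a list of per-event points, with None for events missed after a cut.

    def impute_cut_athletes(scores, event_idx):
        """Estimate points on event event_idx for athletes cut before it."""
        # Athletes who actually completed this post-cut event.
        survivors = [pts for pts in scores.values() if pts[event_idx] is not None]

        # Average points the survivors scored on this event...
        post_avg = sum(pts[event_idx] for pts in survivors) / len(survivors)
        # ...compared to their average score on the prior events.
        prior_avg = sum(sum(pts[:event_idx]) / event_idx for pts in survivors) / len(survivors)
        ratio = post_avg / prior_avg

        # Each cut athlete gets that ratio times his/her own average
        # on the events he/she did finish.
        imputed = {}
        for athlete, pts in scores.items():
            if pts[event_idx] is None:
                finished = [p for p in pts[:event_idx] if p is not None]
                imputed[athlete] = ratio * (sum(finished) / len(finished))
        return imputed

Run that once for each post-cut event, re-computing the ratio every time, and that's all the adjustment amounts to.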

OK, now that the data is set up, let's take a look at what we found out. First, for the men, it became fairly clear that prior Games experience does matter. Here is a table showing the average rank for prior competitors vs. non-prior competitors, grouped by how the athletes finished overall at Regionals:

[Table: men's average Games rank, prior Games competitors vs. rookies, grouped by overall Regional finish]
Aside from the bottom bracket, where we only had one prior Games competitor (Du Preez), the prior Games competitors fared better than the rookies who had similar Regional performances. However, for women, the relationship was non-existent. Here is the same chart for women:

[Table: women's average Games rank, prior Games competitors vs. rookies, grouped by overall Regional finish]
Here we see that there was virtually no difference between those with or without Games experience, after accounting for Regional rank. Although I had assumed Games experience would help us in our predictions, I simply had to ignore it for women.
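For what it's worth, those comparison tables are nothing more than grouped averages. Here's a rough pandas sketch of how one could be built; the rows and bracket labels below are made up, not the actual data:

    import pandas as pd

    # Made-up rows: one per 2011 Games athlete.
    df = pd.DataFrame({
        "regional_bracket": ["1-10", "1-10", "11-20", "11-20", "21-30"],
        "prior_games":      [True,  False,  True,    False,   False],
        "games_finish":     [5,     12,     15,      25,      30],
    })

    # Average Games finish, split by Regional bracket and prior Games experience.
    table = (df.groupby(["regional_bracket", "prior_games"])["games_finish"]
               .mean()
               .unstack())
    print(table)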

More math-y stuff, proceed with caution! The effect of age was much more difficult to see with a chart like the above one, but I decided to include it in my regressions to see if it was a significant variable. For the men, I used a four-variable linear regression, with Regional points (Games scoring), Open points (Games scoring), prior Games experience (yes/no) and age. It turned out that all four variables were significant and in the direction I expected - better Regional results, better Open results, prior experience and youth all predicted better Games performance. But after some thought, I decided to try tweaking the age variable. Would we really expect an athlete to be worse at age 22 than at age 21? I wouldn't. So I messed around with different age cut-offs to see if one would be a better indicator. I ended up settling on 26, meaning the age variable was actually the athlete's age beyond 26 (or 0, if the athlete was 26 or younger). Age 26 seemed reasonable and gave me the most significant t-value on the regression.
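To give a sense of what that regression looks like in practice, here's an illustrative Python sketch using statsmodels. The numbers below are placeholders rather than the actual athlete data:

    import numpy as np
    import statsmodels.api as sm

    # Placeholder data: one entry per male Games athlete (not real results).
    regional_pts = np.array([540, 480, 610, 450, 500, 470, 560, 430])
    open_pts     = np.array([520, 470, 590, 430, 510, 460, 540, 420])
    prior_games  = np.array([1,   0,   1,   0,   1,   0,   1,   0])   # 1 = competed in 2008-2010
    age          = np.array([24,  29,  31,  22,  27,  33,  25,  28])
    games_pts    = np.array([600, 450, 640, 400, 520, 430, 580, 410])  # response: Games points

    # Age enters as years beyond 26 (0 if the athlete is 26 or younger).
    age_over_26 = np.maximum(age - 26, 0)

    X = sm.add_constant(np.column_stack([regional_pts, open_pts, prior_games, age_over_26]))
    fit = sm.OLS(games_pts, X).fit()
    print(fit.params)    # coefficients for the constant plus the four predictors
    print(fit.tvalues)   # t-values used to judge significance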

But again, for women, no such luck! No matter what age cut-off I chose, the regression did not show any negative effects of older age - in fact, if anything, older athletes tended to do better (think Cheryl Brost finishing 7th at age 40 or Annie Sakamoto finishing 10th at age 35). I decided to ignore this variable altogether. My theory on this is that for women, the Games come down to who has the strength for the heavy lifts and the skills to perform the difficult gymnastic movements. A few of these movements or lifts (heavy front squats come to mind from last year) can really eliminate many women from contention, regardless of age. On the men's side, there are fewer athletes who struggle that much with any one movement or lift.

So that's the background. For men, the model I used included Regional results, Open results, age and prior Games experience. For women, I simply used Regional and Open results. And now, for the absolutely 100% guaranteed predictions for this year's Games, see my next post...
